aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
0807.2328
|
2110288480
|
We collected mobility traces of 84,208 avatars spanning 22 regions over two months in Second Life, a popular networked virtual environment. We analyzed the traces to characterize the dynamics of the avatars' mobility and behavior, both temporally and spatially. We discuss the implications of our findings for the design of peer-to-peer networked virtual environments, interest management, mobility modeling of avatars, server load balancing and zone partitioning, client-side caching, and prefetching.
|
We now describe previous efforts in collecting avatar traces from networked virtual environments and games. @cite_13 collected a 5-hour trace of 400 players from an online game called FreeWar. @cite_12 collected a trace of 28 players from a game they developed called Orbius. The focus of their work is not on the traces themselves; rather, the traces are a way to evaluate their proposed algorithms: @cite_13 use their trace to evaluate a load balancing scheme, while @cite_12 use their trace to evaluate different interest management algorithms. Besides the traces collected from their games, both works also use randomly generated movements in their evaluations, and both observe significant differences between the results obtained with the traces and those obtained with generated movements. Their results highlight the importance of having real mobility traces for researchers to evaluate their work.
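For context, the "randomly generated movements" used as a baseline in these evaluations are typically produced by a random-waypoint generator. The sketch below is purely illustrative: the area size, speed range, and function names are assumptions of this sketch, not taken from the cited works.

```python
# Illustrative random-waypoint generator (a common synthetic mobility baseline).
# All parameters and names are assumptions made for this sketch.
import random

def random_waypoint(n_waypoints, area=(256.0, 256.0), speed=(0.5, 2.0), seed=0):
    """Yield (x, y) positions of one avatar moving from waypoint to waypoint."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    for _ in range(n_waypoints):
        tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])  # next waypoint
        v = rng.uniform(*speed)                                    # constant speed per leg
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        n_steps = max(1, int(dist / v))
        for i in range(1, n_steps + 1):                            # linear interpolation
            yield (x + (tx - x) * i / n_steps, y + (ty - y) * i / n_steps)
        x, y = tx, ty

trace = list(random_waypoint(n_waypoints=3))
print(len(trace), trace[:2])
```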
|
{
"abstract": [
"Multiplayer games played over the Internet have become very popular in the lastfew years. An interesting subcategory are the so-called massively multiplayeronline games (MMOGs) that allow thousands of player characters to share asingle game world. Such a world is usually run on a high-performance and high-availability server cluster. However, even with games that have been extensivelybeta-tested, downtimes of several hours because of hard- or software failures arenot uncommon. Downtimes, especially in the first few weeks after the release,can negatively affect the image of the game and the company that created it.Traditionally, a cluster of servers contains one virtual world of a MMOG.Such infrastructure is inflexible and error-prone. One would rather like to havea system that allows disconnecting a server at runtime while others take overits tasks. Server-based MMOGs can have performance problems if players areconcentrated in certain parts of the game world or some worlds are overpopu-lated. Thus, there is also a need for load balancing mechanisms. Peer-to-Peer(P2P) systems quite naturally support the use of load balancing.In this paper we use a structured P2P technology for the organization of theinfrastructure and thus for the reduction of downtimes in MMOGs. We splitthe game world in disjunctive rectangular zones and distribute them on differentnodes of the P2P network.Online games are an interesting challenge and chance for the future devel-opment of the P2P paradigm. A wide variety of aspects of only theoreticallysolved and especially yet completely unsolved problems are covered by this ap-plication. Security and trust problems appear as well as the need to preventcheating. The application is not as tolerant to faults as instant messaging or filesharing. Consistent data storage is a problem, decisions and transactions haveto be performed in a decentralized way. Moreover, the P2P network is not usedas pure lookup service, but more as a communication and application-specificsocial structure.The rest of this paper is organized as follows: First we discuss related work inSection 2 and give a brief introduction to P2P and MMOGs and their challengesin Section 3. Section 4 shows our approach to use structured P2P Systems forMMOGs and section 5 the evaluation with player traces from a real MMOG.Finally, Section 6 provides conclusions.",
"Broadcasting all state changes to every player of a massively multiplayer game is not a viable solution. To successfully overcome the challenge of scale, massively multiplayer games have to employ sophisticated interest management techniques that only send relevant state changes to each player. This paper compares the performance of different interest management algorithms based on measurements obtained in a real massively multiplayer game using human and computer-generated player actions. We show that interest management algorithms that take into account obstacles in the world reduce the number of update messages between players by up to a factor of 6, and that some computationally inexpensive tile-based interest management algorithms can approximate ideal visibility-based interest management at very low cost. The experiments also show that measurements obtained with computer-controlled players performing random actions can approximate measurements of games played by real humans, provided that the starting positions of the random players are chosen adequately. As the size of the world and the number of players of massively multiplayer games increases, adaptive interest management techniques such as the ones studied in this paper will become increasingly important."
],
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2122478169",
"2017232301"
]
}
| 0 |
||
0807.2328
|
2110288480
|
We collected mobility traces of 84,208 avatars spanning 22 regions over two months in Second Life, a popular networked virtual environment. We analyzed the traces to characterize the dynamics of the avatars' mobility and behavior, both temporally and spatially. We discuss the implications of our findings for the design of peer-to-peer networked virtual environments, interest management, mobility modeling of avatars, server load balancing and zone partitioning, client-side caching, and prefetching.
|
@cite_5 and @cite_26 collected traces from Quake III, a popular multi-player first-person shooter (FPS) game, and developed mobility models to describe the movement of the players. @cite_3 collected a large trace, comparable in scale to ours, of players' movements in World of Warcraft (WoW), a massively multi-player online role-playing game (MMORPG), and analyzed the dynamics of the population, players' arrival and departure rates, session lengths, player distribution, and player movements. FPS games and MMORPGs have different characteristics than NVEs. Players in fast-action FPS games tend to move around constantly. In MMORPGs, players usually engage in quests to gain levels and new abilities. Players tend to gather in a location for an event (e.g., new monsters to fight) and disperse afterwards. Players also tend to move in groups. We observed a different pattern for NVEs.
|
{
"abstract": [
"This paper proposes the Networked Game Mobility Model (NGMM), for synthesising mobility in First-Person-Shooter (FPS) networked games. Current networked game research focuses on modelling low-level aspects, such as packet inter-arrival times and packet sizes, to optimise network traffic and efficient use of gaming servers. Due to the increasing popularity of multiplayer online games, the need has arisen to develop more realistic models. NGMM is such a model that utilises application level aspects of networked game traces to statistically model FPS games. It is believed that an understanding of the application level aspect (e.g. mobility and user actions) of the network is necessary to derive the causality of increasing workloads on the servers, particularly in response to increasing online game popularity. To evaluate the performance of the model simulations, comparisons are made between the original game traces, the Random Way Point Model and NGMM. Analyses of the comparative simulation results show that NGMM is capable of closely matching actual game traces. The incorporation of application level knowledge, performance boundaries of current optimisation techniques, including dead-reckoning and interest management, are also effectively ascertained in this research. This is particularly significant as current models are unable to evaluate their impact with optimisation techniques.",
"This paper presents the design, implementation, and evaluation of Colyseus, a distributed architecture for interactive multiplayer games. Colyseus takes advantage of a game's tolerance for weakly consistent state and predictable workload to meet the tight latency constraints of game-play and maintain scalable communication costs. In addition, it provides a rich distributed query interface and effective prefetching subsystem to help locate and replicate objects before they are accessed at a node. We have implemented Colyseus and modified Quake II, a popular first person shooter game, to use it. Our measurements of Quake II and our own Colyseus-based game with hundreds of players shows that Colyseus effectively distributes game traffic across the participating nodes, allowing Colyseus to support low-latency game-play for an order of magnitude more players than existing single server designs, with similar per-node bandwidth costs.",
"Understanding the distributions and behaviors of players within Massively Multiplayer Online Games (MMOGs) is essential for research in scalable architectures for these systems. We provide the first look into this problem through a measurement study on one of the most popular MMOGs, World of Warcraft [15]. Our goal is to answer four fundamental questions: how does the population of the virtual world change over time, how are players distributed in the virtual world, how much churn occurs with players, and how do they move in the virtual world. Through probing-based measurements, our preliminary results show that populations fluctuate according to a prime-time schedule, player distribution and churn appears to occur on a power-law distribution, and players move to only a small number of zones during each playing session. The ultimate goal of our research is to design an accurate player model for MMOGs so that future research can predict and simulate player behavior and population fluctuations over time."
],
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_3"
],
"mid": [
"2083589344",
"1585981768",
"2021955144"
]
}
| 0 |
||
0807.2328
|
2110288480
|
We collected mobility traces of 84,208 avatars spanning 22 regions over two months in Second Life, a popular networked virtual environment. We analyzed the traces to characterize the dynamics of the avatars' mobility and behavior, both temporally and spatially. We discuss the implications of our findings for the design of peer-to-peer networked virtual environments, interest management, mobility modeling of avatars, server load balancing and zone partitioning, client-side caching, and prefetching.
|
Most recently, La and Pietro have independently conducted a similar study of mobility in Second Life @cite_17 . Their study, however, focuses on metrics relevant to mobile communications, such as graph-theoretic properties of the line-of-sight networks formed by the avatars, avatars' travel lengths and times, and contact opportunities among avatars. Their goal is to use the mobility traces of avatars to model human mobility for applications related to wireless and delay-tolerant networks. We, on the other hand, focus on metrics that are of interest to the systems design of NVEs.
|
{
"abstract": [
"In this work we present a measurement study of user mobility in Second Life. We first discuss different techniques to collect user traces and then focus on results obtained using a crawler that we built. Tempted by the question whether our methodology could provide similar results to those obtained in real-world experiments, we study the statistical distribution of user contacts and show that from a qualitative point of view user mobility in Second Life presents similar traits to those of real humans. We further push our analysis to line of sight networks that emerge from user interaction and show that they are highly clustered. Lastly, we focus on the spatial properties of user movements and observe that users in Second Life revolve around several point of interests traveling in general short distances. Besides our findings, the traces collected in this work can be very useful for trace-driven simulations of communication schemes in delay tolerant networks and their performance evaluation."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2953210768"
]
}
| 0 |
||
0807.1297
|
2952124658
|
In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice. A rich body of work on bipartite matching markets builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. We apply insights from this line of research into the structure of stable outcomes and their incentive properties to advertising auctions. We model advertising auctions in terms of an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) mechanism are interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences. In our model, the existence of a stable matching is guaranteed, and under a non-degeneracy assumption a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences.
|
In the marriage model, a set @math of men and a set @math of women are given, where each man and each woman is endowed with a ranked list of the members of the opposite sex. Men and women are to be matched in a one-to-one fashion. A matching is considered stable if there is no man-woman pair who would simultaneously prefer each other to their respective assigned partners. A stable matching is guaranteed to exist, and the deferred acceptance algorithm of Gale and Shapley can be used to find one. The stable matching found by this algorithm is man-optimal, in that every man prefers it to any other stable matching. Moreover, when the deferred acceptance algorithm is used, no man has an incentive to misreport his true preference order @cite_14 .
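As a concrete illustration of the deferred acceptance procedure described above, here is a minimal Python sketch of the textbook men-proposing variant; the preference lists are invented for illustration, and this is not code from the cited work.

```python
# Minimal sketch of men-proposing deferred acceptance (Gale-Shapley).
# Preference lists below are hypothetical and only illustrate the procedure.

def deferred_acceptance(men_prefs, women_prefs):
    """Return a stable matching as a dict woman -> man.

    men_prefs[m]  : list of women, most preferred first
    women_prefs[w]: list of men,   most preferred first
    """
    # rank[w][m] = position of man m in woman w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)               # men who still have to propose
    next_choice = {m: 0 for m in men_prefs}  # index of the next woman to propose to
    engaged = {}                             # woman -> tentatively accepted man

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]     # m's best woman not yet proposed to
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                   # w accepts tentatively
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])      # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free_men.append(m)               # w rejects m; he will propose again later
    return engaged

if __name__ == "__main__":
    men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
    women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
    print(deferred_acceptance(men, women))   # {'w1': 'm2', 'w2': 'm1'}
```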
|
{
"abstract": [
"This paper considers some game-theoretic aspects of matching problems and procedures, of the sort which involve matching the members of one group of agents with one or more members of a second, disjoint group of agents, ail of whom have preferences over the possible resulting matches. The main focus of this paper is on determining the extent to which matching procedures can be designed which give agents the incentive to honestly reveal their preferences, and which produce stable matches.Two principal results are demonstrated. The first is that no matching procedure exists which always yields a stable outcome and gives players the incentive to reveal their true preferences, even though procedures exist which accomplish either of these goals separately. The second result is that matching procedures do exist, however, which always yield a stable outcome and which always give all the agents in one of the two disjoint sets of agents the incentive to reveal their true preferences."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2071667058"
]
}
|
General Auction Mechanism for Search Advertising
| 0 |
|
0807.1297
|
2952124658
|
In sponsored search, a number of advertising slots are available on a search results page and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) auction being widely used in practice. A rich body of work on bipartite matching markets builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. We apply insights from this line of research into the structure of stable outcomes and their incentive properties to advertising auctions. We model advertising auctions in terms of an assignment model with linear utilities, extended with bidder- and item-specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) mechanism are interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences. In our model, the existence of a stable matching is guaranteed, and under a non-degeneracy assumption a bidder-optimal stable matching exists as well. We give an algorithm to find such a matching in polynomial time, and use it to design a truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences.
|
The assignment model @cite_25 (see also @cite_20 @cite_6 ) differs in that each player derives a certain value from being matched to each person of the opposite sex, and side payments between partners are allowed. The goal of each player is to maximize his or her payoff, which is the sum of the partner's value and the monetary payment (positive or negative) received from the partner. The set of stable outcomes is non-empty by a linear programming argument. In fact, each stable outcome corresponds to a maximum-weight matching, and player payoffs correspond to dual variables of the maximum-weight matching LP. A man-optimal outcome is guaranteed to exist, and its allocation and prices are identical to those of the VCG mechanism for maximum-weight matchings @cite_19 @cite_13 .
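For concreteness, the linear-programming argument can be sketched in standard notation; this is a generic statement of the Shapley-Shubik assignment LP and its dual, not a formulation reproduced from the cited papers. The payoffs of a stable outcome are exactly optimal dual variables.

```latex
% Primal: maximum-weight matching, where v_{ij} is the value of the pair (i,j).
\max_{x \ge 0} \; \sum_{i,j} v_{ij}\, x_{ij}
  \quad \text{s.t.} \quad \sum_{j} x_{ij} \le 1 \;\; \forall i, \qquad
                          \sum_{i} x_{ij} \le 1 \;\; \forall j.

% Dual: u_i is man i's payoff and p_j is woman j's payoff; stability amounts to
% dual feasibility u_i + p_j >= v_{ij} together with complementary slackness
% for the pairs actually matched.
\min_{u, p \ge 0} \; \sum_{i} u_i + \sum_{j} p_j
  \quad \text{s.t.} \quad u_i + p_j \ge v_{ij} \;\; \forall i, j.
```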
|
{
"abstract": [
"",
"The problem of eliciting honest preferences from individuals who must be assigned to a set of positions is considered. Individuals know that they will be charged for the positions to which they are assigned. A set of prices that provide no incentive for the individual to misrepresent his preferences is suggested. It is shown that these prices constitute an element of the optimal solution to the dual of a linear programming assignment problem. Both the optimal allocation and the prices to be charged can be derived by solving two linear programming problems once preferences have been elicited. The procedure can usefully be viewed as a simulation of a competitive market under conditions where such a market cannot be expected to function well. It results in an efficient allocation where all resources are valued at their opportunity costs and \"consumer surplus\" is maximized; its outcome thus has the desirable properties of competitive market equilibria.",
"The goal of this chapter is to describe efficient auctions for multiple, indivisible objects in terms of the duality theory of linear programming. Because of its well-known incentive properties, we shall focus on Vickrey auctions. These are efficient auctions in which buyers pay the social opportunity cost of their purchases and consequently are rewarded with their (social) marginal product. We use the assignment model to frame our analysis.",
"The assignment game is a model for a two-sided market in which a product that comes in large, indivisible units (e.g., houses, cars, etc.) is exchanged for money, and in which each participant either supplies or demands exactly one unit. The units need not be alike, and the same unit may have different values to different participants. It is shown here that the outcomes in thecore of such a game — i.e., those that cannot be improved upon by any subset of players — are the solutions of a certain linear programming problem dual to the optimal assignment problem, and that these outcomes correspond exactly to the price-lists that competitively balance supply and demand. The geometric structure of the core is then described and interpreted in economic terms, with explicit attention given to the special case (familiar in the classic literature) in which there is no product differentiation — i.e., in which the units are interchangeable. Finally, a critique of the core solution reveals an insensitivity to some of the bargaining possibilities inherent in the situation, and indicates that further analysis would be desirable using other game-theoretic solution concepts.",
"The paper presents a model of an exchange economy with indivisible goods and money. There are a finite number of agents, each one initially endowed with a certain amount of money and at most one indivisible good. Each agent is assumed to have no use for more than one indivisible good. It is proved that the core of the economy is nonempty. If utility functions are increasing in money, and if the initial resources in money are in some sense “sufficient” the core allocations coincide with the competitive equilibrium allocations."
],
"cite_N": [
"@cite_6",
"@cite_19",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2024694589",
"2495217534",
"2002430844",
"2004551127"
]
}
|
General Auction Mechanism for Search Advertising
| 0 |
|
0805.3972
|
1901490813
|
The investigation of a terrorist attack is a time-critical task. The investigators have a limited time window to diagnose the organizational background of the terrorists, to run down and arrest the wire-pullers, and to take action to prevent or eradicate the terrorist attack. An intuitive interface for visualizing the intelligence data set stimulates the investigators’ experience and knowledge, and aids them in decision-making for an immediately effective action. This paper presents a computational method to analyze the intelligence data set on the collective actions of the perpetrators of the attack, and to visualize it in the form of a social network diagram that predicts the positions where the wire-pullers conceal themselves.
|
Social network analysis @cite_17 is the study of social structures made of nodes (individuals, organizations, etc.) that are linked by one or more specific types of relationship (transmission of influence, presence of trust, etc.). Terrorist and criminal organizations have been studied empirically @cite_19 . Factor analysis has been applied to study the email exchanges in Enron, which ended in bankruptcy due to institutionalized accounting fraud @cite_20 . Criminal organizations tend to be strings of inter-linked small groups that lack a central leader, but coordinate their activities along logistic trails and through bonds of friendship. Hypotheses can be built by paying attention to remarkable white spots and hard-to-fill positions in a network @cite_12 . The conspirators in the 9/11 terrorist organization played a relevant role in reducing the distance between the hijackers and in enhancing communication efficiency @cite_21 . The 9/11 terrorists' social network has also been investigated from the viewpoint of the efficiency-security trade-off @cite_3 : a more security-oriented structure arises from the longer time-to-task of the terrorists' objectives, while the conspirators improve communication efficiency, preserving the hijackers' low visibility and exposure.
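A toy sketch of the kind of graph metrics these studies rely on, contrasting communication efficiency (short average paths) with exposure (betweenness centrality); the edge list below is invented for illustration and is not the 9/11 network or any data from the cited works.

```python
# Toy illustration of the efficiency/security trade-off discussed above.
# The graph is invented; it is NOT the 9/11 network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("leader", "a"), ("leader", "b"),        # wire-puller touches only a few members
    ("a", "c"), ("a", "d"), ("b", "e"),
    ("c", "d"), ("d", "e"),                  # the operational cell is denser
])

# Efficiency: shorter average paths mean faster coordination.
print("avg shortest path:", nx.average_shortest_path_length(G))

# Exposure: high betweenness marks brokers who are easier to spot;
# a concealed wire-puller tries to keep this value low.
for node, bc in sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {bc:.2f}")
```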
|
{
"abstract": [
"This paper looks at the difficulty in mapping covert networks. Analyzing networks after an event is fairly easy for prosecution purposes. Mapping covert networks to prevent criminal activity is much more difficult. We examine the network surrounding the tragic events of September 11th, 2001. Through public data we are able to map a portion of the network centered around the 19 dead hijackers. This map gives us some insight into the terrorist organization, yet it is incomplete. Suggestions for further work and research are offered.",
"A consistent trade-off facing participants in any criminal network is that between organizing for efficiency or security—participants collectively pursue an objective while keeping the action leading to that goal concealed. Which side of the trade-off is prioritized depends on the objective that is pursued by the criminal group. The distinction is most salient when comparing terrorist with criminal enterprise networks. Terrorist networks are ideologically driven, while criminal enterprises pursue monetary ends. Time-to-task is shorter in the criminal enterprise and group efficiency is therefore prioritized over group security. Terrorist networks, in contrast, have longer horizons and security is prioritized over the execution of any single attack. Using Krebs’ exploratory research on networks of terrorist cells and electronic surveillance transcripts of a drug importation network, our analyses demonstrate how these opposing trade-offs emerge in criminal group structures.",
"Preface 1. The Origins of the Jihad 2. The Evolution of the Jihad 3. The Mujahedin 4. Joining the Jihad 5. Social Networks and the Jihad Conclusion Appendix: Names of Terrorists Glossary of Foreign-Language Terms Bibliography Index",
"We investigate the structures present in the Enron email dataset using singular value decomposition and semidiscrete decomposition. Using word frequency profiles, we show that messages fall into two distinct groups, whose extrema are characterized by short messages and rare words versus long messages and common words. It is surprising that length of message and word use pattern should be related in this way. We also investigate relationships among individuals based on their patterns of word use in email. We show that word use is correlated to function within the organization, as expected. Lastly, we show that relative changes to individuals' word usage over time can be used to identify key players in major company events.",
"",
"Acknowledgements 1. Introduction Stanley Wasserman, John Scott and Peter J. Carrington 2. Recent developments in network measurement Peter V. Marsden 3. Network sampling and model fitting Ove Frank 4. Extending centrality Martin Everett and Stephen P. Borgatti 5. Positional analyses of sociometric data Patrick Doreian, Vladimir Batagelj and Anuska Ferligoj 6. Network models and methods for studying the diffusion of innovations Thomas W. Valente 7. Using correspondence analysis for joint displays of affiliation networks Katherine Faust 8. An introduction to random graphs, dependence graphs, and p* Stanley Wasserman and Garry Robins 9. Random graph models for social networks: multiple relations or multiple raters Laura M. Koehly and Philippa Pattison 10. Interdependencies and social processes: dependence graphs and generalized dependence structures Garry Robins and Philippa Pattison 11. Models for longitudinal network data Tom A. B. Snijders 12. Graphical techniques for exploring social network data Linton C. Freeman 13. Software for social network analysis Mark Huisman and Marijtje A. J. van Duijn Index."
],
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_19",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"1903894141",
"2161260361",
"1767515428",
"2043690777",
"",
"1554338108"
]
}
|
Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers
| 0 |
|
0805.3972
|
1901490813
|
The investigation of a terrorist attack is a time-critical task. The investigators have a limited time window to diagnose the organizational background of the terrorists, to run down and arrest the wire-pullers, and to take action to prevent or eradicate the terrorist attack. An intuitive interface for visualizing the intelligence data set stimulates the investigators’ experience and knowledge, and aids them in decision-making for an immediately effective action. This paper presents a computational method to analyze the intelligence data set on the collective actions of the perpetrators of the attack, and to visualize it in the form of a social network diagram that predicts the positions where the wire-pullers conceal themselves.
|
On the other hand, node discovery predicts the existence of an unknown node around the known nodes from information on the collective behavior of the network. Related work on node discovery is, however, limited. A heuristic method for node discovery is proposed in @cite_5 , @cite_10 . The method is applied to analyze the covert social network foundation behind terrorism disasters @cite_9 . Learning techniques for latent variables can be employed once the presence of a node is known: @cite_18 studied learning the structure of a linear latent variable graph, and @cite_7 studied learning the structure of a dynamic probabilistic network. While the accuracy of the heuristic method is limited, these principled analytic learning approaches are not practical for handling the real human relationships and communication observed in a social network, where much complexity appears. The complexity includes bi-directional and cyclic influence among many observed and latent nodes. We need an efficient and accurate method to solve the node discovery problem.
|
{
"abstract": [
"We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.",
"Dynamic probabilistic networks are a compact representation of complex stochastic processes. In this paper we examine how to learn the structure of a DPN from data. We extend structure scoring rules for standard probabilistic networks to the dynamic case, and show how to search for structure when some of the variables are hidden. Finally, we examine two applications where such a technology might be useful: predicting and classifying dynamic behaviors, and learning causal orderings in biological processes. We provide empirical results that demonstrate the applicability of our methods in both domains.",
"This paper addresses a method to analyse the covert social network foundation hidden behind the terrorism disaster. It is to solve a node discovery problem, which means to discover a node, which functions relevantly in a social network, but escaped from monitoring on the presence and mutual relationship of nodes. The method aims at integrating the expert investigator's prior understanding, insight on the terrorists' social network nature derived from the complex graph theory and computational data processing. The social network responsible for the 9 11 attack in 2001 is used to execute simulation experiment to evaluate the performance of the method.",
"Experts of chance discovery have recognized a new class of problems where the previous methods fail to visualize a latent structure behind observation. There are invisible events that play an important role in the dynamics of visible events. An invisible leader in a communication network is a typical example. Such an event is named a dark event. A novel technique has been proposed to understand a dark event and to extend the process of chance discovery. This paper presents a new method named \"human-computer interactive annealing\" for revealing latent structures along with the algorithm for discovering dark events. Demonstration using test data generated from a scale-free network shows that the precision regarding the algorithm ranges from 80 to 90 . An experiment on discovering an invisible leader under an online collective decision-making circumstance is successful",
"This paper introduces the concept of chance discovery, i.e. discovery of an event significant for decision making. Then, this paper also presents a current research project on data crystallization, which is an extension of chance discovery. The need for data crystallization is that only the observable part of the real world can be stored in data. For such scattered, i.e. incomplete and ill-structured data, data crystallizing aims at presenting the hidden structure among events including unobservable ones. This is realized with a tool which inserts dummy items, corresponding to unobservable but significant events, to the given data on past events. The existence of these unobservable events and their relations with other events are visualized with KeyGraph, showing events by nodes and their relations by links, on the data with inserted dummy items. This visualization is iterated with gradually increasing the number of links in the graph. This process is similar to the crystallization of snow with gradual decrease in the air temperature. For tuning the granularity level of structure to be visualized, this tool is integrated with human's process of chance discovery. This basic method is expected to be applicable for various real world domains where chance-discovery methods have been applied."
],
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_9",
"@cite_5",
"@cite_10"
],
"mid": [
"2137099275",
"1896341954",
"1970857195",
"2167819790",
"2083215617"
]
}
|
Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers
| 0 |
|
0804.3255
|
2117617716
|
We focus on a multidimensional field with uncorrelated spectrum and study the quality of the reconstructed signal when the field samples are irregularly spaced and affected by independent and identically distributed noise. More specifically, we apply linear reconstruction techniques and take the mean-square error (MSE) of the field estimate as a metric to evaluate the signal reconstruction quality. We find that the MSE analysis could be carried out by using the closed-form expression of the eigenvalue distribution of the matrix representing the sampling system. Unfortunately, such a distribution is still unknown. Thus, we first derive a closed-form expression of the distribution moments, and we find that the eigenvalue distribution tends to the Marcenko-Pastur distribution as the field dimension goes to infinity. Finally, by using our approach, we derive a tight approximation to the MSE of the reconstructed field.
|
Relevant to our work is the literature on spectral analysis, where, however, several studies deal with regularly sampled signals (e.g., @cite_8 and references therein). An excellent guide to irregular sampling is @cite_9 , which presents a large number of techniques, algorithms, and applications. Reconstruction techniques for irregularly or randomly sampled signals can be found in @cite_11 @cite_32 @cite_1 , just to name a few. In particular, Feichtinger and Gröchenig in @cite_1 provide an error analysis of an iterative reconstruction algorithm, taking into account round-off errors, jitter, truncation errors, and aliasing. From the theoretical point of view, irregular sampling has been studied in @cite_11 @cite_32 @cite_37 @cite_0 @cite_1 @cite_27 @cite_12 and references therein.
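As a small numerical illustration of the reconstruction problem these works address, the sketch below recovers a trigonometric polynomial (a bandlimited field) from irregular noisy samples by regularized least squares. The bandwidth, sample count, noise level, and regularization constant are illustrative assumptions; this is not the algorithm of any specific cited work.

```python
# Sketch: reconstruct a bandlimited (trigonometric-polynomial) signal from
# irregular noisy samples via regularized least squares. Parameters are
# illustrative only.
import numpy as np

rng = np.random.default_rng(0)

K = 8                                   # field has 2K+1 Fourier coefficients
N = 60                                  # number of irregular samples
sigma = 0.05                            # noise standard deviation

ks = np.arange(-K, K + 1)
a_true = (rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)) / np.sqrt(2)

x = np.sort(rng.uniform(0.0, 1.0, N))              # irregular sample locations in [0,1)
G = np.exp(2j * np.pi * np.outer(x, ks))           # N x (2K+1) sampling matrix
noise = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = G @ a_true + noise

# Regularized least-squares estimate of the Fourier coefficients.
eps = sigma ** 2
a_hat = np.linalg.solve(G.conj().T @ G + eps * np.eye(2 * K + 1), G.conj().T @ y)

mse = np.mean(np.abs(a_hat - a_true) ** 2)
print(f"coefficient MSE: {mse:.4g}")
```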
|
{
"abstract": [
"Recently, it has been observed that a sparse trigonometric polynomial, i.e., having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using basis pursuit (BP) or orthogonal matching pursuit (OMP). In this paper, it is shown that recovery by a BP variant is stable under perturbation of the samples values by noise. A similar partial result for OMP is provided. For BP, in addition, the stability result is extended to (nonsparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments.",
"1. Basic Concepts. 2. Nonparametric Methods. 3. Parametric Methods for Rational Spectra. 4. Parametric Methods for Line Spectra. 5. Filter Bank Methods. 6. Spatial Methods. Appendix A: Linear Algebra and Matrix Analysis Tools. Appendix B: Cramer-Rao Bound Tools. Bibliography. References Grouped by Subject. Subject Index.",
"1. Introduction F. Marvasti. 2. An Introduction to Sampling Analysis P.L. Butzer, et al 3. Lagrange Interpolation and Sampling Theorems A.I. Zayed, P.L. Butzer. 4. Random Topics in Nonuniform Sampling F. Marvasti. 5. Iterative and Noniterative Recovery of Missing Samples for 1- D Band-Limited Signals P.J.S.G. Ferreira. 6. Numerical and Theoretical Aspects of Nonuniform Sampling of Band-Limited Images K. Grochenig, T. Strohmer. 7. The Nonuniform Discrete Fourier Transform S. Bagchi, S.K. Mitra. 8. Reconstruction of Stationary Processes Sampled at Random Times B. Lacaze. 9. Zero Crossings of Random Processes with Application to Estimation and Detection J. Barnett. 10. Magnetic Resonance Image Reconstruction from Nonuniformly Sampled k-Space Data F.T.A.W. Wajer, et al 11. Irregular and Sparse Sampling in Exploration Seismology A.J.W. Duijndam, et al 12. Randomized Digital Optimal Control W.L. de Koning, L.G. van Willigenburg. 13. Prediction of Band-Limited Signals from Past Samples and Applications to Speech Coding D.H. Muler, Y. Wu. 14. Frames, Irregular Sampling, and a Wavelet Auditory Model J.J. Benedetto, S. Scott. 15. Application of the Nonuniform Sampling to Motion Compensated Prediction A. Sharif, et al 16. Applications of Nonuniform Sampling to Nonlinear Modulation, A D and D A Techniques F. Marvasti, M. Sandler. 17. Applications to Error Correction Codes F. Marvasti. 18. Application of Nonuniform Sampling to Error Concealment M. Hasan, F. Marvasti. 19. Sparse Sampling in Array Processing S. Holm, et al 20. Fractional Delay Filters: Design and Applications V. Valimaki, T.I. Laakso.",
"In [10, 11, 12] we introduced a new family of algorithms for the reconstruction of a band—limited function from its irregularly sampled values. In this paper we carry out an error analysis of these algorithms and discuss their numerical stability. As special cases we also obtain more precise error estimates in the case of the regular sampling theorem.",
"Abstract We study the problem of reconstructing a multivariate trigonometric polynomial having only few non-zero coefficients from few random samples. Inspired by recent work of Candes, Romberg and Tao we propose to recover the polynomial by Basis Pursuit, i.e., by l 1 -minimization. In contrast to their work, where the sampling points are restricted to a grid, we model the random sampling points by a continuous uniform distribution on the cube, i.e., we allow them to have arbitrary position. Numerical experiments show that with high probability the trigonometric polynomial can be recovered exactly provided the number N of samples is high enough compared to the “sparsity”—the number of non-vanishing coefficients. However, N can be chosen small compared to the assumed maximal degree of the trigonometric polynomial. We present two theorems that explain this observation. One of them provides the analogue of the result of Candes, Romberg and Tao. The other one is a result toward an average case analysis and, unexpectedly connects to an interesting combinatorial problem concerning set partitions, which seemingly has not yet been considered before. Although our proofs follow ideas of they are simpler.",
"",
"This article discusses modern techniques for nonuniform sampling and reconstruction of functions in shift-invariant spaces. It is a survey as well as a research paper and provides a unified framework for uniform and nonuniform sampling and reconstruction in shift-invariant subspaces by bringing together wavelet theory, frame theory, reproducing kernel Hilbert spaces, approximation theory, amalgam spaces, and sampling. Inspired by applications taken from communication, astronomy, and medicine, the following aspects will be emphasized: (a) The sampling problem is well defined within the setting of shift-invariant spaces. (b) The general theory works in arbitrary dimension and for a broad class of generators. (c) The reconstruction of a function from any sufficiently dense nonuniform sampling set is obtained by efficient iterative algorithms. These algorithms converge geometrically and are robust in the presence of noise. (d) To model the natural decay conditions of real signals and images, the sampling theory is developed in weighted L p-spaces.",
"In many Applications one seeks to recover an entire function of exponential type from its non-uniformly spaced samples. Whereas the mathematical theory usually addresses the question of when such a function in L 2 (R) can be recovered, numerical methods operate with a finite-dimensional model. The numerical reconstruction or approximation of the original function amounts to the solution of a large linear system. We show that the solutions of a particularly efficient discrete model in which the data are fit by trigonometric polynomials converge to the solution of the original infinite-dimensional reconstruction problem. This legitimatizes the numerical computations and explains why the algorithms employed produce reasonable results. The main mathematical result is a new type of approximation theorem for entire functions of exponential type from a finite number of values. From another point of view our approach provides a new method for proving sampling theorems.",
"Summary. We present a new “second generation” reconstruction algorithm for irregular sampling, i.e. for the problem of recovering a band-limited function from its non-uniformly sampled values. The efficient new method is a combination of the adaptive weights method which was developed by the two first named authors and the method of conjugate gradients for the solution of positive definite linear systems. The choice of ”adaptive weights” can be seen as a simple but very efficient method of preconditioning. Further substantial acceleration is achieved by utilizing the Toeplitztype structure of the system matrix. This new algorithm can handle problems of much larger dimension and condition number than have been accessible so far. Furthermore, if some gaps between samples are large, then the algorithm can still be used as a very efficient extrapolation method across the gaps."
],
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_12",
"@cite_11"
],
"mid": [
"2128739748",
"1514558801",
"607933207",
"1984783141",
"2067161429",
"403935824",
"2004421506",
"2015784160",
"2154625249"
]
}
|
Reconstruction of Multidimensional Signals from Irregular Noisy Samples
| 0 |
|
0804.3255
|
2117617716
|
We focus on a multidimensional field with uncorrelated spectrum and study the quality of the reconstructed signal when the field samples are irregularly spaced and affected by independent and identically distributed noise. More specifically, we apply linear reconstruction techniques and take the mean-square error (MSE) of the field estimate as a metric to evaluate the signal reconstruction quality. We find that the MSE analysis could be carried out by using the closed-form expression of the eigenvalue distribution of the matrix representing the sampling system. Unfortunately, such a distribution is still unknown. Thus, we first derive a closed-form expression of the distribution moments, and we find that the eigenvalue distribution tends to the Marcenko-Pastur distribution as the field dimension goes to infinity. Finally, by using our approach, we derive a tight approximation to the MSE of the reconstructed field.
|
In the context of sensor networks, efficient techniques for spatial sampling are proposed in @cite_28 @cite_30 . In particular, @cite_30 describes an adaptive sampling scheme that allows the central data collector to vary the number of active sensors, i.e., samples, according to the desired resolution level. Data acquisition is also studied in @cite_36 , where the authors consider a unidimensional field, uniformly sampled at the Nyquist frequency by low-precision sensors, and show that the number of samples can be traded off against the precision of the sensors. The problem of reconstructing a bandlimited signal from an irregular set of samples at unknown locations is addressed in @cite_5 , where different solution methods are proposed and the conditions under which there exist multiple solutions or a unique solution are discussed. Differently from @cite_5 , we assume that the sink can either acquire or estimate the sensor locations and that sensors are randomly deployed.
|
{
"abstract": [
"In this work, we present a method for the selection of a subset of nodes in a wireless sensor network whose application is to reconstruct the image of a (spatially) bandlimited physical value (e.g., temperature). The selection method creates a sampling pattern based on blue noise masking and guarantees a near minimal number of activated sensors for a given signal-to-noise ratio. The selection method is further enhanced to guarantee that the sensor nodes with the least residual energy are the primary candidates for deselection, while enabling a tradeoff between sensor selection optimality and balanced load distribution. Simulation results show the effectiveness of these selection methods in improving signal-to-noise ratio and reducing the necessary number of active sensors compared with simpler selection approaches.",
"The purpose of this paper is to develop methods that can reconstruct a bandlimited discrete-time signal from an irregular set of samples at unknown locations. We define a solution to the problem using first a geometric and then an algebraic point of view. We find the locations of the irregular set of samples by treating the problem as a combinatorial optimization problem. We employ an exhaustive method and two descent methods: the random search and cyclic coordinate methods. The numerical simulations were made on three types of irregular sets of locations: random sets; sets with jitter around a uniform set; and periodic nonuniform sets. Furthermore, for the periodic nonuniform set of locations, we develop a fast scheme that reduces the computational complexity of the problem by exploiting the periodic nonuniform structure of the sample locations in the DFT.",
"We address the problem of deterministic oversampling of bandlimited sensor fields in a distributed communication-constrained processing environment, where it is desired for a central intelligent unit to reconstruct the sensor field to maximum pointwise accuracy.We show, using a dither-based sampling scheme, that is is possible to accomplish this using minimal inter-sensor communication with the aid of a multitude of low-precision sensors. Furthermore, we show the feasibility of having a flexible tradeoff between the average oversampling rate and the Analog to Digital (A D) quantization precision per sensor sample with respect to achieving exponential accuracy in the number of bits per Nyquist-period, thereby exposing a key underpinning \"conservation of bits\" principle. That is, we can distribute the bit budget per Nyquist-period along the amplitude-axis (precision of A D converter) and space (or time or space-time) using oversampling in an almost arbitrary discrete-valued manner, while retaining the same reconstruction error decay profile. Interestingly this oversampling is possible in a highly localized communication setting, with only nearest-neighbor communication, making it very attractive for dense sensor networks operating under stringent inter-node communication constraints. Finally we show how our scheme incorporates security as a by-product due to the presence of an underlying dither signal which can be used as a natural encryption device for security. The choice of the dither function enhances the security of the network.",
"Wireless sensor networks provide an attractive approach to spatially monitoring environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental trade-off: using higher densities of sensors provides more measurements, higher resolution and better accuracy, but requires more communications and processing. This paper proposes a new approach, called \"back-casting,\" which can significantly reduce communications and energy consumption while maintaining high accuracy. Back-casting operates by first having a small subset of the wireless sensors communicate their information to a fusion center. This provides an initial estimate of the environment being sensed, and guides the allocation of additional network resources. Specifically, the fusion center backcasts information based on the initial estimate to the network at large, selectively activating additional sensor nodes in order to achieve a target error level. The key idea is that the initial estimate can detect correlations in the environment, indicating that many sensors may not need to be activated by the fusion center. Thus, adaptive sampling can save energy compared to dense, non-adaptive sampling. This method is theoretically analyzed in the context of field estimation and it is shown that the energy savings can be quite significant compared to conventional approaches. For example, when sensing a piecewise smooth field with an array of 100 spl times 100 sensors, adaptive sampling can reduce the energy consumption by roughly a factor of 10 while providing the same accuracy achievable if all sensors were activated."
],
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_36",
"@cite_30"
],
"mid": [
"2162277092",
"2165221495",
"1597674521",
"2149119325"
]
}
|
Reconstruction of Multidimensional Signals from Irregular Noisy Samples
| 0 |
|
0804.3255
|
2117617716
|
We focus on a multidimensional field with uncorrelated spectrum and study the quality of the reconstructed signal when the field samples are irregularly spaced and affected by independent and identically distributed noise. More specifically, we apply linear reconstruction techniques and take the mean-square error (MSE) of the field estimate as a metric to evaluate the signal reconstruction quality. We find that the MSE analysis could be carried out by using the closed-form expression of the eigenvalue distribution of the matrix representing the sampling system. Unfortunately, such a distribution is still unknown. Thus, we first derive a closed-form expression of the distribution moments, and we find that the eigenvalue distribution tends to the Marcenko-Pastur distribution as the field dimension goes to infinity. Finally, by using our approach, we derive a tight approximation to the MSE of the reconstructed field.
|
The reconstruction of the field at the sink node in the presence of spatial and temporal correlation among sensor measurements is studied, for instance, in @cite_20 @cite_23 @cite_17 @cite_7 @cite_4 . Other interesting studies can be found in @cite_34 @cite_19 , which address the perturbation of regular sampling in shift-invariant spaces @cite_34 and the reconstruction of irregularly sampled images in the presence of measurement noise @cite_19 .
|
{
"abstract": [
"We bound the number of sensors required to achieve a desired level of sensing accuracy in a discrete sensor network application (e.g. distributed detection). We model the state of nature being sensed as a discrete vector, and the sensor network as an encoder. Our model assumes that each sensor observes only a subset of the state of nature, that sensor observations are localized and dependent, and that sensor network output across different states of nature is neither identical nor independently distributed. Using a random coding argument we prove a lower bound on the 'sensing capacity' of a sensor network, which characterizes the ability of a sensor network to distinguish among all states of nature. We compute this lower bound for sensors of varying range, noise models, and sensing functions. We compare this lower bound to the empirical performance of a belief propagation based sensor network decoder for a simple seismic sensor network scenario. The key contribution of this paper is to introduce the idea of a sharp cut-off function in the number of required sensors, to the sensor network community.",
"Wireless Sensor Networks (WSN) are characterized by the dense deployment of sensor nodes that continuously observe physical phenomenon. Due to high density in the network topology, sensor observations are highly correlated in the space domain. Furthermore, the nature of the physical phenomenon constitutes the temporal correlation between each consecutive observation of a sensor node. These spatial and temporal correlations along with the collaborative nature of the WSN bring significant potential advantages for the development of efficient communication protocols well-suited for the WSN paradigm. In this paper, several key elements are investigated to capture and exploit the correlation in the WSN for the realization of advanced efficient communication protocols. A theoretical framework is developed to model the spatial and temporal correlations in WSN. The objective of this framework is to enable the development of efficient communication protocols which exploit these advantageous intrinsic features of the WSN paradigm. Based on this framework, possible approaches are discussed to exploit spatial and temporal correlation for efficient medium access and reliable event transport in WSN, respectively.",
"While high resolution, regularly gridded observations are generally preferred in remote sensing, actual observations are often not evenly sampled and have lower-than-desired resolution. Hence, there is an interest in resolution enhancement and image reconstruction. This paper discusses a general theory and techniques for image reconstruction and creating enhanced resolution images from irregularly sampled data. Using irregular sampling theory, we consider how the frequency content in aperture function-attenuated sidelobes can be recovered from oversampled data using reconstruction techniques, thus taking advantage of the high frequency content of measurements made with nonideal aperture filters. We show that with minor modification, the algebraic reconstruction technique (ART) is functionally equivalent to Grochenig's (1992) irregular sampling reconstruction algorithm. Using simple Monte Carlo simulations, we compare and contrast the performance of additive ART, multiplicative ART, and the scatterometer image reconstruction (SIR) (a derivative of multiplicative ART) algorithms with and without noise. The reconstruction theory and techniques have applications with a variety of sensors and can enable enhanced resolution image production from many nonimaging sensors. The technique is illustrated with ERS-2 and SeaWinds scatterometer data.",
"The problems of sensor configuration and activation for the detection of correlated random fields using large sensor arrays are considered. Using results that characterize the large-array performance of sensor networks in this application, the detection capabilities of different sensor configurations are analyzed and compared. The dependence of the optimal choice of configuration on parameters such as sensor signal-to-noise ratio (SNR), field correlation, etc., is examined, yielding insights into the most effective choices for sensor selection and activation in various operating regimes.",
"Perturbation theorems for regular sampling in shift-invariant spaces are derived using a generalized perturbation theorem for frames in a Hilbert space, which generalizes the irregular sampling theorem established by Chen Using this generalized irregular sampling theorem, estimates for the maximum perturbation are obtained. Some typical examples illustrate the result",
"We consider sensor networks that measure spatio-temporal correlated processes. An important task in such settings is the reconstruction at a certain node, called the sink, of the data at all points of the field. We consider scenarios where data is time critical, so delay results in distortion, or suboptimal estimation and control. For the reconstruction, the only data available to the sink are the values measured at the nodes of the sensor network, and knowledge of the correlation structure: this results in spatial distortion of reconstruction. Also, for the sake of power efficiency, sensor nodes need to transmit their data by relaying through the other network nodes: this results in delay, and thus temporal distortion of reconstruction if time critical data is concerned. We study data gathering for the case of Gaussian processes in one- and two-dimensional grid scenarios, where we are able to write explicit expressions for the spatial and time distortion, and combine them into a single total distortion measure. We prove that, for various standard correlation structures, there is an optimal finite density of the sensor network for which the total distortion is minimized. Thus, when power efficiency and delay are both considered in data gathering, it is useless from the point of view of accuracy of the reconstruction to increase the number of sensors above a certain threshold that depends on the correlation structure characteristics.",
"Wireless sensor networks have attracted attention from a diverse set of researchers, due to the unique combination of distributed, resource and data processing constraints. However, until now, the lack of real sensor network deployments have resulted in ad-hoc assumptions on a wide range of issues including topology characteristics and data distribution. As deployments of sensor networks become more widespread [1, 2], many of these assumptions need to be revisited.This paper deals with the fundamental issue of spatio-temporal irregularity in sensor networks We make the case for the existence of such irregular spatio-temporal sampling, and show that it impacts many performance issues in sensor networks. For instance, data aggregation schemes provide inaccurate results, compression efficiency is dramatically reduced, data storage skews storage load among nodes and incurs significantly greater routing overhead. To mitigate the impact of irregularity, we outline a spectrum of solutions. For data aggregation and compression, we propose the use of spatial interpolation of data (first suggested by in [3] and temporal signal segmentation followed by alignment. To reduce the cost of data-centric storage and routing, we propose the use of virtualization, and boundary detection."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_19",
"@cite_23",
"@cite_34",
"@cite_20",
"@cite_17"
],
"mid": [
"2159352259",
"2142355131",
"2159479457",
"2146217114",
"2133136008",
"2109742777",
"1978999584"
]
}
|
Reconstruction of Multidimensional Signals from Irregular Noisy Samples
| 0 |
|
0804.3255
|
2117617716
|
We focus on a multidimensional field with uncorrelated spectrum and study the quality of the reconstructed signal when the field samples are irregularly spaced and affected by independent and identically distributed noise. More specifically, we apply linear reconstruction techniques and take the mean-square error (MSE) of the field estimate as a metric to evaluate the signal reconstruction quality. We find that the MSE analysis could be carried out by using the closed-form expression of the eigenvalue distribution of the matrix representing the sampling system. Unfortunately, such distribution is still unknown. Thus, we first derive a closed-form expression of the distribution moments, and we find that the eigenvalue distribution tends to the Marcenko-Pastur distribution as the field dimension goes to infinity. Finally, by using our approach, we derive a tight approximation to the MSE of the reconstructed field.
|
We point out that our main contribution with respect to previous work on signal sampling and reconstruction is the probabilistic approach we adopt to analyze the quality level of a signal reconstructed from a set of irregular, noisy samples. Our analysis, however, applies to sampling systems where the field reconstruction is performed in a centralized manner. Finally, we highlight that our previous work @cite_3 assumes that sensors are uniformly distributed over the spatial observation interval and may be displaced around a known average location. The effects of noisy measurements and jittered positions are analyzed when linear reconstruction techniques are employed. However, only the unidimensional case is studied and semi-analytical derivations of the MSE of the reconstructed field are obtained. In @cite_6 , instead, sensors are assumed to be fixed, and the objective is to evaluate the performance of a linear reconstruction technique in the presence of quasi-equally spaced sensor layouts.
|
{
"abstract": [
"We consider wireless sensor networks whose nodes are randomly deployed and, thus, provide an irregular sampling of the sensed field. The field is assumed to be bandlimited; a sink node collects the data gathered by the sensors and reconstructs the field by using a technique based on linear filtering. By taking the mean square error (MSE) as performance metric, we evaluate the effect of quasi-equally spaced sensor layouts on the quality of the reconstructed signal. The MSE is derived through asymptotic analysis for different sensor spatial distributions, and for two of them we are able to obtain an approximate closed form expression. The case of uniformly distributed sensors is also considered for the sake of comparison. The validity of our asymptotic analysis is shown by comparison against numerical results and it is proven to hold even for a small number of nodes. Finally, with the help of a simple example, we show the key role that our results play in the deployment of sensor networks.",
"We consider the problem of obtaining a high quality estimates of band-limited sensor fields when sensor measurements are noisy and the nodes are irregularly deployed and subject to random motion. We consider the mean square error (MSE) of the estimate and we analytically derive the performance of several reconstruction estimation techniques based on linear filtering. For each technique, we obtain the mean value of the MSE, as well as its asymptotic expression in the case where the field bandwidth and the number of sensors grow to infinity, while their ratio is kept constant. Our results provide useful guidelines for the design of sensor networks when many system parameters have to be traded off."
],
"cite_N": [
"@cite_6",
"@cite_3"
],
"mid": [
"2053334136",
"2109914635"
]
}
|
Reconstruction of Multidimensional Signals from Irregular Noisy Samples
| 0 |
|
0804.3255
|
2117617716
|
We focus on a multidimensional field with uncorrelated spectrum and study the quality of the reconstructed signal when the field samples are irregularly spaced and affected by independent and identically distributed noise. More specifically, we apply linear reconstruction techniques and take the mean-square error (MSE) of the field estimate as a metric to evaluate the signal reconstruction quality. We find that the MSE analysis could be carried out by using the closed-form expression of the eigenvalue distribution of the matrix representing the sampling system. Unfortunately, such distribution is still unknown. Thus, we first derive a closed-form expression of the distribution moments, and we find that the eigenvalue distribution tends to the Marcenko-Pastur distribution as the field dimension goes to infinity. Finally, by using our approach, we derive a tight approximation to the MSE of the reconstructed field.
|
The goal of this work is to provide an analytical study of the reconstruction quality of a multidimensional physical field, with uncorrelated spectrum. The field samples are (i) irregularly spaced, since they are gathered by a randomly deployed sensor network and (ii) affected by i.i.d. noise. The sink node receives the field samples and runs the reconstruction algorithm in a centralized manner. Our major contributions with respect to previous work are as follows. [1.] Given a @math -dimensional problem formulation, we obtain analytical expressions for the moments of the eigenvalue distribution of the reconstruction matrix. Using the expressions of the moments, we show that the eigenvalue distribution tends to the Marcenko-Pastur distribution @cite_26 as the field dimension @math . [2.] We apply our results to the study of the quality of a reconstructed field and derive a tight approximation to the MSE of the estimated field.
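For reference, a standard statement of the Marcenko-Pastur law mentioned here is given below; the unit-variance normalization and the aspect-ratio parameter β are our assumptions for writing it down, not values taken from this work.

```latex
% Marcenko-Pastur density, unit variance, aspect ratio beta in (0,1];
% for beta > 1 an extra point mass of weight 1 - 1/beta appears at x = 0.
f_\beta(x) = \frac{\sqrt{(\lambda_+ - x)(x - \lambda_-)}}{2\pi\,\beta\,x},
\qquad \lambda_\pm = \left(1 \pm \sqrt{\beta}\right)^2,
\qquad \lambda_- \le x \le \lambda_+ .
```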
|
{
"abstract": [
"In this paper we study the distribution of eigenvalues for two sets of random Hermitian matrices and one set of random unitary matrices. The statement of the problem as well as its method of investigation go back originally to the work of Dyson [i] and I. M. Lifsic [2], [3] on the energy spectra of disordered systems, although in their probability character our sets are more similar to sets studied by Wigner [4]. Since the approaches to the sets we consider are the same, we present in detail only the most typical case. The corresponding results for the other two cases are presented without proof in the last section of the paper. §1. Statement of the problem and survey of results We shall consider as acting in iV-dimensiona l unitary space v, a selfadjoint operator BN (re) of the form"
],
"cite_N": [
"@cite_26"
],
"mid": [
"2060581589"
]
}
|
Reconstruction of Multidimensional Signals from Irregular Noisy Samples
| 0 |
|
0804.0599
|
2952703399
|
Symmetries are intrinsic to many combinatorial problems including Boolean Satisfiability (SAT) and Constraint Programming (CP). In SAT, the identification of symmetry breaking predicates (SBPs) is a well-known, often effective, technique for solving hard problems. The identification of SBPs in SAT has been the subject of significant improvements in recent years, resulting in more compact SBPs and more effective algorithms. The identification of SBPs has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry breaking can also be an effective technique for PB constraints. This paper extends further the application of SBPs, and shows that SBPs can be identified and used in Maximum Satisfiability (MaxSAT), as well as in its most well-known variants, including partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and variants are shown to be effective for a representative number of problem domains, allowing solving problem instances that current state of the art MaxSAT solvers could not otherwise solve.
|
Symmetries are a well-known research topic, that serve to tackle complexity in many combinatorial problems. The first ideas on symmetry breaking were developed in the 90s @cite_11 @cite_13 , by relating symmetries with the graph automorphism problem, and by proposing the first approach for generating symmetry breaking predicates. This work was later extended and optimized for propositional satisfiability @cite_10 .
|
{
"abstract": [
"Many important tasks in circuit design and verification can be performed in practice via reductions to Boolean Satisfiability (SAT), making SAT a fundamental EDA problem. However such reductions often leave out application-specific structure, thus handicapping EDA tools in their competition with creative engineers. Successful attempts to represent and utilize additional structure on Boolean variables include recent work on 0--1 Integer Linear Programming (ILP) and on symmetries in SAT. Those extensions gracefully accommodate well-known advances in SAT-solving, but their combined use has not been attempted previously. Our work shows (i) how one can detect and use symmetries in instances of 0--1 ILP, and (ii) what benefits this may bring.",
"",
"Constraint satisfaction problems (CSP) are a class of combinatorial problems that can be solved efficiently by combining consistency methods such as arc-consistency together with a backtracking search. However these techniques are not adapted to symmetrical CSP. In fact one can exhibit rather small CSP that cannot be solved with consistency techniques. The relevance of this symmetry problem to real world applications is very strong since it can prevent a CSP solver to solve even small instances of real world problems. This paper describes a general solution for this kind of problems. Both a theoretical study and experimental results using the constraint-based library PECOS are provided."
],
"cite_N": [
"@cite_10",
"@cite_13",
"@cite_11"
],
"mid": [
"1966447968",
"1567586152",
"1524973174"
]
}
|
Symmetry Breaking for Maximum Satisfiability
|
Symmetry breaking is a widely used technique for solving combinatorial problems. Symmetries have been used with great success in Satisfiability (SAT) [6,1], and are regarded as an essential technique for solving specific classes of problem instances. Symmetries have also been widely used for solving constraint satisfaction problems (CSPs) [8].
More recent work has shown how to apply symmetry breaking in pseudo-Boolean (PB) constraints [2] and also in soft constraints [18]. It should be noted that symmetry breaking is viewed as an effective problem solving technique, either for SAT, PB or CP, that is often used as an alternative technique, to be applied when default algorithms are unable to solve a given problem instance.
In recent years there has been a growing interest in algorithms for MaxSAT and variants [12,13,20,10,11,14], in part because of the wide range of potential applications. MaxSAT and variants represent a more general framework than either SAT or PB, and so can naturally be used in many practical applications. The interest in MaxSAT and variants motivated the development of a new generation of MaxSAT algorithms, remarkably more efficient than early MaxSAT algorithms [19,4]. Despite the observed improvements, there are many problems still too complex for MaxSAT algorithms to solve [3]. Natural lines of research for improving MaxSAT algorithms include studying techniques known to be effective for either SAT, PB or CP. One concrete example is symmetry breaking. Despite its success in SAT, PB and CP, the usefulness of symmetry breaking for MaxSAT and variants has not been thoroughly studied.
This paper addresses the problem of using symmetry breaking in MaxSAT and in its most well-known variants, partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT. The work extends recent work on computing symmetries for SAT [1] and PB constraints [2] by computing automorphisms on colored graphs obtained from CNF or PB formulas, and by showing how symmetry breaking predicates [6,1] can be exploited. The experimental results show that symmetry breaking is an effective technique for MaxSAT and variants, making it possible to solve problem instances that state of the art MaxSAT solvers could not otherwise solve.
The paper is organized as follows. The next section introduces the notation used throughout the paper, provides a brief overview of MaxSAT and variants, and also summarizes the work on symmetry breaking for SAT and PB constraints. Afterwards, the paper describes how to apply symmetry breaking in MaxSAT and variants. Experimental results, obtained on representative problem instances from the MaxSAT evaluation [3] and also from practical applications [1], demonstrate that symmetry breaking allows solving problem instances that could not be solved by any of the available state of the art MaxSAT solvers. The paper concludes by summarizing related work, by overviewing the main contributions, and by outlining directions for future work.
Preliminaries
This section introduces the notation used through the paper, as well as the MaxSAT problem and its variants. An overview of symmetry identification and symmetry breaking is also presented.
Maximum Satisfiability
The paper assumes the usual definitions for SAT. A propositional formula is represented in Conjunctive Normal Form (CNF). A CNF formula ϕ consists of a conjunction of clauses, where each clause ω is a disjunction of literals, and a literal l is either a propositional variable x or its complement ¬x. Variables can be assigned a propositional value, either 0 or 1. A literal l j = x j assumes value 1 if x j = 1 and value 0 if x j = 0. Conversely, a literal l j = ¬x j assumes value 1 if x j = 0 and value 0 when x j = 1. For each assignment of values to the variables, the value of formula ϕ is computed with the rules of propositional logic. A clause is said to be satisfied if at least one of its literals assumes value 1. If all literals of a clause assume value 0, then the clause is unsatisfied. The propositional satisfiability (SAT) problem consists in deciding whether there exists an assignment to the variables such that ϕ is satisfied.
Given a propositional formula ϕ, the MaxSAT problem is defined as finding an assignment to variables in ϕ such that the number of satisfied clauses is maximized. (MaxSAT can also be defined as finding an assignment that minimizes the number of unsatisfied clauses.) Well-known variants of MaxSAT include partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT.
For partial MaxSAT, a propositional formula ϕ is described by the conjunction of two CNF formulas ϕ s and ϕ h , where ϕ s represents the soft clauses and ϕ h represents the hard clauses. The partial MaxSAT problem over a propositional formula ϕ = ϕ h ∧ ϕ s consists in finding an assignment to the problem variables such that all hard clauses (ϕ h ) are satisfied and the number of satisfied soft clauses (ϕ s ) is maximized.
For weighted MaxSAT, each clause in the CNF formula is associated to a nonnegative weight. A weighted clause is a pair (ω, c) where ω is a classical clause and c is a natural number corresponding to the cost of unsatisfying ω. Given a weighted CNF formula ϕ, the weighted MaxSAT problem consists in finding an assignment to problem variables such that the total weight of the unsatisfied clauses is minimized, which implies that the total weight of the satisfied clauses is maximized. For the weighted partial MaxSAT problem, the formula is the conjunction of a weighted CNF formula (soft clauses) and a classical CNF formula (hard clauses). The weighted partial MaxSAT problem consists in finding an assignment to the variables such that all hard clauses are satisfied and the total weight of satisfied soft clauses is maximized. Observe that, for both partial MaxSAT and weighted partial MaxSAT, hard clauses can be represented as weighted clauses. For these clauses one can consider that the weight is greater than the sum of the weights of the soft clauses.
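To make these definitions concrete, the following brute-force sketch evaluates a small weighted partial MaxSAT instance. It is our own illustration (the DIMACS-style integer encoding and the helper names are assumptions, not the paper's notation) and is intended only to show the objective being optimized; real solvers replace this exhaustive enumeration with branch and bound and dedicated bounding techniques, as discussed below.

```python
from itertools import product

def satisfied(clause, assignment):
    # assignment maps a variable index to 0 or 1
    return any(assignment[abs(l)] == (1 if l > 0 else 0) for l in clause)

def best_assignment(variables, hard, soft):
    # soft is a list of (clause, weight) pairs; exhaustive search, illustration only
    best = None
    for values in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, values))
        if not all(satisfied(c, a) for c in hard):
            continue                      # every hard clause must be satisfied
        score = sum(w for c, w in soft if satisfied(c, a))
        if best is None or score > best[0]:
            best = (score, a)
    return best                           # None if the hard part is unsatisfiable

# a tiny weighted partial instance in the spirit of Example 4 (encoding is ours):
# integers are DIMACS-style literals, k for x_k and -k for its complement
hard = [[3, 2], [-3, 2]]
soft = [([1, 2], 1), ([-1, 2], 1), ([2], 5)]
print(best_assignment([1, 2, 3], hard, soft))   # (7, {1: 0, 2: 1, 3: 0})
```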
MaxSAT and variants find a wide range of practical applications, that include scheduling, routing, bioinformatics, and design automation. Moreover, MaxSAT can be used for solving pseudo-Boolean optimization [11]. The practical applications of MaxSAT motivated recent interest in developing more efficient algorithms. The most efficient algorithms for MaxSAT and variants are based on branch and bound search, using dedicated bounding and inference techniques [12,13,10,11]. Lower bounding techniques include for example the use of unit propagation for identifying necessarily unsatisfied clauses, whereas inference techniques can be viewed as restricted forms of resolution, with the objective of simplifying the problem instance to solve.
Symmetry Breaking
Given a problem instance, a symmetry is an operation that preserves the constraints, and therefore also preserves the solutions [5]. For a set of symmetric states, it is possible to obtain the whole set of states from any of the states. Hence, symmetry breaking predicates may eliminate all but one of the equivalent states. Symmetry breaking is expected to speed up the search as the search space gets reduced. For specific problems where symmetries may be easily found, this reduction may be significant. Nonetheless, the elimination of symmetries necessarily introduces overhead, which is expected to be negligible when compared with the benefits it may provide.
The elimination of symmetries has been extensively studied in CP and SAT [16,6]. The most well-known strategy for eliminating symmetries in SAT consists in adding symmetry breaking predicates (SBPs) to the CNF formula [6]. SBPs are added to the formula before the search starts. The symmetries may be identified for each specific problem, in which case they must be identified when creating the encoding. Alternatively, one may give the formula to a specialized tool that detects all the symmetries [1]. The resulting SBPs are intended to merge symmetric assignments into equivalence classes. In case all symmetries are broken, only one assignment, instead of n assignments, may satisfy a set of constraints, where n is the number of elements in a given equivalence class.
Other approaches include remodeling the problem [17] and breaking symmetries during search [9]. Remodeling the problem implies creating a different encoding, e.g. obtained by defining a different set of variables, in order to create a problem with fewer symmetries or even none at all. Alternatively, the search procedure may be adapted to add SBPs as the search proceeds, ensuring that any assignment symmetric to one already considered will not be explored in the future, or to check that symmetric equivalent assignments have not yet been visited.
Currently available tools for detecting and breaking symmetries for a given formula are based on group theory. From each formula a group is extracted, where a group is a set of permutations. A permutation is a one-to-one correspondence between a set and itself. Each symmetry defines a permutation on a set of literals. In practice, each permutation is represented by a product of disjoint cycles. Each cycle (l 1 l 2 . . . l m ) with size m stands for the permutation that maps l i on l i+1 (with 1 ≤ i ≤ m − 1) and l m on l 1 . Applying a permutation to a formula will produce exactly the same formula.
Example 1. Consider the following CNF formula:
ϕ = (x 1 ∨ x 2 ) ∧ (¬x 1 ∨ x 2 ) ∧ (x 2 ) ∧ (x 3 ∨ x 2 ) ∧ (¬x 3 ∨ x 2 )
The permutations identified for ϕ are (x 3 ¬x 3 ) and (x 1 x 3 )(¬x 1 ¬x 3 ). (The permutation (x 1 ¬x 1 ) is implicit.)
The formula resulting from the permutation (x 3 ¬x 3 ) is obtained by replacing every occurrence of x 3 by ¬x 3 and every occurrence of ¬x 3 by x 3 . Clearly, the obtained formula is equal to the original formula. The same happens when applying the permutation (x 1 x 3 )(¬x 1 ¬x 3 ): replacing x 1 by x 3 , x 3 by x 1 , ¬x 1 by ¬x 3 and ¬x 3 by ¬x 1 produces the same formula.
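As a small sanity check of this definition, the sketch below (ours, not the paper's tooling) encodes the Example 1 clauses as DIMACS-style integers, with the negated literals as reconstructed above (-k stands for ¬x k ), applies each permutation as a literal map, and verifies that the clause set is unchanged.

```python
def apply_perm(perm, clauses):
    # apply a literal permutation, given as a (partial) map over integer literals
    return {frozenset(perm.get(l, l) for l in c) for c in clauses}

# Example 1 formula with the reconstructed negations; k is x_k, -k its complement
phi = {frozenset(c) for c in ([1, 2], [-1, 2], [2], [3, 2], [-3, 2])}

sigma1 = {3: -3, -3: 3}                    # the cycle (x3 ¬x3)
sigma2 = {1: 3, 3: 1, -1: -3, -3: -1}      # the cycles (x1 x3)(¬x1 ¬x3)

assert apply_perm(sigma1, phi) == phi
assert apply_perm(sigma2, phi) == phi
print("both permutations map the formula onto itself")
```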
Symmetry Breaking for MaxSAT
This section describes how to apply symmetry breaking in MaxSAT. First, the construction process for the graph representing a CNF formula is briefly reviewed [6,1], as it will be modified later in this section. Afterwards, plain MaxSAT is considered. The next step is to address partial, weighted and weighted partial MaxSAT.
From CNF Formulas to Colored Graphs
Symmetry breaking for MaxSAT and variants requires a few modifications to the approach used for SAT [6,1]. This section summarizes the basic approach, which is then extended in the following sections.
Given a graph, the graph automorphism problem consists in finding isomorphic groups of edges and vertices with a one-to-one correspondence. In case of graphs with colored vertices, the correspondence is made between vertices with the same color. It is well-known that symmetries in SAT can be identified by reduction to a graph automorphism problem [6,1]. The propositional formula is represented as an undirected graph with colored vertices, such that the automorphism in the graph corresponds to a symmetry in the propositional formula. Given a propositional formula ϕ, a colored undirected graph is created as follows:
-For each variable x j ∈ ϕ add two vertices to represent x j andx j . All vertices associated with variables are colored with color 1; -For each variable x j ∈ ϕ add an edge between the vertices representing x j andx j ; -For each binary clause ω i = (l j ∨ l k ) ∈ ϕ, add an edge between the vertices representing l j and l k ; -For each non-binary clause ω i ∈ ϕ create a vertex colored with 2; -For each literal l j in a non-binary clause ω i , add an edge between the corresponding vertices.
Example 2. Figure 1 shows the colored undirected graph associated with the CNF formula of Example 1. Vertices with shape • represent color 1 and vertices with shape ⋄ represent color 2. Vertex 1 corresponds to x 1 , 2 to x 2 , 3 to x 3 , 4 to ¬x 1 , 5 to ¬x 2 , 6 to ¬x 3 and 7 to the unit clause (x 2 ). Edges 1-2, 2-3, 2-4 and 2-6 represent binary clauses and edges 1-4, 2-5 and 3-6 link complemented literals.
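The reduction just described is mechanical enough to sketch in a few lines. The fragment below is our own illustration (it uses DIMACS-style literals and a plain adjacency structure rather than the input format of any particular automorphism tool) and builds the colored graph for the Example 1 formula.

```python
def cnf_to_colored_graph(clauses, num_vars):
    colors, edges = {}, set()
    for v in range(1, num_vars + 1):
        colors[v] = colors[-v] = 1          # literal vertices get color 1
        edges.add(frozenset((v, -v)))       # edge linking x and its complement
    for i, clause in enumerate(clauses):
        if len(clause) == 2:                # binary clause: edge between its literals
            edges.add(frozenset(clause))
        else:                               # any other clause: its own vertex, color 2
            cv = ("clause", i)
            colors[cv] = 2
            edges.update(frozenset((cv, l)) for l in clause)
    return colors, edges

phi = [[1, 2], [-1, 2], [2], [3, 2], [-3, 2]]
colors, edges = cnf_to_colored_graph(phi, 3)
print(len(colors), "vertices,", len(edges), "edges")   # 7 vertices, 8 edges
```

The seven vertices match Figure 1; the eighth edge is the one joining the unit-clause vertex to its literal.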
Plain Maximum Satisfiability
Let ϕ represent the CNF formula of a MaxSAT instance. Moreover, let ϕ sbp be the CNF formula for the symmetry-breaking predicates obtained with a CNF symmetry tool (e.g. Shatter). All clauses in ϕ are effectively soft clauses, for which the objective is to maximize the number of satisfied clauses. In contrast, the clauses in ϕ sbp are hard clauses, which must necessarily be satisfied. As a result, the original MaxSAT problem is transformed into a partial MaxSAT problem, where ϕ denotes the soft clauses and ϕ sbp denotes the hard clauses. The solution of the partial MaxSAT problem corresponds to the solution of the original MaxSAT problem.
Example 3.
For the CNF formula of Example 1, the SBPs generated (by Shatter) are: ϕ sbp = (¬x 3 ) ∧ (¬x 1 ∨ x 3 ). As a result, the new instance of partial MaxSAT will be ϕ ′ = (ϕ h ∧ ϕ s ) = (ϕ sbp ∧ ϕ). Moreover, x 3 = 0 and x 1 = 0 are necessary assignments, and so variables x 1 and x 3 can be ignored when maximizing the number of satisfied soft clauses.
Observe that the hard clauses represented by ϕ sbp do not change the solution of the original MaxSAT problem. Indeed, the construction of the symmetry breaking predicates guarantees that the maximum number of satisfied soft clauses remains unchanged by the addition of the hard clauses. Proof: (Sketch) The proof follows from the fact that symmetries map models into models and non-models into non-models (see Proposition 2.1 in [6]). Consider the clauses as an ordered sequence ω 1 , . . . , ω m . Given a symmetry, a clause in position i will be mapped to a clause in another position j. Now, given any assignment, if the clause in position i is satisfied (unsatisfied), then by applying the symmetry, the clause in position j is now satisfied (unsatisfied). Thus the number of satisfied (unsatisfied) clauses is unchanged.
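The argument can be checked directly on the running example. The sketch below (ours) enumerates all assignments of the Example 1 formula and verifies that an assignment and its image under the permutation (x 1 x 3 )(¬x 1 ¬x 3 ) always satisfy the same number of clauses.

```python
from itertools import product

phi = [[1, 2], [-1, 2], [2], [3, 2], [-3, 2]]
sigma = {1: 3, 3: 1, -1: -3, -3: -1}       # the permutation (x1 x3)(¬x1 ¬x3)

def sat_count(assign, clauses):
    return sum(any((l > 0) == assign[abs(l)] for l in c) for c in clauses)

def permute_assignment(perm, assign):
    # literal perm(l) holds in the image exactly when l holds in the original
    image = dict(assign)
    for l, m in perm.items():
        if l > 0:
            image[abs(m)] = assign[l] if m > 0 else 1 - assign[l]
    return image

for bits in product((0, 1), repeat=3):
    a = dict(zip((1, 2, 3), bits))
    assert sat_count(a, phi) == sat_count(permute_assignment(sigma, a), phi)
print("satisfied-clause counts are invariant under the symmetry")
```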
Partial and Weighted Maximum Satisfiability
For partial MaxSAT, the generation of SBPs needs to be modified. The graph representation of the CNF formula must take into account the existence of hard and soft clauses, which must be distinguished by a graph automorphism algorithm. Symmetric states for problem instances with hard and soft clauses establish a correspondence either between hard clauses or between soft clauses. In other words, when applying a permutation, hard clauses can only be replaced by other hard clauses, and soft clauses by other soft clauses. In order to address this issue, the colored graph generation needs to be modified. In contrast to the MaxSAT case, binary clauses are not distinguished from other clauses, and are represented as vertices in the colored graph. Clauses can now have one of two colors. A vertex with color 2 is associated with each soft clause, and a vertex with color 3 is associated with each hard clause. This modification ensures that any identified automorphism guarantees that soft clauses correspond only to soft clauses, and hard clauses correspond only to hard clauses. Moreover, the procedure for the generation of SBPs from the groups found by a graph automorphism tool remains unchanged, and the SBPs can be added to the original instance as new hard clauses. The resulting instance is also an instance of partial MaxSAT. Correctness of this approach follows from the correctness of the plain MaxSAT case.
The solution for weighted MaxSAT and for weighted partial MaxSAT is similar to the partial MaxSAT case, but now clauses with different weights are represented by vertices with different colors. This guarantees that the groups found by the graph automorphism tool take into consideration the weight of each clause. Let {c 1 , c 2 , . . . , c k } denote the distinct clause weights in the CNF formula. Each clause of weight c i is associated with a vertex of color i + 1 in the colored graph. In case there exist hard clauses, an additional color k + 2 is used, and so each hard clause is represented by a vertex with color k + 2 in the colored graph. Associating distinct clause weights with distinct colors guarantees that the graph automorphism algorithm can only make the correspondence between clauses with the same weight. Moreover, the identified SBPs result in new hard clauses that are added to the original problem. For either weighted MaxSAT or weighted partial MaxSAT, the result is an instance of weighted partial MaxSAT. As before, correctness of this approach follows from the correctness of the plain MaxSAT case.
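A compact way to read this coloring convention is as a map from clause weights to colors. The helper below is our own sketch (the function name and encoding are assumptions), reproducing the rule that the i-th distinct weight receives color i + 1 and hard clauses receive color k + 2.

```python
def clause_colors(soft_weights, has_hard):
    # the i-th distinct weight (1-based i) gets color i+1, i.e. colors 2..k+1;
    # hard clauses, if any, get the extra color k+2; literal vertices keep color 1
    distinct = sorted(set(soft_weights))
    color_of = {w: i + 2 for i, w in enumerate(distinct)}
    hard_color = len(distinct) + 2 if has_hard else None
    return color_of, hard_color

# weights of the soft clauses in the Example 4 instance (1, 1 and 5), plus hard clauses
color_of, hard_color = clause_colors([1, 1, 5], has_hard=True)
print(color_of, hard_color)   # {1: 2, 5: 3} 4
```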
Example 4. Consider the following weighted partial MaxSAT instance:
ϕ = (x 1 ∨ x 2 , 1) ∧ (¬x 1 ∨ x 2 , 1) ∧ (x 2 , 5) ∧ (x 3 ∨ x 2 , 9) ∧ (¬x 3 ∨ x 2 , 9)
for which the last two clauses are hard. Figure 2 shows the colored undirected graph associated with the formula. Clauses with different weights are represented with different colors (shown in the figure with different vertex shapes). A graph automorphism algorithm can then be used to generate the symmetry breaking predicates ϕ sbp = (¬x 1 ) ∧ (¬x 3 ), consisting of two hard clauses. As a result, the assignments x 1 = 0 and x 3 = 0 become necessary. Table 1 summarizes the problem transformations described in this section, where MS represents plain MaxSAT, PMS represents partial MaxSAT, WMS represents weighted MaxSAT, and WPMS represents weighted partial MaxSAT. The use of SBPs introduces a number of hard clauses, and so the resulting problems are either partial MaxSAT or weighted partial MaxSAT.
Experimental Results
The experimental setup has been organized as follows. First, all the instances from the first and second MaxSAT evaluations (2006 and 2007) [3] were run. These results allowed selecting relevant benchmark families, for which symmetries occur and which require a non-negligible amount of time to be solved by both approaches (with or without SBPs). Afterwards, the instances for which both approaches aborted were removed from the tables of results. This resulted in selecting the hamming and the MANN instances for plain MaxSAT, the ii32 and again the MANN instances for partial MaxSAT, the c-fat500 instances for weighted MaxSAT and the dir and log instances for weighted partial MaxSAT.
Besides the instances that participated in the MaxSAT competition, we have included additional problem instances (hole, Urq and chnl). The hole instances refer to the well-known pigeon hole problem, the Urq instances represent randomized instances based on expander graphs and the chnl instances model the routing of wires in the channels of field-programmable integrated circuits. These instances refer to problems that can be naturally encoded as MaxSAT problems and are known to be highly symmetric [1]. The approach outlined above was also followed for selecting the instances to be included in the tables of results.
We have run different publicly available MaxSAT solvers, namely MINIMAXSAT, TOOLBAR and MAXSATZ. (MAXSATZ accepts only plain MaxSAT instances.) It has been observed that the behavior of MINIMAXSAT is similar to that of TOOLBAR and MAXSATZ, albeit in general more robust. For this reason, the results focus on MINIMAXSAT. Tables 2 and 3 provide the results obtained. Table 2 refers to plain MaxSAT instances and Table 3 refers to partial MaxSAT (PMS), weighted MaxSAT (WMS) and weighted partial MaxSAT (WPMS) instances. For each instance, the results shown include the number of clauses added as a result of SBPs (#ClsSbp), the time required for solving the original instance (OrigT), i.e. without SBPs, and the time required for breaking the symmetries plus the time required for solving the extended formula afterwards (SbpT). In practice, the time required for generating SBPs is negligible. The results were obtained on an Intel Xeon 5160 server (3.0 GHz, 1333 MHz FSB, 4 MB cache) running Red Hat Enterprise Linux WS 4.
The experimental results allow establishing the following conclusions:
-The inclusion of symmetry breaking is essential for solving a number of problem instances. We should note that all the plain MaxSAT instances in Table 2 for which MINIMAXSAT aborted are also aborted by TOOLBAR and MAXSATZ. After adding SBPs, all these instances become easy to solve for any of the solvers. For the aborted partial, weighted and weighted partial MaxSAT instances in Table 3 this is not always the case, since a few instances aborted by MINIMAXSAT could be solved by TOOLBAR without SBPs. However, the converse is also true, as there are instances that were initially aborted by TOOLBAR (although solved by MINIMAXSAT) that are solved by TOOLBAR after adding SBPs. Overall, the inclusion of SBPs should be considered when a hard problem instance is known to exhibit symmetries. This does not necessarily imply that after breaking symmetries the instance becomes trivial to solve, and there can be cases where the new clauses may degrade performance. However, in a significant number of cases, highly symmetric problems become much easier to solve after adding SBPs. In many of these cases the problem instances become trivial to solve.
Conclusions
This paper shows how symmetry breaking can be used in MaxSAT and in its most well-known variants, including partial MaxSAT, weighted MaxSAT, and weighted partial MaxSAT. Experimental results, obtained on representative instances from the MaxSAT evaluation [3] and practical instances [1], demonstrate that symmetry breaking allows solving problem instances that no state of the art MaxSAT solver could otherwise solve. For all problem instances considered, the computational effort of computing symmetries is negligible. Nevertheless, and as is the case with related work for SAT and PB constraints, symmetry breaking should be considered as an alternative problem solving technique, to be used when standard techniques are unable to solve a given problem instance.
The experimental results motivate additional work on symmetry breaking for MaxSAT. The construction of the colored graph may be improved by focusing on possible relations among the different clause weights. Moreover, the use of conditional symmetries could be considered [7,18].
| 3,715 |
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
The class @math , or Quantum Merlin-Arthur, consists of all languages that admit a proof protocol in which Merlin sends Arthur a polynomial-size quantum state @math , and then Arthur decides whether to accept or reject in quantum polynomial time. This class was introduced by Knill @cite_9 , Kitaev @cite_24 , and Watrous @cite_23 as a quantum analogue of @math . By now we know a reasonable amount about @math : for example, it allows amplification of success probabilities, is contained in @math , and has natural complete promise problems. (See Aharonov and Naveh @cite_19 for a survey.)
|
{
"abstract": [
"Introduction Classical computation Quantum computation Solutions Elementary number theory Bibliography Index.",
"In this article we introduce a new complexity class called PQMA_log(2). Informally, this is the class of languages for which membership has a logarithmic-size quantum proof with perfect completeness and soundness which is polynomially close to 1 in a context where the verifier is provided a proof with two unentangled parts. We then show that PQMA_log(2) = NP. For this to be possible, it is important, when defining the class, not to give too much power to the verifier. This result, when compared to the fact that QMA_log = BQP, gives us new insight on the power of quantum information and the impact of entanglement.",
"Does the notion of a quantum randomized or nondeterministic algorithm make sense, and if so, does quantum randomness or nondeterminism add power? Although reasonable quantum random sources do not add computational power, the discussion of quantum randomness naturally leads to several definitions of the complexity of quantum states. Unlike classical string complexity, both deterministic and nondeterministic quantum state complexities are interesting. A notion of is introduced for decision problems. This notion may be a proper extension of classical nondeterminism.",
"The additivity conjecture of quantum information theory implies that entanglement cannot, even in principle, help to funnel more classical information through a quantum-communication channel. A counterexample shows that this conjecture is false."
],
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_9",
"@cite_23"
],
"mid": [
"2084019889",
"2950395077",
"1627385003",
"1568529095"
]
}
|
The Power of Unentanglement
| 0 |
|
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
In 2003, Kobayashi, Matsumoto, and Yamakami @cite_20 defined a generalization of @math called @math . Here there are @math Merlins, who send Arthur @math quantum proofs @math respectively that are guaranteed to be unentangled with each other. (Thus @math .) Notice that in the classical case, this generalization is completely uninteresting: we have @math for all @math , since we can always simulate @math Merlins by a single Merlin who sends Arthur a concatenation of the @math proofs. In the quantum case, however, a single Merlin could cheat by entangling the @math proofs, and we know of no general way to detect such entanglement.
|
{
"abstract": [
"This paper introduces quantum multiple-Merlin''-Arthur proof systems in which Arthur receives multiple quantum proofs that are unentangled with each other. Although classical multi-proof systems are obviously equivalent to classical single-proof systems (i.e., usual Merlin-Arthur proof systems), it is unclear whether or not quantum multi-proof systems collapse to quantum single-proof systems (i.e., usual quantum Merlin-Arthur proof systems). This paper presents a necessary and sufficient condition under which the number of quantum proofs is reducible to two. It is also proved that, in the case of perfect soundness, using multiple quantum proofs does not increase the power of quantum Merlin-Arthur proof systems."
],
"cite_N": [
"@cite_20"
],
"mid": [
"2951054288"
]
}
|
The Power of Unentanglement
| 0 |
|
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
When we try to understand @math , we encounter at least three basic questions. First, do multiple quantum proofs ever actually help? That is, can we find some sort of evidence that @math for some @math ? Second, can @math protocols be amplified to exponentially small error? Third, are two Merlins the most we ever need? That is, does @math for all @math ? The second and third questions are motivated, in part, by an analogy to classical multi-prover proof systems ---where the Parallel Repetition Theorem of Raz @cite_14 and the @math theorem of Ben- @cite_18 turned out to be crucial for understanding the class @math .
|
{
"abstract": [
"Quite complex cryptographic machinery has been developed based on the assumption that one-way functions exist, yet we know of only a few possible such candidates. It is important at this time to find alternative foundations to the design of secure cryptography. We introduce a new model of generalized interactive proofs as a step in this direction. We prove that all NP languages have perfect zero-knowledge proof-systems in this model, without making any intractability assumptions. The generalized interactive-proof model consists of two computationally unbounded and untrusted provers, rather than one, who jointly agree on a strategy to convince the verifier of the truth of an assertion and then engage in a polynomial number of message exchanges with the verifier in their attempt to do so. To believe the validity of the assertion, the verifier must make sure that the two provers can not communicate with each other during the course of the proof process. Thus, the complexity assumptions made in previous work, have been traded for a physical separation between the two provers. We call this new model the multi-prover interactive-proof model, and examine its properties and applicability to cryptography.",
"We show that a parallel repetition of any two-prover one-round proof system (MIP(2,1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the total number of possible answers is logarithmic, which was recently proved to be almost the best possible [U. Feige and O. Verbitsky, Proc.11th Annual IEEE Conference on Computational Complexity, IEEE Computer Society Press, Los Alamitos, CA, 1996, pp. 70--76]."
],
"cite_N": [
"@cite_18",
"@cite_14"
],
"mid": [
"2080578129",
"1970259241"
]
}
|
The Power of Unentanglement
| 0 |
|
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
First, in their original paper on @math , @cite_20 proved that a positive answer to the second question implies a positive answer to the third. That is, if @math protocols can be amplified, then @math for all @math .
|
{
"abstract": [
"This paper introduces quantum multiple-Merlin''-Arthur proof systems in which Arthur receives multiple quantum proofs that are unentangled with each other. Although classical multi-proof systems are obviously equivalent to classical single-proof systems (i.e., usual Merlin-Arthur proof systems), it is unclear whether or not quantum multi-proof systems collapse to quantum single-proof systems (i.e., usual quantum Merlin-Arthur proof systems). This paper presents a necessary and sufficient condition under which the number of quantum proofs is reducible to two. It is also proved that, in the case of perfect soundness, using multiple quantum proofs does not increase the power of quantum Merlin-Arthur proof systems."
],
"cite_N": [
"@cite_20"
],
"mid": [
"2951054288"
]
}
|
The Power of Unentanglement
| 0 |
|
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
Second, Liu, Christandl, and Verstraete @cite_13 gave a natural problem from quantum chemistry, called @math , which is in @math but is not known to be in @math .
|
{
"abstract": [
"This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice, is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP qpoly is contained in PP poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP qpoly = ALL. Finally, we show that QCMA qpoly is contained in PP poly and that QMA rpoly = QMA poly."
],
"cite_N": [
"@cite_13"
],
"mid": [
"1560596808"
]
}
|
The Power of Unentanglement
| 0 |
|
0804.0802
|
2952808759
|
The class QMA(k), introduced by , consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k)=QMA(2) for k>=2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. First, we give a protocol by which a verifier can be convinced that a 3SAT formula of size n is satisfiable, with constant soundness, given O(sqrt(n)) unentangled quantum witnesses with O(log n) qubits each. Our protocol relies on the existence of very short PCPs. Second, we show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k)=QMA(2) for all k>=2. Third, we prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
|
Third, Blier and Tapp @cite_17 recently (and independently of us) gave an interesting @math protocol for an @math -complete problem, namely @math . In this protocol, Arthur verifies that an @math -vertex graph @math is @math -colorable, using two unentangled witnesses with only @math qubits each. There is a crucial caveat, though: if @math is not @math -colorable, then Arthur can only detect this with probability @math rather than constant probability. Indeed, if the soundness gap were constant rather than @math , then Blier and Tapp's protocol could presumably be scaled up by an exponential to show @math !
|
{
"abstract": [
"In this paper, we show that all languages in NP have logarithmic-size quantum proofs which can be verified provided that two unentangled copies are given. More formally, we introduce the complexity class QMAlog(2) and show that 3COL E QMAlog(2). To obtain this strong and surprising result we have to relax the usual requirements: the completeness is one but the soundness is 1-1 poly. Since the natural classical equivalent of QMAlog(2) is uninteresting (it would be equal to P), this result, like many others, stresses the fact that quantum information is fundamentally different from classical information. It also contributes to our understanding of entanglement since QMAlog = BQP[7]."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2115164346"
]
}
|
The Power of Unentanglement
| 0 |
|
0803.0653
|
2949518220
|
The existence of errors or inconsistencies in the configuration of security components, such as filtering routers and/or firewalls, may lead to weak access control policies -- potentially easy for unauthorized parties to evade. We present in this paper a proposal to create, manage, and deploy consistent policies in those components in an efficient way. To do so, we combine two main approaches. The first approach is the use of an aggregation mechanism that yields consistent configurations or signals inconsistencies. Through this mechanism we can fold existing policies of a given system and create a consistent and global set of access control rules -- easy to maintain and manage by using a single syntax. The second approach is the use of a refinement mechanism that guarantees the proper deployment of such a global set of rules into the system, yet free of inconsistencies.
|
However, from our point of view, their work does not establish clear semantics, and their concept of role becomes ambiguous, as we pointed out in @cite_10 . Another work based on policy refinement is the RBNS model @cite_16 . However, although the authors claim that their work is based on the RBAC model @cite_13 , it seems that they keep from this model only the concept of role. Indeed, the specification of network entities and of role and permission assignments is not rigorous and does not correspond to any realistic setting @cite_10 .
|
{
"abstract": [
"Security administration of large systems is complex, but it can be simplified by a role-based access control approach. This article explains why RBAC is receiving renewed attention as a method of security administration and review, describes a framework of four reference models developed to better understand RBAC and categorizes different implementations, and discusses the use of RBAC to manage itself.",
"",
"Current firewall configuration languages have no well founded semantics. Each firewall implements its own algorithm that parses specific proprietary languages. The main consequence is that network access control policies are difficult to manage and most firewalls are actually wrongly configured. In this paper, we present an access control language based on XML syntax whose semantics is interpreted in the access control model Or-BAC (Organization Based Access Control). We show how to use this language to specify high-level network access control policies and then to automatically derive concrete access control rules to configure specific firewalls through a translation process. Our approach provides clear semantics to network security policy specification, makes management of such policy easier for the administrator and guarantees portability between firewalls."
],
"cite_N": [
"@cite_13",
"@cite_16",
"@cite_10"
],
"mid": [
"2166602595",
"",
"1549053995"
]
}
|
Aggregating and Deploying Network Access Control Policies
|
In order to defend the resources of an information system against unauthorized actions, a security policy must be defined by the administrator of an information system, i.e. a set of rules stating what is permitted and what is prohibited in a system during normal operations. Once specified the complete set of prohibitions and permissions, the administrator must decide which security mechanisms to use in order to enforce the security policy. This enforcement consists in distributing the security rules expressed in this policy over different security components, such as filtering routers and firewalls. This implies cohesion of the security functions supplied by these components. Indeed, security rules deployed over different components must be consistent, addressing the same decisions under equivalent conditions, and not repeating the same actions more than once.
A first solution to ensure these requirements is to apply a formal security model to express network security policies. In [11], for example, an access control language based on XML syntax and supported by the access control model Or-BAC [1] is proposed for specifying access control meta-rules, which are then refined into different firewall configuration rules through XSLT transformations. In [14], another top-down proposal, based on the RBAC model [17], is suggested for the same purpose. However, although the use of formal models ensures cohesion, completeness, and optimization as built-in properties, administrators are in most cases reluctant to define a whole security policy from scratch, and expect instead to recycle existing configurations previously deployed over a given system.
A second solution to guarantee consistent and nonredundant firewall configurations consists in analyzing and fixing rules already deployed. In [13], for example, a taxonomy of conflicts in security policies is presented, and two main categories are proposed: (1) intra-firewall anomalies, which refer to those conflicts that might exist within the local set of rules of a given firewall; (2) inter-firewall anomalies, which refer to those conflicts that might exist between the configuration rules of different firewalls that match the same traffic. The authors in [13] propose, moreover, an audit mechanism in order to discover and warn about these anomalies. In [2,3], we pointed out some limitations of [13], and presented an alternative set of anomalies and audit algorithms that detect, report, and eliminate those intra- and inter-component inconsistencies existing in distributed security setups, where both firewalls and NIDSs are in charge of the network security policy.
The main drawback of the first solution, i.e., refinement processes such as [11,14], lies in the necessity of formally writing a global security policy from scratch, as well as a deep knowledge of a given formal model. This reason might explain why this solution is not yet widely used, and most of the time policies are simply deployed based on the expertise and flair of security administrators. The main drawback of the second solution, i.e., audit processes such as [13,2] for analyzing local and distributed security setups, lies in the lack of knowledge about the deployed policy from a global point of view, which is very helpful for maintenance and troubleshooting tasks.
In this paper we propose to combine both solutions to better guarantee the requirements specified for a given network access control policy. Our procedure consists of two main steps. In the first step, the complete set of local policies, already deployed over each firewall of a given system, is aggregated and a global security policy is derived. In a second step, it is then possible to update, analyze, and redeploy such a global security policy into several local policies, yet free of anomalies. We need, moreover, a preliminary step for retrieving all those details of the system's topology which might be necessary during the aggregation and deployment processes (cf. Section 2). The use of automatic network tools, such as [18], may allow us to automatically generate this information and properly manage any change within the system. The rest of this paper is organized as follows. We first present in Section 2 the formalism we use to specify filtering rules, and the network model we use to represent the topology of the system. We describe in Section 3 our mechanisms to aggregate and deploy firewall configuration rules, and prove the correctness of such mechanisms. We present some related work in Section 4, and close the paper in Section 5 with some conclusions and future work.
Rules, Topology and Anomalies
We recall in this section some of the definitions previously introduced in [2,3]. We first define a filtering rule in the form R_i : {cnd_i} → decision_i, where i is the relative position of the rule within the set of rules, decision_i is a boolean expression in {accept, deny}, and {cnd_i} is a conjunctive set of condition attributes (protocol, source, destination, and so on), such that {cnd_i} equals A_1 ∧ A_2 ∧ ... ∧ A_p, where p is the number of condition attributes of a given filtering rule.
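As a purely illustrative example (the concrete attribute values are ours, not taken from the paper), a rule such as R_1 : (protocol = tcp ∧ source ∈ 10.0.1.0/24 ∧ destination ∈ 10.0.2.0/24 ∧ dport = 80) → accept states that HTTP traffic from the first zone towards the second one must be accepted, while R_2 : (protocol = tcp ∧ source ∈ 10.0.1.0/24 ∧ destination ∈ 10.0.2.0/24) → deny, placed after R_1, would deny the remaining TCP traffic between the same two zones.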
We define now a set of functions to determine which firewalls of the system are crossed by a given packet, knowing its source and destination. Let F be a set of firewalls and let Z be a set of zones. We assume that the zones in Z are mutually disjoint, i.e., if z_i ∈ Z and z_j ∈ Z then z_i ∩ z_j = ∅. We define the predicates connected(f_1, f_2) (which holds whenever there exists at least one interface connecting firewall f_1 to firewall f_2) and adjacent(f, z) (which holds whenever the zone z is interfaced to firewall f). We then define a set of paths, P, as follows. If f ∈ F then [f] ∈ P is an atomic path. Similarly, if [p.f_1] ∈ P (where "." is a concatenation functor) and f_2 ∈ F, such that f_2 ∉ p and connected(f_1, f_2), then [p.f_1.f_2] ∈ P. Let us now define functions first, last, and tail from P into F such that if p is a path, then first(p) corresponds to the first firewall in the path, last(p) corresponds to the last firewall in the path, and tail(f, p) corresponds to the rest of the firewalls in the path after firewall f. We also define an order between paths as p_1 ≤ p_2, meaning that path p_1 is shorter than p_2 and that all the firewalls within p_1 are also within p_2. We define the function route such that p ∈ route(z_1, z_2) iff path p connects zone z_1 to zone z_2, i.e., p ∈ route(z_1, z_2) iff adjacent(first(p), z_1) and adjacent(last(p), z_2); and the minimal route (or MR for short), such that p ∈ MR(z_1, z_2) iff the following conditions hold: (1) p ∈ route(z_1, z_2); (2) there does not exist p′ ∈ route(z_1, z_2) such that p′ < p.
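Operationally, route and MR can be read as a path search over the firewall adjacency graph. The following sketch is ours (the topology, zone names and firewall names are hypothetical, and it is not the authors' implementation); it enumerates loop-free paths with a breadth-first search and then keeps only the minimal ones:

from collections import deque

# Hypothetical topology: firewall-to-firewall links and firewall-to-zone interfaces.
connected = {"f1": {"f2"}, "f2": {"f1", "f3"}, "f3": {"f2"}}
adjacent = {"f1": {"z1"}, "f2": {"z2"}, "f3": {"z3"}}

def routes(z1, z2):
    # All loop-free firewall paths whose first firewall touches z1 and whose last touches z2.
    found = []
    for start in (f for f, zs in adjacent.items() if z1 in zs):
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if z2 in adjacent[path[-1]]:
                found.append(path)
            for nxt in connected.get(path[-1], ()):
                if nxt not in path:            # keep paths loop-free
                    queue.append(path + [nxt])
    return found

def minimal_routes(z1, z2):
    # Keep the routes for which no strictly smaller route (in the order defined above) exists.
    rs = routes(z1, z2)
    return [p for p in rs if not any(len(q) < len(p) and set(q) <= set(p) for q in rs)]

print(minimal_routes("z1", "z3"))   # -> [['f1', 'f2', 'f3']]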
Let us finally close this section by overviewing the complete set of anomalies defined in our previous work [2,3]; a small illustrative sketch of the inter-firewall cases follows the list:
Intra-firewall anomalies
• Shadowing - A configuration rule R_i is shadowed in a set of configuration rules R if such a rule never applies, because all the packets that R_i may match are previously matched by another rule, or combination of rules, with higher priority.
• Redundancy - A configuration rule R_i is redundant in a set of configuration rules R if the following conditions hold: (1) R_i is not shadowed by any other rule or set of rules; (2) when removing R_i from R, the security policy does not change.
Inter-firewall anomalies
• Irrelevance - A configuration rule R_i is irrelevant in a set of configuration rules R if one of the following conditions holds: (1) both source and destination addresses are within the same zone; (2) the firewall is not within the minimal route that connects the source zone to the destination zone.
• Full/Partial-redundancy - A redundancy anomaly occurs between two firewalls if the firewall closest to the destination zone blocks (completely or partially) traffic that is already blocked by the first firewall.
• Full/Partial-shadowing - A shadowing anomaly occurs between two firewalls if the one closest to the destination zone does not block traffic that is already blocked by the first firewall.
• Full/Partial-misconnection - A misconnection anomaly occurs between two firewalls if the one closest to the source zone allows all the traffic, or just a part of it, that is denied by the second one.
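As a rough illustration of the three inter-firewall cases only (our own drastic simplification: rules are reduced to a source-zone/destination-zone/decision triple, and the two firewalls are assumed to lie, in this order, on the minimal route from the source to the destination; this is not the audit algorithm of [2,3]):

# fw_up is the firewall closest to the source zone, fw_down the one closest to the destination.
fw_up   = [("z1", "z2", "deny")]
fw_down = [("z1", "z2", "accept")]

def correlated(r1, r2):
    # In this toy model two rules correlate when they address the same zone pair.
    return r1[:2] == r2[:2]

for r_up in fw_up:
    for r_down in fw_down:
        if not correlated(r_up, r_down):
            continue
        if r_up[2] == "deny" and r_down[2] == "deny":
            print("inter-firewall redundancy:", r_down, "blocks traffic already blocked upstream")
        elif r_up[2] == "deny" and r_down[2] == "accept":
            print("inter-firewall shadowing:", r_down, "never sees the traffic blocked upstream")
        elif r_up[2] == "accept" and r_down[2] == "deny":
            print("inter-firewall misconnection:", r_up, "lets through traffic denied downstream")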
Proposed Mechanisms
The objective of our proposal is the following. From a set F of firewalls initially deployed over a set Z of zones, and if neither intra- nor inter-firewall anomalies apply over such a setup, we aim to derive a single global security policy setup R, also free of anomalies. This set of rules R can then be maintained and updated as a whole (these maintenance operations are not covered in the paper), as well as redeployed over the system through a further refinement process. We present in the following the main processes of our proposal.
Aggregation of Policies
Our aggregation mechanism works as follows. During an initial step (not covered in this paper) it gathers all those details of the system's topology which might be necessary during the rest of the stages. The use of network tools, such as [18], allows us to properly manage this information, such as the set F of firewalls, the set of configuration rules f[rules] of each firewall f ∈ F, the set Z of zones of the system, and the other topological details defined in Section 2. An analysis of intra-firewall anomalies is then performed within the first stage of the aggregation process, in order to discover and fix any possible anomaly within the local configuration of each firewall f ∈ F. In the next step, an analysis of inter-firewall anomalies is performed in parallel with the aggregation of policies into R. If an anomaly within the initial setup is discovered, an aggregation error warns the officer and the process quits. Conversely, if no inter-firewall anomalies are found, a global set of rules R is generated and returned as the result of the whole aggregation process. We present in Algorithm 1 our proposed aggregation process. The input data is a set F of firewalls whose configurations we want to fold into a global set of rules R. For reasons of clarity, we assume in our algorithm that one can access the elements of a set as a linked list through the operator element_i. We also assume that one can add new values to the list as with any other variable (element_i ← value), as well as both remove and initialize elements through the assignment of an empty set (element_i ← ∅). The internal order of elements in the linked list, moreover, preserves the relative ordering of elements.
The aggregation process consists of two main phases. During the first phase (cf. lines 2 and 3 of Algorithm 1), and through an iterative call to the auxiliary function policy-rewriting (cf. Algorithm 4), it analyzes the complete set F of firewalls, in order to discover and remove any possible intra-firewall anomaly. Thus, after this first stage, no useless rules remain in the local configuration of any firewall f ∈ F. We refer to Section 3.2 for a more detailed description of this function.
Algorithm 1: aggregation(F)

1   /* Phase 1 */
2   foreach f1 ∈ F do
3       policy-rewriting(f1[rules]);
4   /* Phase 2 */
5   R ← ∅;
6   i ← ∅;
7   foreach f1 ∈ F do
8       foreach r1 ∈ f1[rules] do
9           Zs ← {z ∈ Z | z ∩ source(r1) ≠ ∅};
10          Zd ← {z ∈ Z | z ∩ destination(r1) ≠ ∅};
11          foreach z1 ∈ Zs do
12              foreach z2 ∈ Zd do
13                  if (z1 = z2) or (f1 ∉ MR(z1, z2)) then
14                      aggregationError();
15                      return ∅;
16                  else if (r1[decision] = "accept") then
17                      foreach f2 ∈ MR(z1, z2) do
18                          f2rd ← ∅;
19                          f2rd ← {r2 ∈ f2 | r1 ∽ r2 ∧
20                                  r2[decision] = "deny"};
21                          if (¬empty(f2rd)) then
22                              aggregationError();
23                              return ∅;
24                          else
25                              f2ra ← ∅;
26                              f2ra ← {r2 ∈ f2 | r1 ∽ r2 ∧
27                                      r2[decision] = "accept"};
28                              foreach r2 ∈ f2ra do
29                                  Ri ← Ri ∪ r2;
30                                  Ri[source] ← z1;
31                                  Ri[destination] ← z2;
32                                  i ← (i + 1);
33                                  r2 ← ∅;
34                  else if (f1 = first(MR(z1, z2))) then
35                      f3r ← ∅;
36                      foreach f3 ∈ tail(f1, MR(z1, z2)) do
37                          f3r ← {r3 ∈ f3 | r1 ∽ r3} ∪ f3r;
38                      if (¬empty(f3r)) then
39                          aggregationError();
40                          return ∅;
41                  else
42                      Ri ← Ri ∪ r1;
43                      Ri[source] ← z1;
44                      Ri[destination] ← z2;
45                      i ← (i + 1);
46                      r1 ← ∅;

(The operator "∽" is used within Algorithm 1 to denote that two rules r_i and r_j are correlated, i.e., every attribute in r_i has a non-empty intersection with the corresponding attribute in r_j.)

During the second phase (cf. lines 5-51 of Algorithm 1), the aggregation of firewall configurations is performed as follows. For each permission configured in a firewall f ∈ F, the process folds the whole chain of permissions within the components on the minimal route from the source zone to the destination zone; and for each prohibition, it directly keeps such a rule, assuming that it belongs to the closest firewall to the source and that no more prohibitions are placed on the minimal route from the source zone to the destination zone. Moreover, while the aggregation of policies is being performed, an analysis of inter-firewall anomalies is also applied in parallel. Then, if any inter-firewall anomaly is detected during the aggregation of rules R ← aggregation(F), an error message is raised and the process quits.
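A much-simplified reading of this second phase (again our sketch, not the authors' code: rules are zone-pair/decision triples, each zone pair is assumed to have a single precomputed minimal route, and the anomaly-detection branches of Algorithm 1 are omitted):

# Hypothetical anomaly-free setup: per-firewall rules and one minimal route per zone pair.
firewalls = {
    "f1": [("z1", "z2", "accept"), ("z1", "z3", "deny")],
    "f2": [("z1", "z2", "accept")],
}
mr = {("z1", "z2"): ["f1", "f2"], ("z1", "z3"): ["f1", "f2"]}

def aggregate(firewalls, mr):
    global_rules = []
    for fw, rules in firewalls.items():
        for (src, dst, decision) in rules:
            route = mr[(src, dst)]
            if fw != route[0]:
                continue            # fold each chain only once, from the most-upstream firewall
            if decision == "deny":
                # A prohibition is kept directly; it must sit on the closest firewall to the source.
                global_rules.append((src, dst, "deny"))
            else:
                # A permission chain is folded into a single global permission,
                # assuming (as the algorithm checks) every firewall of the route allows it.
                global_rules.append((src, dst, "accept"))
    return global_rules

print(aggregate(firewalls, mr))
# -> [('z1', 'z2', 'accept'), ('z1', 'z3', 'deny')]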
Let us for example assume that, during the aggregation process, a filtering rule r_i ∈ f_i[rules] presents an inter-firewall irrelevance, i.e., r_i is a rule that applies to a source zone z_1 and a destination zone z_2 (such that s = z_1 ∩ source(r_i) ≠ ∅ and d = z_2 ∩ destination(r_i) ≠ ∅) and either z_1 and z_2 are the same zone, or firewall f_i is not in the path [f_1, f_2, ..., f_k] ∈ MR(z_1, z_2). In this case, we can observe that during the folding process specified by Algorithm 1, the statement of line 13, i.e., (z_1 = z_2) or (f_i ∉ MR(z_1, z_2)), becomes true and, then, the aggregation process finishes with an error and returns an empty set of rules (cf. statements of lines 14 and 15). Similarly, let us assume that r_i ∈ f_i[rules] presents an inter-firewall redundancy, i.e., r_i is a prohibition that applies to a source zone z_1 and a destination zone z_2 (such that s = z_1 ∩ source(r_i) ≠ ∅, d = z_2 ∩ destination(r_i) ≠ ∅, and [f_1, f_2, ..., f_k] ∈ MR(z_1, z_2)) and firewall f_i is not the first component in MR(z_1, z_2). In this case, we can observe that during the folding process specified by Algorithm 1, the statement of line 34, i.e., f_i = first(MR(z_1, z_2)), becomes false and, then, the aggregation process finishes with an error and returns an empty set of rules.
Let us now assume that r_i ∈ f_i[rules] presents an inter-firewall shadowing, i.e., r_i is a permission that applies to a source zone z_1 and a destination zone z_2 such that there exists an equivalent prohibition r_j that belongs to a firewall f_j which, in turn, is closer to the source zone z_1 in MR(z_1, z_2). In this case, we can observe that during the folding process specified by Algorithm 1, the statement of line 38 detects that, after a prohibition in the first firewall of MR(z_1, z_2), i.e., f_j = first(MR(z_1, z_2)), there is at least one permission r_i that correlates the same attributes. Then, the aggregation process finishes with an error and returns an empty set of rules. Let us finally assume that r_i ∈ f_i[rules] presents an inter-firewall misconnection, i.e., r_i is a prohibition that applies to a source zone z_1 and a destination zone z_2 such that there exists at least one permission r_j that belongs to a firewall f_j closer to the source zone z_1 in MR(z_1, z_2). In this case, we can observe that during the folding process specified by Algorithm 1, the statement of line 21 detects this anomaly and, then, the process finishes with an error and returns an empty set of rules.
It is straightforward then to conclude that, if no inter-firewall anomalies apply to any firewall f ∈ F, our aggregation process returns a global set of filtering rules R with the union of all the filtering rules previously deployed over F. It is still necessary to post-process R, in order to remove the redundancy among the permissions, i.e., accept rules, gathered during the aggregation process. In order to do so, the aggregation process calls, at the end of the second phase (cf. line 50 of Algorithm 1), the auxiliary function policy-rewriting (cf. Algorithm 4). We offer in the following a more detailed description of this function.
Policy Rewriting
We recall in this section our audit process to discover and remove rules that never apply or are redundant in local firewall policies [9,10]. The process is based on the analysis of relationships between the configuration rules of a local policy. Through a rewriting of rules, it derives from an initial set R an equivalent one Tr(R) completely free of dependencies between attributes, i.e., without either redundant or shadowed rules. The whole process is split into three main functions (cf. Algorithms 2, 3 and 4).
The first function, exclusion (cf. Algorithm 2), is an auxiliary process which performs the exclusion of attributes between two rules. It receives as input two rules, A and B, and returns a third rule, C, whose set of condition attributes is the exclusion of the set of conditions of A over B. We represent the attributes of each rule in the form Rule[cnd], as a boolean expression over p possible attributes (such as source, destination, protocol, ports, and so on). Similarly, we represent the decision of the rule in the form Rule[decision], as a boolean variable whose values are in {accept, deny}. Moreover, we use two extra elements for each rule, in the form Rule[shadowing] and Rule[redundancy], as two boolean variables in {true, false} to store the reason why a rule may disappear during the process.
The second function, testRedundancy (cf. Algorithm 3), is a boolean function in {true, false} which, in turn, applies the transformation exclusion (cf. Algorithm 2) over a set of configuration rules to check whether the first rule is redundant, i.e., whether it applies the same policy as the rest of the rules.
Finally, the third function, policy-rewriting (cf. Algorithm 4), performs the whole process of detecting and removing the complete set of intra-firewall anomalies. It receives as input a set R of rules, and performs the audit process in two phases. During the first phase, any possible shadowing between rules with different decision values is marked and removed by iteratively applying the function exclusion (cf. Algorithm 2). The resulting set of rules obtained after the execution of the first phase is then analyzed again during the second phase.
... (A_p ∩ B_p) = ∅) then
    C[cnd] ← C[cnd] ∪
        {(B_1 − A_1) ∧ B_2 ∧ ... ∧ B_p,
         (A_1 ∩ B_1) ∧ (B_2 − A_2) ∧ ... ∧ B_p,
         (A_1 ∩ B_1) ∧ (A_2 ∩ B_2) ∧ (B_3 − A_3) ∧ ... ∧ B_p,
         ...
         (A_1 ∩ B_1) ∧ ... ∧ (A_{p−1} ∩ B_{p−1}) ∧ (B_p − A_p)};
Each rule is first checked, through a call to function testRedundancy (cf. Algorithm 3), against those rules written after it that may apply the same decision to the same traffic. If such a test of redundancy becomes true, the rule is marked as redundant and then removed. Otherwise, its attributes are excluded from the remaining equivalent rules of lower priority. In this way, if any shadowing between rules with the same decision remained undetected during the first phase, it is then marked and removed. Based on the processes defined in Algorithms 2, 3, and 4, we can prove the following theorem: Theorem 1 Let R be a set of filtering rules and let Tr(R) be the resulting filtering rules obtained by applying Algorithm 4 to R. Then the following statements hold: (1) R and Tr(R) are equivalent; (2) ordering the rules in Tr(R) is no longer relevant; (3) Tr(R) is free from both shadowing and redundancy.
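To give an intuition of the exclusion step used by Algorithms 2-4, here is a small sketch of ours (attributes are reduced to plain Python sets, whereas real filtering rules use ranges of addresses and ports); it mirrors the decomposition displayed above, returning the parts of B's condition that are not covered by A:

def exclude(attrs_a, attrs_b):
    # attrs_a, attrs_b: lists of p attribute sets, e.g. [source hosts, destination ports].
    # Result: a list of conjunctions (each a list of p attribute sets) covering B minus A.
    if any(not (a & b) for a, b in zip(attrs_a, attrs_b)):
        return [attrs_b]                       # no overlap on some attribute: B stays untouched
    remainder = []
    for k in range(len(attrs_b)):
        prefix = [attrs_a[i] & attrs_b[i] for i in range(k)]   # already-intersected attributes
        diff = attrs_b[k] - attrs_a[k]                         # the k-th attribute is excluded
        if diff:
            remainder.append(prefix + [diff] + attrs_b[k + 1:])
    return remainder

# Hypothetical two-attribute rules (source hosts, destination ports):
A = [{"10.0.1.1", "10.0.1.2"}, {80}]
B = [{"10.0.1.2", "10.0.1.3"}, {80, 443}]
for conjunction in exclude(A, B):
    print(conjunction)
# -> [{'10.0.1.3'}, {80, 443}]   (sources of B not in A, with all of B's ports)
# -> [{'10.0.1.2'}, {443}]       (shared sources, with the ports of B not in A)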
Deployment of Rules
We finally present in Algorithm 5 our proposed refinement mechanism for the deployment of an updated global set of rules. The deployment strategy defined in the algorithm is the following. Let F be the set of firewalls that partitions the system into the set Z of zones. Let R be the set of configuration rules resulting from the maintenance of a given global set of rules obtained from the aggregation process presented in Section 3.1 (cf. Algorithm 1). Let r ∈ R be a configuration rule that applies to a source zone z_1 and a destination zone z_2, such that s = z_1 ∩ source(r) ≠ ∅ and d = z_2 ∩ destination(r) ≠ ∅. Let r′ be a rule identical to r except that source(r′) = s and destination(r′) = d. Let us finally assume that [f_1, f_2, ..., f_k] ∈ MR(z_1, z_2). Then, any rule r ∈ R is deployed over the system as follows:
• If r[decision] = accept then deploy a permission r ′ on every firewall on the minimal route from source s to destination d.
• If r[decision] = deny then deploy a single prohibition r′ on the most-upstream firewall (i.e., the closest firewall to the source) of the minimal route from source s to destination d. If such a firewall does not exist, then generate a deployment error message.
Algorithm 5: deployment(R, Z)

policy-rewriting(R);
foreach r1 ∈ R do
    Zs ← {z ∈ Z | z ∩ source(r1) ≠ ∅};
    Zd ← {z ∈ Z | z ∩ destination(r1) ≠ ∅};
    foreach z1 ∈ Zs do
        foreach z2 ∈ Zd do
            if r1[decision] = "accept" then
                foreach f1 ∈ MR(z1, z2) do
                    r′1 ← r1;
                    r′1[source] ← z1;
                    r′1[destination] ← z2;

It is straightforward now to prove that the deployment of a given set of rules R through Algorithm 5 is free of both intra- and inter-firewall anomalies (cf. Section 2). On the one hand, during the earliest stage of Algorithm 5, the complete set of rules in R is analyzed and, if necessary, fixed with our policy-rewriting process (cf. Section 3.2, Algorithm 4). Then, by Theorem 1, we can guarantee that neither shadowed nor redundant rules exist in R. Moreover, it also allows us to guarantee that the order between rules in R is not relevant. On the other hand, the use of the deployment strategy defined above allows us to guarantee that the resulting setup is free of inter-firewall anomalies. First, since each permission r_a in R opens a flow of permissions over all the firewalls within the minimal routes from the source to the destination pointed by r_a, and since any other rule r′ in R cannot match the same traffic that r_a matches, we can guarantee that neither inter-firewall shadowing nor inter-firewall misconnection can appear in the resulting setup. Second, since each prohibition r_d in R is deployed just once, in the closest firewall to the source pointed by r_d, and since any other rule r′ in R cannot match the same traffic that r_d matches, we can guarantee that no inter-firewall redundancy can appear in the resulting setup.
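In the same toy model used earlier (rules as zone-pair/decision triples and a single precomputed minimal route per zone pair), the deployment strategy of Algorithm 5 can be sketched as follows; again this is our simplification, not the authors' implementation:

# Hypothetical global policy and minimal routes.
global_rules = [("z1", "z2", "accept"), ("z1", "z3", "deny")]
mr = {("z1", "z2"): ["f1", "f2"], ("z1", "z3"): ["f1", "f2"]}

def deploy(global_rules, mr):
    configs = {}                                   # firewall name -> list of local rules
    for (src, dst, decision) in global_rules:
        route = mr[(src, dst)]
        if decision == "accept":
            # A permission is replicated on every firewall of the minimal route.
            for fw in route:
                configs.setdefault(fw, []).append((src, dst, "accept"))
        else:
            # A prohibition is installed once, on the most-upstream firewall only.
            configs.setdefault(route[0], []).append((src, dst, "deny"))
    return configs

print(deploy(global_rules, mr))
# -> {'f1': [('z1', 'z2', 'accept'), ('z1', 'z3', 'deny')], 'f2': [('z1', 'z2', 'accept')]}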
Although the authors in [4] pointed out this problem, claiming that they break down the initial set of rules into an equivalent set of rules free of overlaps, no specific algorithms have been provided for solving it. From our point of view, the proposal presented in [20] best addresses such a problem, although it also presents some limitations. For instance, we can easily find situations where the proposal presented in [20] reports partial redundancies instead of a single full redundancy. Moreover, neither [13] nor [20] address, as we do in this paper by extending the approach presented in [2,11], a folding process for combining both analysis and refinement strategies.
Conclusions
The existence of errors or anomalies in the configuration of network security components, such as filtering routers or firewalls, is very likely to degrade the security policy of a system [12]. This is a serious problem which must be solved since, if not handled correctly, it can allow unauthorized parties to gain control of such a system. We introduced in Section 1 two main strategies to set firewall configurations free of errors. The first approach is to apply a formal security model, such as the formal model we presented in [11], to express the security policy of the access control for the network, and to generate the specific syntax for each given firewall from this formal policy, for instance by using XSLT transformations from the formal policy to generate specific Netfilter configuration rules [19]. A second approach is to apply an analysis process to existing configurations, in order to detect configuration errors and to properly eliminate them. In [2,3], for instance, we presented an audit process based on this second strategy to set a distributed security scenario free of misconfiguration.
We presented in Section 3 how to combine both approaches in order to better guarantee the requirements specified for a given network access control policy. Thus, starting from a bottom-up approach, we can analyze existing configurations already deployed in a given system, in order to detect and correct potential anomalies or configuration errors. Once those setups have been verified, we offer the administrator a folding mechanism that aggregates the different configurations into a global security policy, so that the security policy as a whole can finally be expressed using a single formal model. The security officer can then perform maintenance tasks on this single point and then unfold the changes onto the existing security components of the system.
As work in progress, we are currently evaluating an implementation of the strategy presented in this paper that combines both the refinement process presented in [11] and the audit mechanism presented in [2,3] (both of them implemented through a scripting language as a web service [7]). Although this first research prototype demonstrates the effectiveness of our approach, more evaluations should be done to study the real impact of our proposal on the maintenance and deployment of complex production scenarios. We plan to address these evaluations and discuss the results in a forthcoming paper.
On the other hand, as future work, we are currently studying how to extend our approach to the case where the security architecture includes not only firewalls but also IDS/IPS and IPsec devices. Though there is a real similarity between the parameters of those devices' rules (as we partially show in [2,3] for the analysis of anomalies), more investigation has to be done in order to extend the approach presented in this paper. In parallel to this work, we are also considering extending our approach to the management of stateful policies.
| 4,954 |
0802.2543
|
2950054134
|
Unexpected increases in demand and most of all flash crowds are considered the bane of every web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches highlighting a more stable behavior and a better performance.
|
The application of the autonomic computing paradigm to the problem of overload control in web systems poses some key problems concerning the design of the monitoring module. The authors of @cite_12 propose a technique for learning dynamic patterns of web user behavior. A finite state machine representing the typical user behavior is constructed on the basis of past history and used for prediction and prefetching techniques. In paper @cite_5 the problem of delay prediction is analyzed on the basis of a learning activity exploiting passive measurements of query executions. Such predictive capability is exploited to enhance traditional query optimizers.
|
{
"abstract": [
"The rapid growth of the Internet and support for interoperability protocols has increased the number of Web accessible sources, WebSources. Current wrapper mediator architectures need to be extended with a wrapper cost model (WCM) for WebSources that can estimate the response time (delays) to access sources as well as other relevant statistics. In this paper, we present a Web prediction tool (WebPT), a tool that is based on learning using query feedback from WebSources. The WebPT uses dimensions time of day, day, and quantity of data, to learn response times from a particular WebSource, and to predict the expected response time (delay) for some query. Experiment data was collected from several sources, and those dimensions that were significant in estimating the response time were determined. We then trained the WebPT on the collected data, to use the three dimensions mentioned above, and to predict the response time, as well as a confidence in the prediction. We describe the WebPT learning algorithms, and report on the WebPT learning for WebSources. Our research shows that we can improve the quality of learning by tuning the WebPT features, e.g., training the WebPT using a logarithm of the input training data; including significant dimensions in the WebPT; or changing the ordering of dimensions. A comparison of the WebPT with more traditional neural network (NN) learning has been performed, and we briefly report on the comparison. We then demonstrate how the WebPT prediction of delay may be used by a scrambling enabled optimizer. A scrambling algorithm identifies some critical points of delay, where it makes a decision to scramble (modify) a plan, to attempt to hide the expected delay by computing some other part of the plan that is unaffected by the delay. We explore the space of real delay at a WebSource, versus the WebPT prediction of this delay, with respect to critical points of delay in specific plans. We identify those cases where WebPT overestimation or underestimation of the real delay results in a penalty in the scrambling enabled optimizer, and those cases where there is no penalty. Using the experimental data and WebPT learning, we test how good the WebPT is in minimizing these penalties.",
"Autonomics or self-reorganization becomes pertinent for web-sites serving a large number of users with highly varying workloads. An important component of self-adaptation is to model the behaviour of users and adapt accordingly. This paper proposes a learning-automata based technique for model discovery. User access patterns are used to construct an FSM model of user behaviour that in turn is used for prediction and prefetching. The proposed technique uses a generalization algorithm to classify behaviour patterns into a small number of generalized classes. It has been tested on both synthetic and live data-sets and has shown a prediction hit-rate of up to 89 on a real web-site."
],
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"2082493470",
"2098943191"
]
}
|
Self-* overload control for distributed web systems
| 0 |
|
0802.2543
|
2950054134
|
Unexpected increases in demand and most of all flash crowds are considered the bane of every web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches highlighting a more stable behavior and a better performance.
|
The cited proposals @cite_5 @cite_12 can partially contribute to improving the QoS of web systems but, differently from our work, none of them directly formulates a complete autonomic solution that at the same time gives directions on how to take measurements and how to make the corresponding admission control decisions for web cluster architectures.
|
{
"abstract": [
"The rapid growth of the Internet and support for interoperability protocols has increased the number of Web accessible sources, WebSources. Current wrapper mediator architectures need to be extended with a wrapper cost model (WCM) for WebSources that can estimate the response time (delays) to access sources as well as other relevant statistics. In this paper, we present a Web prediction tool (WebPT), a tool that is based on learning using query feedback from WebSources. The WebPT uses dimensions time of day, day, and quantity of data, to learn response times from a particular WebSource, and to predict the expected response time (delay) for some query. Experiment data was collected from several sources, and those dimensions that were significant in estimating the response time were determined. We then trained the WebPT on the collected data, to use the three dimensions mentioned above, and to predict the response time, as well as a confidence in the prediction. We describe the WebPT learning algorithms, and report on the WebPT learning for WebSources. Our research shows that we can improve the quality of learning by tuning the WebPT features, e.g., training the WebPT using a logarithm of the input training data; including significant dimensions in the WebPT; or changing the ordering of dimensions. A comparison of the WebPT with more traditional neural network (NN) learning has been performed, and we briefly report on the comparison. We then demonstrate how the WebPT prediction of delay may be used by a scrambling enabled optimizer. A scrambling algorithm identifies some critical points of delay, where it makes a decision to scramble (modify) a plan, to attempt to hide the expected delay by computing some other part of the plan that is unaffected by the delay. We explore the space of real delay at a WebSource, versus the WebPT prediction of this delay, with respect to critical points of delay in specific plans. We identify those cases where WebPT overestimation or underestimation of the real delay results in a penalty in the scrambling enabled optimizer, and those cases where there is no penalty. Using the experimental data and WebPT learning, we test how good the WebPT is in minimizing these penalties.",
"Autonomics or self-reorganization becomes pertinent for web-sites serving a large number of users with highly varying workloads. An important component of self-adaptation is to model the behaviour of users and adapt accordingly. This paper proposes a learning-automata based technique for model discovery. User access patterns are used to construct an FSM model of user behaviour that in turn is used for prediction and prefetching. The proposed technique uses a generalization algorithm to classify behaviour patterns into a small number of generalized classes. It has been tested on both synthetic and live data-sets and has shown a prediction hit-rate of up to 89 on a real web-site."
],
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"2082493470",
"2098943191"
]
}
|
Self-* overload control for distributed web systems
| 0 |
|
0802.2543
|
2950054134
|
Unexpected increases in demand and most of all flash crowds are considered the bane of every web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches highlighting a more stable behavior and a better performance.
|
The authors of @cite_18 also address a very important decision problem in the design of the monitoring module: the timing of performance control. They propose to adapt the time interval between successive decisions to the size of workload-dependent system parameters, such as the processor queue length. The dynamic adjustment of this interval is of primary importance for threshold-based policies, for which a constant time interval between decisions may lead to an oscillatory behavior in high load scenarios, as we show in Section . Simulations reveal that our algorithm is not subject to oscillations and shows very little dependence on the time interval between decisions.
|
{
"abstract": [
"How to effectively allocate system resource to meet the service level agreement (SLA) of Web servers is a challenging problem. In this paper, we propose an improved scheme for autonomous timing performance control in Web servers under highly dynamic traffic loads. We devise a novel delay regulation technique called queue length model based feedback control utilizing server internal state information to reduce response time variance in presence of bursty traffic. Both simulation and experimental studies using synthesized workloads and real-world Web traces demonstrate the effectiveness of the proposed approach"
],
"cite_N": [
"@cite_18"
],
"mid": [
"2153466231"
]
}
|
Self-* overload control for distributed web systems
| 0 |
|
0802.2543
|
2950054134
|
Unexpected increases in demand and most of all flash crowds are considered the bane of every web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches highlighting a more stable behavior and a better performance.
|
The problem of designing adaptive component-level thresholds is analyzed in @cite_17 for a general context of autonomic computing. The mechanism proposed in the paper consists in monitoring the threshold values in use by keeping track of false alarms with respect to possible violations of service level agreements. A regression model is used to fit the observed history. When a sufficiently confident fit is attained, the thresholds are calculated accordingly. On the contrary, if the required confidence is not attained, the thresholds are set to random values as if there were no history. A critical problem of this proposal is the fact that the most common threshold policies cause on/off behaviors that often result in unacceptable performance. Our proposal is instead based on a probabilistic approach and on a learning technique that dynamically creates a knowledge base for the online evaluation of the best decision to make, even for traffic situations that never occurred in the past history.
|
{
"abstract": [
"Threshold violations reported for system components signal undesirable conditions in the system. In complex computer systems, characterized by dynamically changing workload patterns and evolving business goals, the pre-computed performance thresholds on the operational values of performance metrics of individual system components are not available. This paper focuses on a fundamental enabling technology for performance management: automatic computation and adaptation of statistically meaningful performance thresholds for system components. We formally define the problem of adaptive threshold setting with controllable accuracy of the thresholds and propose a novel algorithm for solving it. Given a set of Service Level Objectives (SLOs) of the applications executing in the system, our algorithm continually adapts the per-component performance thresholds to the observed SLO violations. The purpose of this continual threshold adaptation is to control the average amounts of false positive and false negative alarms to improve the efficacy of the threshold-based management. We implemented the proposed algorithm and applied it to a relatively simple, albeit non-trivial, storage system. In our experiments we achieved a positive predictive value of 92 and a negative predictive value of 93 for component level performance thresholds"
],
"cite_N": [
"@cite_17"
],
"mid": [
"1875956924"
]
}
|
Self-* overload control for distributed web systems
| 0 |
|
0802.1362
|
2952078074
|
We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson's popular logarithmic market scoring rule market maker (LMSR). Our goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We examine both permutation combinatorics, where outcomes are permutations of objects, and Boolean combinatorics, where outcomes are combinations of binary events. We look at three restrictive languages that limit what traders can bet on. Even with severely limited languages, we find that LMSR pricing is @math -hard, even when the same language admits polynomial-time matching without the market maker. We then propose an approximation technique for pricing permutation markets based on a recent algorithm for online permutation learning. The connections we draw between LMSR pricing and the vast literature on online learning with expert advice may be of independent interest.
|
@cite_17 study the computational complexity of finding acceptable trades among a set of bids in a Boolean combinatorial market. In their setting, the center is an auctioneer who takes no risk, only matching together willing traders. They study a call market setting in which bids are collected together and processed once en masse. They show that the auctioneer matching problem is co-NP-complete when orders are divisible and @math -complete when orders are indivisible, but identify a tractable special case in which participants are restricted to bet on disjunctions of positive events or single negative events.
|
{
"abstract": [
"We consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race. We examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers. Requiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is n! and the number of subsets of orderings is 2n!. We propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each case. Subset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, \"horse A will finish in positions 4, 9, or 13-21\", or that a position will be taken by some subset of candidates, for example \"horse A, B, or D will finish in position 2\". For subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible. Pair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example \"horse A will beat horse B\". We prove that the auctioneer problem becomes NP-hard for pair betting. We identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time. We also show that a natural greedy algorithm gives a poor approximation for indivisible orders."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2072443840"
]
}
|
Complexity of Combinatorial Market Makers *
|
One way to elicit information is to ask people to bet on it. A prediction market is a common forum where people bet with each other or with a market maker [9,10,23,20,21]. A typical binary prediction market allows bets along one dimension, for example either for or against Hillary Clinton to win the 2008 US Presidential election. Thousands of such one-or small-dimensional markets exist today, each operating independently. For example, at the racetrack, betting on a horse to win does not directly impact the odds for that horse to finish among the top two, as logically it should, because the two bet types are handled separately.
A combinatorial prediction market is a central clearinghouse for handling logically-related bets defined on a combinatorial space. For example, the outcome space might be all n! possible permutations of n horses in a horse race, while bets are properties of permutations such as "horse A finishes 3rd" or "horse A beats horse B." Alternately, the outcome space might be all 2 50 possible state-by-state results for the Democratic candidate in the 2008 US Presidential election, while bets are Boolean statements such as "Democrat wins in Ohio and Florida but not in Texas."
Low liquidity marginalizes the value of prediction markets, and combinatorics only exacerbates the problem by dividing traders' attention among an exponential number of outcomes. A combinatorial matching market (the combinatorial generalization of a standard double auction) may simply fail to find any trades [11,4,5].
In contrast, an automated market maker is always willing to trade on every bet at some price. A combinatorial market maker implicitly or explicitly maintains prices across all (exponentially many) outcomes, thus allowing any trader at any time to place any bet, if transacted at the market maker's quoted price.
Hanson's [13,14] logarithmic market scoring rule market maker (LMSR) is becoming the de facto standard market maker for prediction markets. LMSR has a number of desirable properties, including bounded loss that grows logarithmically in the number of outcomes, infinite liquidity, and modularity that respects some independence relationships. LMSR is used by a number of companies, including inklingmarkets.com, Microsoft, thewsx.com, and yoonew.com, and is the subject of a number of research studies [7,15,8].
In this paper, we analyze the computational complexity of LMSR in several combinatorial betting scenarios. We examine both permutation combinatorics and Boolean combinatorics. We show that both computing instantaneous prices and computing payments of transactions are #P-hard in all cases we examine, even when we restrict participants to very simplistic and limited types of bets. For example, in the horse race analogy, if participants can place bets only of the form "horse A finishes in position N", then pricing these bets properly according to LMSR is #P-hard, even though matching up bets of the exact same form (with no market maker) is polynomial [4].
On a more positive note, we examine an approximation algorithm for LMSR pricing in permutation markets that makes use of powerful techniques from the literature on online learning with expert advice [3,19,12]. We briefly review this online learning setting, and examine the parallels that exist between LMSR pricing and standard algorithms for learning with expert advice. We then show how a recent algorithm for permutation learning [16] can be transformed into an approximation algorithm for pricing in permutation markets in which the market maker is guaranteed to have bounded loss.
RELATED WORK
Fortnow et al. [11] study the computational complexity of finding acceptable trades among a set of bids in a Boolean combinatorial market. In their setting, the center is an auctioneer who takes no risk, only matching together willing traders. They study a call market setting in which bids are collected together and processed once en masse. They show that the auctioneer matching problem is co-NP-complete when orders are divisible and \Sigma_2^p-complete when orders are indivisible, but identify a tractable special case in which participants are restricted to bet on disjunctions of positive events or single negative events.
Chen et al. [4] analyze the the auctioneer matching problem for betting on permutations, examining two bidding languages. Subset bets are bets of the form "candidate i finishes in positions x, y, or z" or "candidate i, j, or k finishes in position x." Pair bets are of the form "candidate i beats candidate j." They give a polynomial-time algorithm for matching divisible subset bets, but show that matching pair bets is NP-hard.
Hanson highlights the use of LMSR for Boolean combinatorial markets, noting that the subsidy required to run a combinatorial market on 2 n outcomes is no greater than that required to run n independent one-dimensional markets [13,14]. Hanson discusses the computational difficulty of maintaining LMSR prices on a combinatorial space, and proposes some solutions, including running market makers on overlapping subsets of events, allowing traders to synchronize the markets via arbitrage.
The work closest to our own is that of Chen, Goel, and Pennock [6], who study a special case of Boolean combinatorics in which participants bet on how far a team will advance in a single elimination tournament, for example a sports playoff like the NCAA college basketball tournament. They provide a polynomial-time algorithm for LMSR pricing in this setting based on a Bayesian network representation of prices. They also show that LMSR pricing is NP-hard for a very general bidding language. They suggest an approximation scheme based on Monte Carlo simulation or importance sampling.
We believe ours are the first non-trivial hardness results and worst-case bounded approximation scheme for LMSR pricing.
Logarithmic Market Scoring Rules
Proposed by Hanson [13,14], a logarithmic market scoring rule is an automated market maker mechanism that always maintains a consistent probability distribution over an outcome space Ω reflecting the market's estimate of the likelihood of each outcome. A generic LMSR offers a security corresponding to each possible outcome ω. The security associated to outcome ω pays off $1 if the outcome ω happens, and $0 otherwise. Let q = (q_ω)_{ω∈Ω} indicate the number of outstanding shares for all securities. The LMSR market maker starts the market with some initial shares of securities, q^0, which may be 0. The market keeps track of the outstanding shares of securities q at all times, and maintains a cost function
C(q) = b \log \sum_{\omega \in \Omega} e^{q_\omega / b},    (1)
and an instantaneous price function for each security
p_\omega(q) = \frac{e^{q_\omega / b}}{\sum_{\tau \in \Omega} e^{q_\tau / b}},    (2)
where b is a positive parameter related to the depth of the market. The cost function captures the total money wagered in the market, and C(q^0) reflects the market maker's maximum subsidy to the market. The instantaneous price function p_ω(q) gives the current cost of buying an infinitely small quantity of the security for outcome ω, and is the partial derivative of the cost function, i.e. p_ω(q) = ∂C(q)/∂q_ω. We use p = (p_ω(q))_{ω∈Ω} to denote the price vector. Traders buy and sell securities through the market maker. If a trader wishes to change the number of outstanding shares from q to \tilde{q}, the cost of the transaction that the trader pays is C(\tilde{q}) − C(q), which equals the integral of the price functions following any path from q to \tilde{q}.
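A direct transcription of Equations 1 and 2 (our illustrative sketch; the outcome labels, the value of b and the traded quantities are arbitrary):

import math

def lmsr_cost(q, b=100.0):
    # C(q) = b * log( sum_w exp(q_w / b) ); q maps each outcome to its outstanding shares.
    return b * math.log(sum(math.exp(qw / b) for qw in q.values()))

def lmsr_price(q, outcome, b=100.0):
    # p_w(q) = exp(q_w / b) / sum_t exp(q_t / b)
    z = sum(math.exp(qw / b) for qw in q.values())
    return math.exp(q[outcome] / b) / z

def trade_cost(q, outcome, shares, b=100.0):
    # Amount a trader pays to buy `shares` of `outcome`: C(q') - C(q).
    q_new = dict(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

q = {"A": 0.0, "B": 0.0, "C": 0.0}          # three-outcome market, no shares outstanding
print(lmsr_price(q, "A"))                    # 1/3 at the start
print(trade_cost(q, "A", 10.0))              # cost of buying 10 shares of outcome A
# With these starting shares the market maker's worst-case loss is C(q) = b * log(3).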
When the outcome space is large, it is often natural to offer only compound securities on sets of outcomes. A compound security S pays $1 if one of the outcomes in the set S ⊂ Ω occurs and $0 otherwise. Such a security is the combination of all securities ω ∈ S. Buying or selling q shares of the compound security S is equivalent to buying or selling q shares of each security ω ∈ S. Let Θ denote the set of all allowable compound securities. Denote the outstanding shares of all compound securities as Q = (q_S)_{S∈Θ}. The cost function can be written as
C(Q) = b \log \sum_{\omega \in \Omega} e^{\sum_{S \in \Theta : \omega \in S} q_S / b} = b \log \sum_{\omega \in \Omega} \prod_{S \in \Theta : \omega \in S} e^{q_S / b}.    (3)
The instantaneous price of a compound security S is computed as the sum of the instantaneous prices of the securities that compose the compound security S,
p_S(Q) = \frac{\sum_{\omega \in S} e^{q_\omega / b}}{\sum_{\tau \in \Omega} e^{q_\tau / b}} = \frac{\sum_{\omega \in S} e^{\sum_{S' \in \Theta : \omega \in S'} q_{S'} / b}}{\sum_{\tau \in \Omega} e^{\sum_{S' \in \Theta : \tau \in S'} q_{S'} / b}} = \frac{\sum_{\omega \in S} \prod_{S' \in \Theta : \omega \in S'} e^{q_{S'} / b}}{\sum_{\tau \in \Omega} \prod_{S' \in \Theta : \tau \in S'} e^{q_{S'} / b}}.    (4)
Logarithmic market scoring rules are so named because they are based on logarithmic scoring rules. A logarithmic scoring rule is a set of reward functions
\{ s_\omega(r) = a_\omega + b \log(r_\omega) : \omega \in \Omega \},
where r = (r_ω)_{ω∈Ω} is a probability distribution over Ω, and a_ω is a free parameter. An agent who reports r is rewarded s_ω(r) if outcome ω happens. Logarithmic scoring rules are proper in the sense that when facing them a risk-neutral agent will truthfully report his subjective probability distribution to maximize his expected reward. A LMSR market can be viewed as a sequential version of the logarithmic scoring rule, because by changing market prices from p to \tilde{p} a trader's net profit is s_ω(\tilde{p}) − s_ω(p) when outcome ω happens. At any time, a trader in a LMSR market is essentially facing a logarithmic scoring rule.
LMSR markets have many desirable properties. They offer consistent pricing for combinatorial events. As market maker mechanisms, they provide infinite liquidity by allowing trades at any time. Although the market maker subsidizes the market, he is guaranteed a worst-case loss no greater than C(q^0), which is b log n if |Ω| = n and the market starts with 0 shares of every security. In addition, it is a dominant strategy for a myopic risk-neutral trader to reveal his probability distribution truthfully since he faces a proper scoring rule. Even for forward-looking traders, truthful reporting is an equilibrium strategy when traders' private information is independent conditional on the true outcome [7].
Complexity of Counting
The well-known class NP contains questions that ask whether a search problem has a solution, such as whether a graph is 3-colorable. The class #P consists of functions that count the number of solutions of NP search questions, such as the number of 3-colorings of a graph.
A function g is #P-hard if, for every function f in #P, it is possible to compute f in polynomial time given an oracle for g. Clearly oracle access to such a function g could additionally be used to solve any NP problem, but in fact one can solve much harder problems too. Toda [24] showed that every language in the polynomial-time hierarchy can be solved efficiently with access to a #P-hard function.
To show a function g is a #P-hard function, it is sufficient to show that a function f reduces to g where f was previously known to be #P-hard. In this paper we use the following #P-hard functions to reduce from:
• Permanent: The permanent of an n-by-n matrix A = (ai,j) is defined as
perm(A) = \sum_{\sigma \in \Omega} \prod_{i=1}^{n} a_{i,\sigma(i)},    (5)
where Ω is the set of all permutations over {1, 2, ..., n}.
Computing the permanent of a matrix A containing 0-1 entries is #P-hard [25].
• #2-SAT: Counting the number of satisfying assignments of a formula given in conjunctive normal form with each clause having two literals is #P-hard [26].
• Counting Linear Extensions: Counting the number of total orders that extend a partial order given by a directed graph is #P-hard [2].
#P-hardness is the best we can achieve since all the functions in this paper can themselves be reduced to some other #P function.
LMSR FOR PERMUTATION BETTING
In this section we consider a particular type of market combinatorics in which the final outcome is a ranking over n competing candidates. Let the set of candidates be Nn = {1, . . . , n}, which is also used to represent the set of positions. In the setting, Ω is the set of all permutations over Nn. An outcome σ ∈ Ω is interpreted as the scenario in which each candidate i ends up in position σ(i). Chen et al. [4] propose two betting languages, subset betting and pair betting, for this type of combinatorics and analyze the complexity of the auctioneer's order matching problem for each.
In what follows we address the complexity of operating an LMSR market for both betting languages.
Subset Betting
As in Chen et al. [4], participants in a LMSR market for subset betting may trade two types of compound securities:
(1) a security of the form i|Φ where Φ ⊂ Nn is a subset of positions; and (2) a security Ψ|j where Ψ ⊂ Nn is a subset of candidates. The security i|Φ pays off $1 if candidate i stands at a position that is an element of Φ and $0 otherwise. Similarly, the security Ψ|j pays off $1 if any of the candidates in Ψ finishes at position j and $0 otherwise. For example, in a horse race, participants can trade securities of the form "horse A will come in the second, fourth, or fifth place", or "either horse B or horse C will come in the third place".
Note that owning one share of i|Φ is equivalent to owning one share of i|j for every j ∈ Φ, and similarly owning one share of Ψ|j is equivalent to owning one share of i|j for every i ∈ Ψ. We restrict our attention to a simplified market where the securities traded are of the form i|j . We show that even in this simplified market it is #P-hard for the market maker to provide the instantaneous security prices, evaluate the cost function, or calculate payments for transactions, which implies that running an LMSR market for the more general case of subset betting is also #P-hard.
Traders can trade securities i|j for all i ∈ Nn and j ∈ Nn with the market maker. Let q_{i,j} be the total number of outstanding shares for security i|j in the market. Let Q = (q_{i,j})_{i∈Nn, j∈Nn} denote the outstanding shares for all securities. The market maker keeps track of Q at all times. From Equation 4, the instantaneous price of security i|j is
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i) = j} \prod_{k=1}^{n} e^{q_{k,\sigma(k)} / b}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} e^{q_{k,\tau(k)} / b}},    (6)
and from Equation 3, the cost function for subset betting is
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{k=1}^{n} e^{q_{k,\sigma(k)} / b}.    (7)
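Equations 6 and 7 can of course be evaluated by brute force, enumerating all n! permutations; the point of the results below is that no essentially better exact method should be expected. A small sketch of ours (requires Python 3.8+ for math.prod; the quantities are arbitrary):

import itertools, math

def subset_cost(Q, b=1.0):
    # C(Q) = b * log( sum over permutations sigma of prod_k exp(Q[k][sigma(k)] / b) )
    n = len(Q)
    total = sum(math.prod(math.exp(Q[k][sigma[k]] / b) for k in range(n))
                for sigma in itertools.permutations(range(n)))
    return b * math.log(total)

def subset_price(Q, i, j, b=1.0):
    # Price of the security <i|j>: candidate i finishes in position j.
    n = len(Q)
    num = den = 0.0
    for sigma in itertools.permutations(range(n)):
        w = math.prod(math.exp(Q[k][sigma[k]] / b) for k in range(n))
        den += w
        if sigma[i] == j:
            num += w
    return num / den

# Hypothetical 3-candidate market with some shares bought on <0|0>.
Q = [[2.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(subset_price(Q, 0, 0), subset_cost(Q))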
We will show that computing instantaneous prices, the cost function, and/or payments of transactions for a subset betting market is #P-hard by a reduction from the problem of computing the permanent of a (0,1)-matrix.
Theorem 1. It is #P-hard to compute instantaneous prices in a LMSR market for subset betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. We show that if we could compute the instantaneous prices or the value of the cost function for subset betting for any quantities of shares purchased, then we could compute the permanent of any (0, 1)-matrix in polynomial time.
Let n be the number of candidates, A = (a_{i,j}) be any n-by-n (0,1)-matrix, and N = n! + 1. Note that \prod_{i=1}^{n} a_{i,\sigma(i)} is either 0 or 1. From Equation 5, perm(A) ≤ n! and hence perm(A) mod N = perm(A). We show how to compute perm(A) mod N from prices in subset betting markets in which q_{i,j} shares of i|j have been purchased, where q_{i,j} is defined by
q_{i,j} = \begin{cases} b \ln N & \text{if } a_{i,j} = 0, \\ b \ln(N+1) & \text{if } a_{i,j} = 1 \end{cases}    (8)
for any i ∈ Nn and any j ∈ Nn.
Let B = (bi,j) be a n-by-n matrix containing entries of the form bi,j = e q i,j /b . Note that bi,j = N if ai,j = 0 and bi,j = N + 1 if ai,j = 1. Thus, perm(A) mod N = perm(B) mod N . Thus, from Equation 6, the price for i|j in the market is
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i)=j} \prod_{k=1}^{n} b_{k,\sigma(k)}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j} \sum_{\sigma \in \Omega : \sigma(i)=j} \prod_{k \neq i} b_{k,\sigma(k)}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j} \cdot \mathrm{perm}(M_{i,j})}{\mathrm{perm}(B)}
where Mi,j is the matrix obtained from B by removing the ith row and jth column. Thus the ability to efficiently compute prices gives us the ability to efficiently compute perm(Mi,j)/perm(B).
It remains to show that we can use this ability to compute perm(B). We do so by telescoping a sequence of prices. Let B_i be the matrix B with the first i rows and columns removed. From above, we have perm(B_1)/perm(B) = p_{1,1}(Q)/b_{1,1}. Define Q_m to be the (n−m)-by-(n−m) matrix (q_{i,j})_{i>m, j>m}, that is, the matrix of quantities of securities (q_{i,j}) with the first m rows and columns removed. In a market with only n−m candidates, applying the same technique to the matrix Q_m, we can obtain perm(B_{m+1})/perm(B_m) from market prices for m = 1, ..., (n−2). Thus by computing n − 1 prices, we can compute
" perm(B1) perm(B) « " perm(B2) perm(B1) « · · · " perm(Bn−1) perm(Bn−2) « = " perm(Bn−1) perm(B)
« .
Noting that Bn−1 only has one element, we thus can compute perm(B) from market prices. Consequently, perm(B) mod N gives perm(A).
Therefore, given a n-by-n (0, 1)-matrix A, we can compute the permanent of A in polynomial time using prices in n − 1 subset betting markets wherein an appropriate quantity of securities have been purchased.
Additionally, note that
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{k=1}^{n} b_{k,\sigma(k)} = b \log \mathrm{perm}(B) .
Thus if we can compute C(Q), we can also compute perm(A).
As computing the permanent of a (0, 1)-matrix is #P-hard, both computing market prices and computing the cost function in a subset betting market are #P-hard.
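A quick numerical illustration of this reduction (an illustrative sketch, not part of the proof): for a tiny (0,1)-matrix one can verify by brute force that the cost function of the constructed market equals b log perm(B), and that perm(B) mod N recovers perm(A).

# Sanity check of the reduction on a tiny matrix: C(Q) = b log perm(B) and
# perm(A) = perm(B) mod N. Illustrative only; the permanent is computed by brute force.
from itertools import permutations
from math import log, factorial, prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

A = [[1, 0], [1, 1]]                       # a small (0,1)-matrix with perm(A) = 1
n, b = len(A), 1.0
N = factorial(n) + 1
B = [[N + A[i][j] for j in range(n)] for i in range(n)]   # b_{i,j} = e^{q_{i,j}/b}
C_of_Q = b * log(permanent(B))             # cost function of the constructed market
assert permanent(B) % N == 1               # recovers perm(A)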
Corollary 2. Computing the payment of a transaction in a LMSR for subset betting is #P-hard.
Proof. Suppose the market maker starts the market with 0 shares of every security. Denote by Q^0 the initial quantities of all securities. If the market maker can compute the payment C(Q̃) − C(Q) for any quantities Q̃ and Q, it can in particular compute C(Q) − C(Q^0) for any Q. As C(Q^0) = b log n!, the market maker is then able to compute C(Q). According to Theorem 1, computing the payment of a transaction is therefore #P-hard.
Pair Betting
In contrast to subset betting, where traders bet on absolute positions for a candidate, pair betting allows traders to bet on the relative position of a candidate with respect to another. More specifically, traders buy and sell securities of the form i > j , where i and j are candidates. The security pays off $1 if candidate i ranks higher than candidate j (i.e., σ(i) < σ(j) where σ is the final ranking of candidates) and $0 otherwise. For example, traders may bet on events of the form "horse A beats horse B", or "candidate C receives more votes than candidate D".
As for subset betting, the current state of the market is determined by the total number of outstanding shares for all securities. Let qi,j denote the number of outstanding shares for i > j . Applying Equations 3 and 4 to the present context, we find that the instantaneous price of the security i, j is given by
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i) < \sigma(j)} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q_{i',j'}/b}}{\sum_{\tau \in \Omega} \prod_{i',j' : \tau(i') < \tau(j')} e^{q_{i',j'}/b}} ,    (9)
and the cost function for pair betting is
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{i,j : \sigma(i) < \sigma(j)} e^{q_{i,j}/b} .    (10)
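As with subset betting, Equations 9 and 10 can be evaluated by explicit enumeration when n is tiny; the sketch below (function name and example quantities are illustrative assumptions) does exactly that.

# Brute-force evaluation of Equations 9 and 10 for a toy pair-betting market.
from itertools import permutations
from math import exp, log, prod

def pair_betting_cost_and_prices(q, b, n):
    """q[(i, j)] = outstanding shares of security <i > j>; returns (C(Q), price dict)."""
    def weight(sigma):
        # Untraded pairs have q = 0 and contribute a factor of 1.
        return prod(exp(q[(i, j)] / b) for (i, j) in q if sigma[i] < sigma[j])
    perms = list(permutations(range(n)))
    Z = sum(weight(s) for s in perms)
    prices = {(i, j): sum(weight(s) for s in perms if s[i] < s[j]) / Z for (i, j) in q}
    return b * log(Z), prices

q = {(0, 1): 2.0, (1, 2): 0.0, (0, 2): 0.0}   # shares on "0 beats 1", "1 beats 2", "0 beats 2"
cost, prices = pair_betting_cost_and_prices(q, b=1.0, n=3)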
We will show that computing prices, the value of the cost function, and/or payments of transactions for pair betting is #P-hard via a reduction from the problem of computing the number of linear extensions to any partial ordering.
Theorem 3. It is #P-hard to compute instantaneous prices in a LMSR market for pair betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. Let P be a partial order over {1, . . . , n}. We recall that a linear (or total) order T is a linear extension of P if whenever x ≤ y in P it also holds that x ≤ y in T . We denote by N (P ) the number of linear extensions of P .
Recall that (i, j) is a covering pair of P if i ≤ j in P and there does not exist ℓ ≠ i, j such that i ≤ ℓ ≤ j. Let {(i_1, j_1), (i_2, j_2), ..., (i_k, j_k)} be a set of covering pairs of P. Note that the covering pairs of a partially ordered set with n elements can be easily obtained in polynomial time, and that their number is less than n^2.
We will show that we can design a sequence of trades that, given a list of covering pairs for P , provide N (P ) through a simple function of market prices.
We consider a pair betting market over n candidates. We construct a sequence of k trading periods, and denote by q t i,j and p t i,j respectively the outstanding quantity of security i > j and its instantaneous price at the end of period t. At the beginning of the market, q 0 i,j = 0 for any i and j. At each period t, 0 < t ≤ k, b ln n! shares of security it > jt are purchased.
Let
N_t(i, j) = \sum_{\sigma \in \Omega : \sigma(i) < \sigma(j)} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^t_{i',j'}/b}, \quad \text{and} \quad D_t = \sum_{\sigma \in \Omega} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^t_{i',j'}/b} .
Note that according to Equation 9, p t it,jt = Nt(it, jt)/Dt.
For the first period, as only the security i1 > j1 is purchased, we get
D_1 = \sum_{\sigma \in \Omega : \sigma(i_1) < \sigma(j_1)} n! + \sum_{\sigma : \sigma(i_1) > \sigma(j_1)} 1 = \frac{(n!)^2 + n!}{2} .
We now show that D_k can be calculated inductively from D_1 using successive prices given by the market. During period t, b ln n! shares of i_t > j_t are purchased. Note also that the securities purchased are different at each period, so that q^s_{i_t,j_t} = 0 if s < t and q^s_{i_t,j_t} = b ln n! if s ≥ t. We have N_t(i_t, j_t) = N_{t-1}(i_t, j_t) e^{b \ln(n!)/b} = n! \, N_{t-1}(i_t, j_t).
Hence,
\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = \frac{n! \, D_{t-1}}{D_t} ,
and therefore,
D_k = (n!)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1 .
So D k can be computed in polynomial time in n from the prices.
Alternately, since the cost function at the end of period k can be written as C(Q) = b log D k , D k can also be computed efficiently from the cost function in period k.
We finally show that given D_k, we can compute N(P) in polynomial time. Note that at the end of the k trading periods, the securities purchased correspond to the covering pairs of P, such that e^{q^k_{i,j}/b} = n! if (i, j) is a covering pair of P and e^{q^k_{i,j}/b} = 1 otherwise. Consequently, for a permutation σ that satisfies the partial order P, meaning that σ(i) ≤ σ(j) whenever i ≤ j in P, we have
\prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^k_{i',j'}/b} = (n!)^k .
On the other hand, if a permutation σ does not satisfy P , it does not satisfy at least one covering pair, meaning that there is a covering pair of P , (i, j), such that σ(i) > σ(j), so that
\prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^k_{i',j'}/b} \leq (n!)^{k-1} .
Since the total number of permutations is n!, the total sum of all terms in the sum D_k corresponding to permutations that do not satisfy the partial ordering P is less than or equal to n!(n!)^{k-1} = (n!)^k, and is strictly less than (n!)^k unless the number of linear extensions is 0, while the total sum of all the terms corresponding to permutations that do satisfy P is N(P)(n!)^k. Thus N(P) = ⌊D_k/(n!)^k⌋.
We know that computing the number of linear extensions of a partial ordering is #P-hard. Therefore, both computing the prices and computing the value of the cost function in pair betting are #P-hard.
Corollary 4. Computing the payment of a transaction in a LMSR market for pair betting is #P-hard.
The proof is nearly identical to the proof of Corollary 2.
LMSR FOR BOOLEAN BETTING
We now examine an alternate type of market combinatorics in which the final outcome is a conjunction of event outcomes. Formally, let A be the event space, consisting of N individual events A_1, · · · , A_N, which may or may not be mutually independent. We define the state space Ω to be the set of all possible joint outcomes for the N events, so that its size is |Ω| = 2^N. A Boolean betting market allows traders to bet on Boolean formulas of these events and their negations. A security φ pays off $1 if the Boolean formula φ is satisfied by the final outcome and $0 otherwise. For example, a security A_1 ∨ A_2 pays off $1 if and only if at least one of events A_1 and A_2 occurs, while a security A_1 ∧ A_3 ∧ ¬A_5 pays off $1 if and only if the events A_1 and A_3 both occur and the event A_5 does not. Following the notational conventions of Fortnow et al. [11], we use ω ∈ φ to mean that the outcome ω satisfies the Boolean formula φ. Similarly, ω ∉ φ means that the outcome ω does not satisfy φ.
In this section, we focus our attention to LMSR markets for a very simple Boolean betting language, Boolean formulas of two events. We show that even when bets are only allowed to be placed on disjunctions or conjunctions of two events, it is still #P-hard to calculate the prices, the value of the cost function, and payments of transactions in a Boolean betting market operated by a LMSR market maker.
Let X be the set containing all elements of A and their negations. In other words, each event outcome Xi ∈ X is either Aj or ¬Aj for some Aj ∈ A. We begin by considering the scenario in which traders may only trade securities Xi ∨ Xj corresponding to disjunctions of any two event outcomes.
Let qi,j be the total number of shares purchased by all traders for the security Xi ∨ Xj , which pays off $1 in the event of any outcome ω such that ω ∈ (Xi ∨ Xj ) and $0 otherwise. From Equation 4, we can calculate the instantaneous price for the security Xi ∨ Xj for any two event outcomes Xi, Xj ∈ X as
p_{i,j}(Q) = \frac{\sum_{\omega \in \Omega : \omega \in (X_i \vee X_j)} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q_{i',j'}/b}}{\sum_{\tau \in \Omega} \prod_{1 \leq i' < j' \leq 2N : \tau \in (X_{i'} \vee X_{j'})} e^{q_{i',j'}/b}} .    (11)
Note that if Xi = ¬Xj , pi,j(Q) is always $1 regardless of how many shares of other securities have been purchased. According to Equation 3, the cost function is
C(Q) = b \log \sum_{\omega \in \Omega} \prod_{1 \leq i < j \leq 2N : \omega \in (X_i \vee X_j)} e^{q_{i,j}/b} .    (12)
Theorem 5 shows that computing prices and the value of the cost function in such a market is #P-hard, via a reduction from the #2-SAT problem. (This can also be proved via a reduction from counting linear extensions using a technique similar to the proof of Theorem 3, but the reduction to #2-SAT is more natural.)
Theorem 5. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to disjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting.
Proof. Suppose we are given a 2-CNF (Conjunctive Normal Form) formula
(X_{i_1} ∨ X_{j_1}) ∧ (X_{i_2} ∨ X_{j_2}) ∧ · · · ∧ (X_{i_k} ∨ X_{j_k})    (13)
with k clauses, where each clause is a disjunction of two literals (i.e. events and their negations). Assume any redundant terms have been removed.
The structure of the proof is similar to that of the pair betting case. We consider a Boolean betting market with N events, and show how to construct a sequence of trades that provides, through prices or the value of the cost function, the number of satisfying assignments for the 2-CNF formula.
We create k trading periods. At period t, a quantity b ln(2^N) of the security X_{i_t} ∨ X_{j_t} is purchased. We denote by p^t_{i,j} and q^t_{i,j} respectively the price and outstanding quantity of the security X_i ∨ X_j at the end of period t. Suppose the market starts with 0 shares of every security. Note that q^s_{i_t,j_t} = 0 if s < t and q^s_{i_t,j_t} = b ln(2^N) if s ≥ t. Let
N_t(i, j) = \sum_{\omega \in \Omega : \omega \in (X_i \vee X_j)} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q^t_{i',j'}/b}, \quad \text{and} \quad D_t = \sum_{\omega \in \Omega} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q^t_{i',j'}/b} .
Thus, p^t_{i_t,j_t} = N_t(i_t, j_t)/D_t.
Since only one security Xi 1 ∨ Xj 1 has been purchased in period 1, we get
D_1 = \sum_{\omega \in \Omega : \omega \in (X_{i_1} \vee X_{j_1})} 2^N + \sum_{\omega \in \Omega : \omega \notin (X_{i_1} \vee X_{j_1})} 1 = 3 \cdot 2^{2N-2} + 2^{N-2} .
We then show that D k can be calculated inductively from D1. As the only security purchased in period t is (Xi t ∨ Xj t ) in quantity b ln(2 N ), we obtain
N_t(i_t, j_t) = N_{t-1}(i_t, j_t) \, e^{b \ln(2^N)/b} = N_{t-1}(i_t, j_t) \, 2^N .
Therefore,
\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = \frac{2^N D_{t-1}}{D_t} ,
and we get
D_k = (2^N)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1 .
In addition, since the cost function at the end of period k can be expressed as
C(Q) = b log D k ,
D k can also be computed efficiently from the cost function in period k.
We now show that we can deduce from D k the number of satisfiable assignments for the 2-CNF formula (Equation 13). Indeed, each term in the sum
\sum_{\omega \in \Omega} \; \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q^k_{i',j'}/b}
that corresponds to an outcome ω that satisfies the formula is exactly 2^{kN}, as exactly k terms in the product are 2^N and the rest is 1. On the contrary, each term in the sum that corresponds to an outcome ω that does not satisfy the 2-CNF formula will be at most 2^{(k-1)N} since at most k − 1 terms in the product will be 2^N and the rest will be 1. Since the total number of outcomes is 2^N, the total sum of all terms corresponding to outcomes that do not satisfy (13) is less than or equal to 2^N · 2^{(k-1)N} = 2^{kN}, and is strictly less than 2^{kN} unless the number of satisfying assignments is 0. Thus the number of satisfying assignments is ⌊D_k/2^{kN}⌋.
We know that computing the number of satisfiable assignments of a 2-CNF formula is #P-hard. We have shown how to compute it in polynomial time using prices or the value of the cost function in a Boolean betting market of N events. Therefore, both computing prices and computing the value of the cost function in a Boolean betting market is #P-hard.
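The identity used at the end of the proof is easy to check numerically on a tiny instance. The sketch below (a hypothetical clause set over N = 3 events, positive literals only for brevity) builds D_k by brute force over all 2^N outcomes and confirms that ⌊D_k/2^{kN}⌋ equals the number of satisfying assignments.

# Brute-force check of #SAT = floor(D_k / 2^{kN}) for a tiny 2-CNF instance.
from itertools import product

N = 3
clauses = [(0, 1), (1, 2)]                 # clause (X_i or X_j); positive literals only here
k = len(clauses)

def satisfies(omega, clause):
    i, j = clause
    return omega[i] or omega[j]

# After period k, e^{q^k/b} = 2^N for each traded clause security and 1 otherwise, so an
# outcome contributes (2^N)^(number of clauses it satisfies) to D_k.
D_k = sum((2 ** N) ** sum(satisfies(w, c) for c in clauses)
          for w in product([False, True], repeat=N))
num_sat = sum(all(satisfies(w, c) for c in clauses)
              for w in product([False, True], repeat=N))
assert num_sat == D_k // (2 ** N) ** k     # here 5 == 337 // 64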
Corollary 6. Computing the payment of a transaction in a LMSR for Boolean betting is #P-hard when traders can only bet on disjunctions of two events.
The proof is nearly identical to the proof of Corollary 2.
If we impose that participants in a Boolean betting market may only trade securities corresponding to conjunctions of any two event outcomes, Ai ∧ Aj , the following Corollary gives the complexity results for this situation.
Corollary 7. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to conjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting, and #P-hard to compute the payment for a transaction.
Proof. Buying q shares of security Ai ∧ Aj is equivalent to selling q shares of ¬Ai ∨ ¬Aj . Thus if we can operate a Boolean betting market for securities of the type Ai ∧ Aj in polynomial time, we can also operate a Boolean betting market for securities of the type Ai ∨ Aj in polynomial time. The result then follows from Theorem 5 and Corollary 6.
AN APPROXIMATION ALGORITHM FOR SUBSET BETTING
There is an interesting relationship between logarithmic market scoring rule market makers and a common class of algorithms for online learning in an experts setting. In this section, we elaborate on this connection, and show how results from the online learning community can be used to prove new results about an approximation algorithm for subset betting.
The Experts Setting
We begin by describing the standard model of online learning with expert advice [19,12,27]. In this model, at each time t ∈ {1, · · · , T }, each expert i ∈ {1, · · · , n} receives a loss ℓi,t ∈ [0, 1]. The cumulative loss of expert i at time T is Li,T = P T t=1 ℓi,t. No statistical assumptions are made about these losses, and in general, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.
An algorithm A maintains a current weight wi,t for each expert i, where P n i=1 wi,t = 1. These weights can be viewed as distributions over the experts. The algorithm then receives its own instantaneous loss ℓA,t = P n i=1 wi,tℓi,t, which may be interpreted as the expected loss of the algorithm when choosing an expert according to the current distribution. The cumulative loss of A up to time T is then defined in the natural way as LA,T = P T t=1 ℓA,t = P T t=1 P n i=1 wi,tℓi,t. A common goal in such online learning settings is to minimize an algorithm's regret. Here the regret is defined as the difference between the cumulative loss of the algorithm and the cumulative loss of an algorithm that would have "chosen" the best expert in hindsight by setting his weight to 1 throughout all the periods. Formally, the regret is given by LA,T − min i∈{1,··· ,n} Li,T .
Many algorithms that have been analyzed in the online experts setting are based on exponential weight updates. These exponential updates allow the algorithm to quickly transfer weight to an expert that is outperforming the others. For example, in the Weighted Majority algorithm of Littlestone and Warmuth [19], the weight on each expert i is defined as
w_{i,t} = \frac{w_{i,t-1} e^{-\eta \ell_{i,t}}}{\sum_{j=1}^{n} w_{j,t-1} e^{-\eta \ell_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}} ,    (14)
where η is the learning rate, a small positive parameter that controls the magnitude of the updates. The following theorem gives a bound on the regret of Weighted Majority. For a proof of this result and a nice overview of learning with expert advice, see, for example, Cesa-Bianchi and Lugosi [3].
Theorem 8. Let A be the Weighted Majority algorithm with parameter η. After a sequence of T trials,
L_{A,T} - \min_{i \in \{1,\cdots,n\}} L_{i,T} \leq \eta T + \frac{\ln(n)}{\eta} .
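For concreteness, the exponential update of Equation 14 takes only a few lines to implement; the sketch below (illustrative function name and example data) computes the full weight history from a matrix of losses, using the same indexing convention as Equation 14.

# Weighted Majority / exponential weights, following Equation 14.
import numpy as np

def weighted_majority_weights(losses, eta):
    """losses: T x n array with entries in [0, 1]; returns the T x n matrix of w_{i,t}."""
    cumulative = np.cumsum(losses, axis=0)                 # cumulative losses L_{i,t}
    w = np.exp(-eta * cumulative)
    return w / w.sum(axis=1, keepdims=True)                # normalize over experts

# Example: 3 experts with an adversarial-looking loss sequence.
losses = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 0.5], [0.0, 1.0, 0.5]])
weights = weighted_majority_weights(losses, eta=0.1)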
Relationship to LMSR Markets
There is a manifest similarity between the expert weights used by Weighted Majority and the prices in the LMSR market. One might ask if the results from the experts setting can be applied to the analysis of prediction markets. Our answer is yes. In fact, it is possible to use Theorem 8 to rediscover the well-known bound of b ln(n) for the loss of an LMSR market maker with n outcomes.
Let ǫ be a limit on the number of shares that a trader may purchase or sell at each time step; in other words, if a trader would like to purchase or sell q shares, this purchase must be broken down into ⌈q/ǫ⌉ separate purchases of ǫ or less shares. Note that the total number of time steps T needed to execute such a sequence of purchases and sales is proportional to 1/ǫ.
We will construct a sequence of loss functions in a setting with n experts to induce a sequence of weight matrices that correspond to the price matrices of the LMSR market. At each time step t, let pi,t ∈ [0, 1] be the instantaneous price of security i at the end of period t, and let qi,t ∈ [−ǫ, ǫ] be the number of shares of security i purchased during period t. Let Qi,t be the total number of shares of security i that have been purchased up to time t. Now, let's define the instantaneous loss of each expert as ℓi,t = (2ǫ − qi,t)/(ηb). First notice that this loss is always in [0, 1] as long as η ≥ 2ǫ/b. From Equations 2 and 14, at each time t,
p_{i,t} = \frac{e^{Q_{i,t}/b}}{\sum_{j=1}^{n} e^{Q_{j,t}/b}} = \frac{e^{2\epsilon t/b - \eta L_{i,t}}}{\sum_{j=1}^{n} e^{2\epsilon t/b - \eta L_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}} = w_{i,t} .
Applying Theorem 8, and rearranging terms, we find that
\max_{i \in \{1,\cdots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T} \sum_{i=1}^{n} p_{i,t} q_{i,t} \leq \eta^2 T b + b \ln(n) .
The first term of the left-hand side is the maximum payment that the market maker needs to make, while the second term of the left-hand side captures the total money the market maker has received. The right-hand side is clearly minimized when η is set as small as possible. Setting η = 2ǫ/b gives us
\max_{i \in \{1,\cdots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T} \sum_{i=1}^{n} p_{i,t} q_{i,t} \leq \frac{4\epsilon^2 T}{b} + b \ln(n) .
Since T = O(1/ǫ), the term 4ǫ^2 T/b goes to 0 as ǫ becomes very small. Thus in the limit as ǫ → 0, we get the well-known result that the worst-case loss of the market maker is bounded by b ln(n).
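This correspondence is easy to verify numerically: simulate a random sequence of small trades, compute LMSR prices from the outstanding shares, and compare them with the exponential weights induced by the losses ℓ_{i,t} = (2ǫ − q_{i,t})/(ηb) with η = 2ǫ/b. A sketch (random data and parameter values are illustrative assumptions, not taken from the paper):

# Numerical check that LMSR prices equal the exponential weights under the loss
# construction above (random trade sequence, illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n, b, eps, T = 4, 10.0, 0.05, 200
eta = 2 * eps / b

q = rng.uniform(-eps, eps, size=(T, n))        # per-step purchases, |q_{i,t}| <= eps
Q = np.cumsum(q, axis=0)                       # outstanding shares Q_{i,t}
lmsr_prices = np.exp(Q / b) / np.exp(Q / b).sum(axis=1, keepdims=True)

losses = (2 * eps - q) / (eta * b)             # losses l_{i,t} = (2*eps - q_{i,t}) / (eta*b)
L = np.cumsum(losses, axis=0)
wm_weights = np.exp(-eta * L) / np.exp(-eta * L).sum(axis=1, keepdims=True)

assert np.allclose(lmsr_prices, wm_weights)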
Considering Permutations
Recently Helmbold and Warmuth [16] have shown that many results from the standard experts setting can be extended to a setting in which, instead of competing with the best expert, the goal is to compete with the best permutation over n items. Here each permutation suffers a loss at each time step, and the goal of the algorithm is to maintain a weighting over permutations such that the cumulative regret to the best permutation is small. It is infeasible to treat each permutation as an expert and run a standard algorithm since this would require updating n! weights at each time step. Instead, they show that when the loss has a certain structure (in particular, when the loss of a permutation is the sum of the losses of each of the n mappings), an alternate algorithm can be used that requires tracking only n 2 weights in the form of an n × n doubly stochastic matrix.
Formally, let W^t be a doubly stochastic matrix of weights maintained by the algorithm A at time t. Here W^t_{i,j} is the weight corresponding to the probability associated with item i being mapped into position j. Let L^t ∈ [0, 1]^{n×n} be the loss matrix at time t. The instantaneous loss of a permutation σ at time t is ℓ_{σ,t} = \sum_{i=1}^{n} L^t_{i,σ(i)}. The instantaneous loss of A is ℓ_{A,t} = \sum_{i=1}^{n} \sum_{j=1}^{n} W^t_{i,j} L^t_{i,j}, the matrix dot product between W^t and L^t. Notice that ℓ_{A,t} is equivalent to the expectation over permutations σ drawn according to W^t of ℓ_{σ,t}. The goal of the algorithm is to minimize the cumulative regret to the best permutation, L_{A,T} − \min_{σ∈Ω} L_{σ,T}, where the cumulative loss is defined as before.
Helmbold and Warmuth go on to present an algorithm called PermELearn that updates the weight matrix in two steps. First, it creates a temporary matrix W′, such that for every i and j, W′_{i,j} = W^t_{i,j} e^{−ηL^t_{i,j}}. It then obtains W^{t+1} by repeatedly rescaling the rows and columns of W′ until the matrix is doubly stochastic. Alternately rescaling rows and columns of a matrix M in this way is known as Sinkhorn balancing [22]. Normalizing the rows of a matrix is equivalent to pre-multiplying by a diagonal matrix, while normalizing the columns is equivalent to post-multiplying by a diagonal matrix. Sinkhorn [22] shows that this procedure converges to a unique doubly stochastic matrix of the form RMC, where R and C are diagonal matrices, if M is a positive matrix. Although there are cases in which Sinkhorn balancing does not converge in finite time, many results show that the number of Sinkhorn iterations needed to scale a matrix so that row and column sums are 1 ± ǫ is polynomial in 1/ǫ [1,17,18].
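A minimal Sinkhorn balancing routine looks as follows (a sketch; the tolerance and iteration cap are arbitrary illustrative choices):

# Sinkhorn balancing: alternately normalize rows and columns of a positive matrix
# until it is approximately doubly stochastic.
import numpy as np

def sinkhorn(M, tol=1e-9, max_iters=10_000):
    M = np.array(M, dtype=float)
    for _ in range(max_iters):
        M /= M.sum(axis=1, keepdims=True)      # row normalization (pre-multiply by a diagonal)
        M /= M.sum(axis=0, keepdims=True)      # column normalization (post-multiply by a diagonal)
        if (np.abs(M.sum(axis=1) - 1).max() < tol and
                np.abs(M.sum(axis=0) - 1).max() < tol):
            break
    return M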
The following theorem [16] bounds the cumulative loss of the PermELearn in terms of the cumulative loss of the best permutation.
Theorem 9. (Helmbold and Warmuth [16]) Let A be the PermELearn algorithm with parameter η. After a sequence of T trials,
L_{A,T} \leq \frac{n \ln(n) + \eta \min_{\sigma \in \Omega} L_{\sigma,T}}{1 - e^{-\eta}} .
Approximating Subset Betting
Using the PermELearn algorithm, it is simple to approximate prices for subset betting in polynomial time. We start with a n × n price matrix P 1 in which all entries are 1/n. As before, traders may purchase securities of the form i|Φ that pay off $1 if and only if horse or candidate i finishes in a position j ∈ Φ, or securities of the form Ψ|j that pay off $1 if and only if a horse or candidate i ∈ Ψ finishes in position j.
As in Section 6.2, each time a trader purchases or sells q shares, the purchase or sale is broken up into ⌈q/ǫ⌉ purchases or sales of ǫ shares or less, where ǫ > 0 is a small constant. (We remark that dividing purchases in this way has the negative effect of creating a polynomial time dependence on the quantity of shares purchased. However, this is not a problem if the quantity of shares bought or sold in each trade is bounded to start, which is a reasonable assumption. The additional time required is then linear only in 1/ǫ.) Thus we can treat the sequence of purchases as a sequence of T purchases of ǫ or less shares, where T = O(1/ǫ). Let q^t_{i,j} be the number of shares of securities i|Φ with j ∈ Φ or Ψ|j with i ∈ Ψ purchased at time t; then q^t_{i,j} ∈ [−ǫ, ǫ] for all i and j.
The price matrix is updated in two steps. First, a temporary matrix P′ is created where for every i and j,
P'_{i,j} = P^t_{i,j} e^{q^t_{i,j}/b} ,
where b > 0 is a parameter playing a similar role to b in Equation 2. Next, P′ is Sinkhorn balanced to the desired precision, yielding an (approximately) doubly stochastic matrix P^{t+1}.
The following lemma shows that updating the price matrix in this way results in a price matrix that is equivalent to the weight matrix of PermELearn with particular loss functions.
Lemma 10. The sequence of price matrices obtained by the approximation algorithm for subset betting on a sequence of purchases q t ∈ [−ǫ, ǫ] n×n is equivalent to the sequence of weight matrices obtained by running PermELearn(η) on a sequence of losses L t with
L^t_{i,j} = \frac{2\epsilon - q^t_{i,j}}{\eta b}
for all i and j, for any η ≥ 2ǫ/b.
Proof. First note that for any η ≥ 2ǫ/b, L t i,j ∈ [0, 1] for all t, i, and j, so the loss matrix is valid for PermELearn. P 1 and W 1 both contain all entries of 1/n. Assume that P t = W t . When updating weights for time t + 1, for all i and j,
P'_{i,j} = P^t_{i,j} e^{q^t_{i,j}/b} = W^t_{i,j} e^{q^t_{i,j}/b} = e^{2\epsilon/b} W^t_{i,j} e^{-2\epsilon/b + q^t_{i,j}/b} = e^{2\epsilon/b} W^t_{i,j} e^{-\eta L^t_{i,j}} = e^{2\epsilon/b} W'_{i,j} .
Since the matrix W ′ is a constant multiple of P ′ , the Sinkhorn balancing step will produce the same matrices.
Using this lemma, we can show that the difference between the amount of money that the market maker must distribute to traders in the worst case (i.e. when the true outcome is the outcome that pays off the most) and the amount of money collected by the market is bounded. We will see in the corollary below that as ǫ approaches 0, the worst case loss of the market maker approaches bn ln(n), regardless of the number of shares purchased. Unfortunately, if ǫ > 0, this bound can grow arbitrarily large.
Theorem 11. For any sequence of valid subset betting purchases q t where q t i,j ∈ [−ǫ, ǫ] for all t, i, and j, let P 1 , · · · , P T be the price matrices obtained by running the subset betting approximation algorithm. Then
\max_{\sigma \in S_n} \sum_{t=1}^{T} \sum_{i=1}^{n} q^t_{i,\sigma(i)} - \sum_{t=1}^{T} \sum_{i=1}^{n} \sum_{j=1}^{n} P^t_{i,j} q^t_{i,j} \leq \frac{2\epsilon/b}{1 - e^{-2\epsilon/b}} \, b n \ln(n) + \left( \frac{2\epsilon/b}{1 - e^{-2\epsilon/b}} - 1 \right) 2\epsilon n T .
| 7,945 |
0802.1362
|
2952078074
|
We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson's popular logarithmic market scoring rule market maker (LMSR). Our goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We examine both permutation combinatorics, where outcomes are permutations of objects, and Boolean combinatorics, where outcomes are combinations of binary events. We look at three restrictive languages that limit what traders can bet on. Even with severely limited languages, we find that LMSR pricing is @math -hard, even when the same language admits polynomial-time matching without the market maker. We then propose an approximation technique for pricing permutation markets based on a recent algorithm for online permutation learning. The connections we draw between LMSR pricing and the vast literature on online learning with expert advice may be of independent interest.
|
Hanson highlights the use of LMSR for Boolean combinatorial markets, noting that the subsidy required to run a combinatorial market on @math outcomes is no greater than that required to run @math independent one-dimensional markets @cite_4 @cite_3 . Hanson discusses the computational difficulty of maintaining LMSR prices on a combinatorial space, and proposes some solutions, including running market makers on overlapping subsets of events, allowing traders to synchronize the markets via arbitrage.
|
{
"abstract": [
"Information markets are markets created to aggregate information. Such markets usually estimate a probability distribution over the values of certain variables, via bets on those values. Combinatorial information markets would aggregate information on the entire joint probability distribution over many variables, by allowing bets on all variable value combinations. To achieve this, we want to overcome the thin market and irrational participation problems that plague standard information markets. Scoring rules avoid these problems, but instead suffer from opinion pooling problems in the thick market case. Market scoring rules avoid all these problems, by becoming automated market makers in the thick market case and simple scoring rules in the thin market case. Logarithmic versions have cost and modularity advantages. After introducing market scoring rules, we consider several design issues, including how to represent variables to support both conditional and unconditional estimates, how to avoid becoming a money pump via errors in calculating probabilities, and how to ensure that users can cover their bets, without needlessly preventing them from using previous bets as collateral for future bets.",
"In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of base event pairs, it costs no more to elicit estimates on all combinations of these base events."
],
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2120832833",
"1599628716"
]
}
|
Complexity of Combinatorial Market Makers *
|
One way to elicit information is to ask people to bet on it. A prediction market is a common forum where people bet with each other or with a market maker [9,10,23,20,21]. A typical binary prediction market allows bets along one dimension, for example either for or against Hillary Clinton to win the 2008 US Presidential election. Thousands of such one-or small-dimensional markets exist today, each operating independently. For example, at the racetrack, betting on a horse to win does not directly impact the odds for that horse to finish among the top two, as logically it should, because the two bet types are handled separately.
A combinatorial prediction market is a central clearinghouse for handling logically-related bets defined on a combinatorial space. For example, the outcome space might be all n! possible permutations of n horses in a horse race, while bets are properties of permutations such as "horse A finishes 3rd" or "horse A beats horse B." Alternately, the outcome space might be all 2 50 possible state-by-state results for the Democratic candidate in the 2008 US Presidential election, while bets are Boolean statements such as "Democrat wins in Ohio and Florida but not in Texas."
Low liquidity marginalizes the value of prediction markets, and combinatorics only exacerbates the problem by dividing traders' attention among an exponential number of outcomes. A combinatorial matching market-the combinatorial generalization of a standard double auction-may simply fail to find any trades [11,4,5].
In contrast, an automated market maker is always willing to trade on every bet at some price. A combinatorial market maker implicitly or explicitly maintains prices across all (exponentially many) outcomes, thus allowing any trader at any time to place any bet, if transacted at the market maker's quoted price.
Hanson's [13,14] logarithmic market scoring rule market maker (LMSR) is becoming the de facto standard market maker for prediction markets. LMSR has a number of desirable properties, including bounded loss that grows logarithmically in the number of outcomes, infinite liquidity, and modularity that respects some independence relationships. LMSR is used by a number of companies, including inklingmarkets.com, Microsoft, thewsx.com, and yoonew.com, and is the subject of a number of research studies [7,15,8].
In this paper, we analyze the computational complexity of LMSR in several combinatorial betting scenarios. We examine both permutation combinatorics and Boolean combinatorics. We show that both computing instantaneous prices and computing payments of transactions are #P-hard in all cases we examine, even when we restrict participants to very simplistic and limited types of bets. For example, in the horse race analogy, if participants can place bets only of the form "horse A finishes in position N", then pricing these bets properly according to LMSR is #P-hard, even though matching up bets of the exact same form (with no market maker) is polynomial [4].
On a more positive note, we examine an approximation algorithm for LMSR pricing in permutation markets that makes use of powerful techniques from the literature on online learning with expert advice [3,19,12]. We briefly review this online learning setting, and examine the parallels that exist between LMSR pricing and standard algorithms for learning with expert advice. We then show how a recent algorithm for permutation learning [16] can be transformed into an approximation algorithm for pricing in permutation markets in which the market maker is guaranteed to have bounded loss.
RELATED WORK
Fortnow et al. [11] study the computational complexity of finding acceptable trades among a set of bids in a Boolean combinatorial market. In their setting, the center is an auctioneer who takes no risk, only matching together willing traders. They study a call market setting in which bids are collected together and processed once en masse. They show that the auctioneer matching problem is co-NP-complete when orders are divisible and Σ p 2 -complete when orders are indivisible, but identify a tractable special case in which participants are restricted to bet on disjunctions of positive events or single negative events.
Chen et al. [4] analyze the auctioneer matching problem for betting on permutations, examining two bidding languages. Subset bets are bets of the form "candidate i finishes in positions x, y, or z" or "candidate i, j, or k finishes in position x." Pair bets are of the form "candidate i beats candidate j." They give a polynomial-time algorithm for matching divisible subset bets, but show that matching pair bets is NP-hard.
Hanson highlights the use of LMSR for Boolean combinatorial markets, noting that the subsidy required to run a combinatorial market on 2 n outcomes is no greater than that required to run n independent one-dimensional markets [13,14]. Hanson discusses the computational difficulty of maintaining LMSR prices on a combinatorial space, and proposes some solutions, including running market makers on overlapping subsets of events, allowing traders to synchronize the markets via arbitrage.
The work closest to our own is that of Chen, Goel, and Pennock [6], who study a special case of Boolean combinatorics in which participants bet on how far a team will advance in a single elimination tournament, for example a sports playoff like the NCAA college basketball tournament. They provide a polynomial-time algorithm for LMSR pricing in this setting based on a Bayesian network representation of prices. They also show that LMSR pricing is NP-hard for a very general bidding language. They suggest an approximation scheme based on Monte Carlo simulation or importance sampling.
We believe ours are the first non-trivial hardness results and worst-case bounded approximation scheme for LMSR pricing.
Logarithmic Market Scoring Rules
Proposed by Hanson [13,14], a logarithmic market scoring rule is an automated market maker mechanism that always maintains a consistent probability distribution over an outcome space Ω reflecting the market's estimate of the likelihood of each outcome. A generic LMSR offers a security corresponding to each possible outcome ω. The security associated to outcome ω pays off $1 if the outcome ω happens, and $0 otherwise. Let q = (qω)ω∈Ω indicate the number of outstanding shares for all securities. The LMSR market maker starts the market with some initial shares of securities, q 0 , which may be 0. The market keeps track of the outstanding shares of securities q at all times, and maintains a cost function
C(q) = b \log \sum_{\omega \in \Omega} e^{q_\omega/b} ,    (1)
and an instantaneous price function for each security
p_\omega(q) = \frac{e^{q_\omega/b}}{\sum_{\tau \in \Omega} e^{q_\tau/b}} ,    (2)
where b is a positive parameter related to the depth of the market. The cost function captures the total money wagered in the market, and C(q^0) reflects the market maker's maximum subsidy to the market. The instantaneous price function p_ω(q) gives the current cost of buying an infinitely small quantity of the security for outcome ω, and is the partial derivative of the cost function, i.e. p_ω(q) = ∂C(q)/∂q_ω. We use p = (p_ω(q))_{ω∈Ω} to denote the price vector. Traders buy and sell securities through the market maker. If a trader wishes to change the number of outstanding shares from q to q̃, the cost of the transaction that the trader pays is C(q̃) − C(q), which equals the integral of the price functions following any path from q to q̃.
When the outcome space is large, it is often natural to offer only compound securities on sets of outcomes. A compound security S pays $1 if one of the outcomes in the set S ⊂ Ω occurs and $0 otherwise. Such a security is the combination of all securities ω ∈ S. Buying or selling q shares of the compound security S is equivalent to buying or selling q shares of each security ω ∈ S. Let Θ denote the set of all allowable compound securities. Denote the outstanding shares of all compound securities as Q = (qS)S∈Θ. The cost function can be written as
C(Q) = b \log \sum_{\omega \in \Omega} e^{\sum_{S \in \Theta : \omega \in S} q_S/b} = b \log \sum_{\omega \in \Omega} \prod_{S \in \Theta : \omega \in S} e^{q_S/b} .    (3)
The instantaneous price of a compound security S is computed as the sum of the instantaneous prices of the securities that compose the compound security S,
p_S(Q) = \frac{\sum_{\omega \in S} e^{q_\omega/b}}{\sum_{\tau \in \Omega} e^{q_\tau/b}} = \frac{\sum_{\omega \in S} e^{\sum_{S' \in \Theta : \omega \in S'} q_{S'}/b}}{\sum_{\tau \in \Omega} e^{\sum_{S' \in \Theta : \tau \in S'} q_{S'}/b}} = \frac{\sum_{\omega \in S} \prod_{S' \in \Theta : \omega \in S'} e^{q_{S'}/b}}{\sum_{\tau \in \Omega} \prod_{S' \in \Theta : \tau \in S'} e^{q_{S'}/b}} .    (4)
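For intuition, a direct implementation of these formulas over an explicitly enumerated outcome space takes only a few lines; the sketch below is illustrative (class and method names are our own) and is only workable when |Ω| is small, which is exactly the limitation the rest of the paper studies.

# A minimal explicit-outcome LMSR market maker: cost (Equation 3), compound-security
# prices (Equation 4), and transaction payments C(Q_new) - C(Q_old). Illustrative only.
from math import exp, log

class LMSR:
    def __init__(self, outcomes, b):
        self.b = b
        self.q = {omega: 0.0 for omega in outcomes}    # outstanding shares per outcome

    def cost(self):
        return self.b * log(sum(exp(qw / self.b) for qw in self.q.values()))

    def price(self, S):
        """Instantaneous price of the compound security paying $1 if the outcome is in S."""
        Z = sum(exp(qw / self.b) for qw in self.q.values())
        return sum(exp(self.q[w] / self.b) for w in S) / Z

    def buy(self, S, shares):
        """Buy `shares` of compound security S; returns the payment charged to the trader."""
        before = self.cost()
        for w in S:
            self.q[w] += shares
        return self.cost() - before

# Example: two binary events, outcomes written as (A1, A2).
mm = LMSR([(a1, a2) for a1 in (0, 1) for a2 in (0, 1)], b=100.0)
payment = mm.buy({(1, 0), (1, 1)}, 10.0)       # bet that A1 occurs
p_A1 = mm.price({(1, 0), (1, 1)})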
Logarithmic market scoring rules are so named because they are based on logarithmic scoring rules. A logarithmic scoring rule is a set of reward functions
\{ s_\omega(r) = a_\omega + b \log(r_\omega) : \omega \in \Omega \} ,
where r = (r_ω)_{ω∈Ω} is a probability distribution over Ω, and a_ω is a free parameter. An agent who reports r is rewarded s_ω(r) if outcome ω happens. Logarithmic scoring rules are proper in the sense that when facing them a risk-neutral agent will truthfully report his subjective probability distribution to maximize his expected reward. A LMSR market can be viewed as a sequential version of the logarithmic scoring rule, because by changing market prices from p to p̃ a trader's net profit is s_ω(p̃) − s_ω(p) when outcome ω happens. At any time, a trader in a LMSR market is essentially facing a logarithmic scoring rule.
LMSR markets have many desirable properties. They offer consistent pricing for combinatorial events. As market maker mechanisms, they provide infinite liquidity by allowing trades at any time. Although the market maker subsidizes the market, he is guaranteed a worst-case loss no greater than C(q 0 ), which is b log n if |Ω| = n and the market starts with 0 share of every security. In addition, it is a dominant strategy for a myopic risk-neutral trader to reveal his probability distribution truthfully since he faces a proper scoring rule. Even for forward-looking traders, truthful reporting is an equilibrium strategy when traders' private information is independent conditional on the true outcome [7].
Complexity of Counting
The well-known class NP contains questions that ask whether a search problem has a solution, such as whether a graph is 3-colorable. The class #P consists of functions that count the number of solutions of NP search questions, such as the number of 3-colorings of a graph.
A function g is #P-hard if, for every function f in #P, it is possible to compute f in polynomial time given an oracle for g. Clearly oracle access to such a function g could additionally be used to solve any NP problem, but in fact one can solve much harder problems too. Toda [24] showed that every language in the polynomial-time hierarchy can be solved efficiently with access to a #P-hard function.
To show a function g is a #P-hard function, it is sufficient to show that a function f reduces to g where f was previously known to be #P-hard. In this paper we use the following #P-hard functions to reduce from:
• Permanent: The permanent of an n-by-n matrix A = (ai,j) is defined as
\mathrm{perm}(A) = \sum_{\sigma \in \Omega} \prod_{i=1}^{n} a_{i,\sigma(i)} ,    (5)
where Ω is the set of all permutations over {1, 2, ..., n}.
Computing the permanent of a matrix A containing 0-1 entries is #P-hard [25].
• #2-SAT: Counting the number of satisfying assignments of a formula given in conjunctive normal form with each clause having two literals is #P-hard [26].
• Counting Linear Extensions: Counting the number of total orders that extend a partial order given by a directed graph is #P-hard [2].
#P-hardness is the best we can achieve since all the functions in this paper can themselves be reduced to some other #P function.
LMSR FOR PERMUTATION BETTING
In this section we consider a particular type of market combinatorics in which the final outcome is a ranking over n competing candidates. Let the set of candidates be Nn = {1, . . . , n}, which is also used to represent the set of positions. In the setting, Ω is the set of all permutations over Nn. An outcome σ ∈ Ω is interpreted as the scenario in which each candidate i ends up in position σ(i). Chen et al. [4] propose two betting languages, subset betting and pair betting, for this type of combinatorics and analyze the complexity of the auctioneer's order matching problem for each.
In what follows we address the complexity of operating an LMSR market for both betting languages.
Subset Betting
As in Chen et al. [4], participants in a LMSR market for subset betting may trade two types of compound securities:
(1) a security of the form i|Φ where Φ ⊂ Nn is a subset of positions; and (2) a security Ψ|j where Ψ ⊂ Nn is a subset of candidates. The security i|Φ pays off $1 if candidate i stands at a position that is an element of Φ and $0 otherwise. Similarly, the security Ψ|j pays off $1 if any of the candidates in Ψ finishes at position j and $0 otherwise. For example, in a horse race, participants can trade securities of the form "horse A will come in the second, fourth, or fifth place", or "either horse B or horse C will come in the third place".
Note that owning one share of i|Φ is equivalent to owning one share of i|j for every j ∈ Φ, and similarly owning one share of Ψ|j is equivalent to owing one share of i|j for every i ∈ Ψ. We restrict our attention to a simplified market where securities traded are of the form i|j . We show that even in this simplified market it is #P-hard for the market maker to provide the instantaneous security prices, evaluate the cost function, or calculate payments for transactions, which implies that the running an LMSR market for the more general case of subset betting is also #P-hard.
Traders can trade securities i|j for all i ∈ Nn and j ∈ Nn with the market maker. Let qi,j be the total number of outstanding shares for security i|j in the market. Let Q = (qi,j)i∈N n ,j∈Nn denote the outstanding shares for all securities. The market maker keeps track of Q at all times. From Equation 4, the instantaneous price of security i|j is
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i)=j} \prod_{k=1}^{n} e^{q_{k,\sigma(k)}/b}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} e^{q_{k,\tau(k)}/b}} ,    (6)
and from Equation 3, the cost function for subset betting is
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{k=1}^{n} e^{q_{k,\sigma(k)}/b} .    (7)
We will show that computing instantaneous prices, the cost function, and/or payments of transactions for a subset betting market is #P-hard by a reduction from the problem of computing the permanent of a (0,1)-matrix.
Theorem 1. It is #P-hard to compute instantaneous prices in a LMSR market for subset betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. We show that if we could compute the instantaneous prices or the value of the cost function for subset betting for any quantities of shares purchased, then we could compute the permanent of any (0, 1)-matrix in polynomial time.
Let n be the number of candidates, A = (ai,j) be any n-byn (0,1)-matrix, and N = n! + 1. Note that Q n i=1 a i,σ(i) is either 0 or 1. From Equation 5, perm(A) ≤ n! and hence perm(A) mod N = perm(A). We show how to compute perm(A) mod N from prices in subset betting markets in which qi,j shares of i|j have been purchased, where qi,j is defined by
q_{i,j} = \begin{cases} b \ln N & \text{if } a_{i,j} = 0, \\ b \ln(N+1) & \text{if } a_{i,j} = 1 \end{cases}    (8)
for any i ∈ Nn and any j ∈ Nn.
Let B = (bi,j) be a n-by-n matrix containing entries of the form bi,j = e q i,j /b . Note that bi,j = N if ai,j = 0 and bi,j = N + 1 if ai,j = 1. Thus, perm(A) mod N = perm(B) mod N . Thus, from Equation 6, the price for i|j in the market is
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i)=j} \prod_{k=1}^{n} b_{k,\sigma(k)}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j} \sum_{\sigma \in \Omega : \sigma(i)=j} \prod_{k \neq i} b_{k,\sigma(k)}}{\sum_{\tau \in \Omega} \prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j} \cdot \mathrm{perm}(M_{i,j})}{\mathrm{perm}(B)}
where Mi,j is the matrix obtained from B by removing the ith row and jth column. Thus the ability to efficiently compute prices gives us the ability to efficiently compute perm(Mi,j)/perm(B).
It remains to show that we can use this ability to compute perm(B). We do so by telescoping a sequence of prices. Let B_i be the matrix B with the first i rows and columns removed. From above, we have perm(B_1)/perm(B) = p_{1,1}(Q)/b_{1,1}. Define Q_m to be the (n−m)-by-(n−m) matrix (q_{i,j})_{i>m, j>m}, that is, the matrix of quantities of securities (q_{i,j}) with the first m rows and columns removed. In a market with only n−m candidates, applying the same technique to the matrix Q_m, we can obtain perm(B_{m+1})/perm(B_m) from market prices for m = 1, ..., (n−2). Thus by computing n − 1 prices, we can compute
" perm(B1) perm(B) « " perm(B2) perm(B1) « · · · " perm(Bn−1) perm(Bn−2) « = " perm(Bn−1) perm(B)
« .
Noting that Bn−1 only has one element, we thus can compute perm(B) from market prices. Consequently, perm(B) mod N gives perm(A).
Therefore, given a n-by-n (0, 1)-matrix A, we can compute the permanent of A in polynomial time using prices in n − 1 subset betting markets wherein an appropriate quantity of securities have been purchased.
Additionally, note that
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{k=1}^{n} b_{k,\sigma(k)} = b \log \mathrm{perm}(B) .
Thus if we can compute C(Q), we can also compute perm(A).
As computing the permanent of a (0, 1)-matrix is #P-hard, both computing market prices and computing the cost function in a subset betting market are #P-hard.
Corollary 2. Computing the payment of a transaction in a LMSR for subset betting is #P-hard.
Proof. Suppose the market maker starts the market with 0 shares of every security. Denote by Q^0 the initial quantities of all securities. If the market maker can compute the payment C(Q̃) − C(Q) for any quantities Q̃ and Q, it can in particular compute C(Q) − C(Q^0) for any Q. As C(Q^0) = b log n!, the market maker is then able to compute C(Q). According to Theorem 1, computing the payment of a transaction is therefore #P-hard.
Pair Betting
In contrast to subset betting, where traders bet on absolute positions for a candidate, pair betting allows traders to bet on the relative position of a candidate with respect to another. More specifically, traders buy and sell securities of the form i > j , where i and j are candidates. The security pays off $1 if candidate i ranks higher than candidate j (i.e., σ(i) < σ(j) where σ is the final ranking of candidates) and $0 otherwise. For example, traders may bet on events of the form "horse A beats horse B", or "candidate C receives more votes than candidate D".
As for subset betting, the current state of the market is determined by the total number of outstanding shares for all securities. Let qi,j denote the number of outstanding shares for i > j . Applying Equations 3 and 4 to the present context, we find that the instantaneous price of the security i, j is given by
p_{i,j}(Q) = \frac{\sum_{\sigma \in \Omega : \sigma(i) < \sigma(j)} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q_{i',j'}/b}}{\sum_{\tau \in \Omega} \prod_{i',j' : \tau(i') < \tau(j')} e^{q_{i',j'}/b}} ,    (9)
and the cost function for pair betting is
C(Q) = b \log \sum_{\sigma \in \Omega} \prod_{i,j : \sigma(i) < \sigma(j)} e^{q_{i,j}/b} .    (10)
We will show that computing prices, the value of the cost function, and/or payments of transactions for pair betting is #P-hard via a reduction from the problem of computing the number of linear extensions to any partial ordering.
Theorem 3. It is #P-hard to compute instantaneous prices in a LMSR market for pair betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. Let P be a partial order over {1, . . . , n}. We recall that a linear (or total) order T is a linear extension of P if whenever x ≤ y in P it also holds that x ≤ y in T . We denote by N (P ) the number of linear extensions of P .
Recall that (i, j) is a covering pair of P if i ≤ j in P and there does not exist ℓ ≠ i, j such that i ≤ ℓ ≤ j. Let {(i_1, j_1), (i_2, j_2), ..., (i_k, j_k)} be a set of covering pairs of P. Note that the covering pairs of a partially ordered set with n elements can be easily obtained in polynomial time, and that their number is less than n^2.
We will show that we can design a sequence of trades that, given a list of covering pairs for P , provide N (P ) through a simple function of market prices.
We consider a pair betting market over n candidates. We construct a sequence of k trading periods, and denote by q t i,j and p t i,j respectively the outstanding quantity of security i > j and its instantaneous price at the end of period t. At the beginning of the market, q 0 i,j = 0 for any i and j. At each period t, 0 < t ≤ k, b ln n! shares of security it > jt are purchased.
Let
N_t(i, j) = \sum_{\sigma \in \Omega : \sigma(i) < \sigma(j)} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^t_{i',j'}/b}, \quad \text{and} \quad D_t = \sum_{\sigma \in \Omega} \prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^t_{i',j'}/b} .
Note that according to Equation 9, p t it,jt = Nt(it, jt)/Dt.
For the first period, as only the security i1 > j1 is purchased, we get
D_1 = \sum_{\sigma \in \Omega : \sigma(i_1) < \sigma(j_1)} n! + \sum_{\sigma : \sigma(i_1) > \sigma(j_1)} 1 = \frac{(n!)^2 + n!}{2} .
We now show that D_k can be calculated inductively from D_1 using successive prices given by the market. During period t, b ln n! shares of i_t > j_t are purchased. Note also that the securities purchased are different at each period, so that q^s_{i_t,j_t} = 0 if s < t and q^s_{i_t,j_t} = b ln n! if s ≥ t. We have N_t(i_t, j_t) = N_{t-1}(i_t, j_t) e^{b \ln(n!)/b} = n! \, N_{t-1}(i_t, j_t).
Hence,
\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = \frac{n! \, D_{t-1}}{D_t} ,
and therefore,
D_k = (n!)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1 .
So D k can be computed in polynomial time in n from the prices.
Alternately, since the cost function at the end of period k can be written as C(Q) = b log D k , D k can also be computed efficiently from the cost function in period k.
We finally show that given D_k, we can compute N(P) in polynomial time. Note that at the end of the k trading periods, the securities purchased correspond to the covering pairs of P, such that e^{q^k_{i,j}/b} = n! if (i, j) is a covering pair of P and e^{q^k_{i,j}/b} = 1 otherwise. Consequently, for a permutation σ that satisfies the partial order P, meaning that σ(i) ≤ σ(j) whenever i ≤ j in P, we have
\prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^k_{i',j'}/b} = (n!)^k .
On the other hand, if a permutation σ does not satisfy P , it does not satisfy at least one covering pair, meaning that there is a covering pair of P , (i, j), such that σ(i) > σ(j), so that
\prod_{i',j' : \sigma(i') < \sigma(j')} e^{q^k_{i',j'}/b} \leq (n!)^{k-1} .
Since the total number of permutations is n!, the total sum of all terms in the sum D_k corresponding to permutations that do not satisfy the partial ordering P is less than or equal to n!(n!)^{k-1} = (n!)^k, and is strictly less than (n!)^k unless the number of linear extensions is 0, while the total sum of all the terms corresponding to permutations that do satisfy P is N(P)(n!)^k. Thus N(P) = ⌊D_k/(n!)^k⌋.
We know that computing the number of linear extensions of a partial ordering is #P-hard. Therefore, both computing the prices and computing the value of the cost function in pair betting are #P-hard.
Corollary 4. Computing the payment of a transaction in a LMSR market for pair betting is #P-hard.
The proof is nearly identical to the proof of Corollary 2.
LMSR FOR BOOLEAN BETTING
We now examine an alternate type of market combinatorics in which the final outcome is a conjunction of event outcomes. Formally, let A be the event space, consisting of N individual events A_1, · · · , A_N, which may or may not be mutually independent. We define the state space Ω to be the set of all possible joint outcomes for the N events, so that its size is |Ω| = 2^N. A Boolean betting market allows traders to bet on Boolean formulas of these events and their negations. A security φ pays off $1 if the Boolean formula φ is satisfied by the final outcome and $0 otherwise. For example, a security A_1 ∨ A_2 pays off $1 if and only if at least one of events A_1 and A_2 occurs, while a security A_1 ∧ A_3 ∧ ¬A_5 pays off $1 if and only if the events A_1 and A_3 both occur and the event A_5 does not. Following the notational conventions of Fortnow et al. [11], we use ω ∈ φ to mean that the outcome ω satisfies the Boolean formula φ. Similarly, ω ∉ φ means that the outcome ω does not satisfy φ.
In this section, we focus our attention to LMSR markets for a very simple Boolean betting language, Boolean formulas of two events. We show that even when bets are only allowed to be placed on disjunctions or conjunctions of two events, it is still #P-hard to calculate the prices, the value of the cost function, and payments of transactions in a Boolean betting market operated by a LMSR market maker.
Let X be the set containing all elements of A and their negations. In other words, each event outcome Xi ∈ X is either Aj or ¬Aj for some Aj ∈ A. We begin by considering the scenario in which traders may only trade securities Xi ∨ Xj corresponding to disjunctions of any two event outcomes.
Let qi,j be the total number of shares purchased by all traders for the security Xi ∨ Xj , which pays off $1 in the event of any outcome ω such that ω ∈ (Xi ∨ Xj ) and $0 otherwise. From Equation 4, we can calculate the instantaneous price for the security Xi ∨ Xj for any two event outcomes Xi, Xj ∈ X as
p_{i,j}(Q) = \frac{\sum_{\omega \in \Omega : \omega \in (X_i \vee X_j)} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q_{i',j'}/b}}{\sum_{\tau \in \Omega} \prod_{1 \leq i' < j' \leq 2N : \tau \in (X_{i'} \vee X_{j'})} e^{q_{i',j'}/b}} .    (11)
Note that if Xi = ¬Xj , pi,j(Q) is always $1 regardless of how many shares of other securities have been purchased. According to Equation 3, the cost function is
C(Q) = b \log \sum_{\omega \in \Omega} \prod_{1 \leq i < j \leq 2N : \omega \in (X_i \vee X_j)} e^{q_{i,j}/b} .    (12)
Theorem 5 shows that computing prices and the value of the cost function in such a market is #P-hard, via a reduction from the #2-SAT problem. (This can also be proved via a reduction from counting linear extensions using a technique similar to the proof of Theorem 3, but the reduction to #2-SAT is more natural.)
Theorem 5. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to disjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting.
Proof. Suppose we are given a 2-CNF (Conjunctive Normal Form) formula
(X_{i_1} ∨ X_{j_1}) ∧ (X_{i_2} ∨ X_{j_2}) ∧ · · · ∧ (X_{i_k} ∨ X_{j_k})    (13)
with k clauses, where each clause is a disjunction of two literals (i.e. events and their negations). Assume any redundant terms have been removed.
The structure of the proof is similar to that of the pair betting case. We consider a Boolean betting market with N events, and show how to construct a sequence of trades that provides, through prices or the value of the cost function, the number of satisfying assignments for the 2-CNF formula.
We create k trading periods. At period t, a quantity b ln(2^N) of the security X_{i_t} ∨ X_{j_t} is purchased. We denote by p^t_{i,j} and q^t_{i,j} respectively the price and outstanding quantity of the security X_i ∨ X_j at the end of period t. Suppose the market starts with 0 shares of every security. Note that q^s_{i_t,j_t} = 0 if s < t and q^s_{i_t,j_t} = b ln(2^N) if s ≥ t. Let
N_t(i, j) = \sum_{\omega \in \Omega : \omega \in (X_i \vee X_j)} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q^t_{i',j'}/b}, \quad \text{and} \quad D_t = \sum_{\omega \in \Omega} \prod_{1 \leq i' < j' \leq 2N : \omega \in (X_{i'} \vee X_{j'})} e^{q^t_{i',j'}/b} .
Thus, p^t_{i_t,j_t} = N_t(i_t, j_t)/D_t.
Since only one security Xi 1 ∨ Xj 1 has been purchased in period 1, we get
D_1 = \sum_{\omega \in \Omega : \omega \in (X_{i_1} \vee X_{j_1})} 2^N + \sum_{\omega \in \Omega : \omega \notin (X_{i_1} \vee X_{j_1})} 1 = 3 \cdot 2^{2N-2} + 2^{N-2} .
We then show that D k can be calculated inductively from D1. As the only security purchased in period t is (Xi t ∨ Xj t ) in quantity b ln(2 N ), we obtain
N_t(i_t, j_t) = N_{t-1}(i_t, j_t) \, e^{b \ln(2^N)/b} = N_{t-1}(i_t, j_t) \, 2^N .
Therefore,
\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = \frac{2^N D_{t-1}}{D_t} ,
and we get
D_k = (2^N)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1 .
In addition, since the cost function at the end of period k can be expressed as
C(Q) = b log D k ,
D k can also be computed efficiently from the cost function in period k.
We now show that we can deduce from $D_k$ the number of satisfying assignments of the 2-CNF formula (Equation 13). Indeed, each term in the sum
$$\sum_{\omega\in\Omega}\;\prod_{1\le i'<j'\le 2N:\,\omega\in(X_{i'}\vee X_{j'})} e^{q^k_{i',j'}/b}$$
that corresponds to an outcome ω satisfying the formula is exactly $2^{kN}$, since exactly k terms in the product equal $2^N$ and the rest equal 1. In contrast, each term corresponding to an outcome ω that does not satisfy the 2-CNF formula is at most $2^{(k-1)N}$, since at most k − 1 terms in the product equal $2^N$ and the rest equal 1. Since the total number of outcomes is $2^N$, the sum of all terms corresponding to outcomes that do not satisfy (13) is at most $2^N \cdot 2^{(k-1)N} = 2^{kN}$, and is strictly less than $2^{kN}$ unless the number of satisfying assignments is 0. Thus the number of satisfying assignments is $\lfloor D_k/2^{kN} \rfloor$.
We know that computing the number of satisfying assignments of a 2-CNF formula is #P-hard. We have shown how to compute it in polynomial time using prices or the value of the cost function in a Boolean betting market of N events. Therefore, both computing prices and computing the value of the cost function in a Boolean betting market are #P-hard.
Corollary 6. Computing the payment of a transaction in a LMSR for Boolean betting is #P-hard when traders can only bet on disjunctions of two events.
The proof is nearly identical to the proof of Corollary 2.
If we impose that participants in a Boolean betting market may only trade securities corresponding to conjunctions of any two event outcomes, Ai ∧ Aj , the following Corollary gives the complexity results for this situation.
Corollary 7. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to conjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting, and #P-hard to compute the payment for a transaction.
Proof. Buying q shares of security Ai ∧ Aj is equivalent to selling q shares of ¬Ai ∨ ¬Aj . Thus if we can operate a Boolean betting market for securities of the type Ai ∧ Aj in polynomial time, we can also operate a Boolean betting market for securities of the type Ai ∨ Aj in polynomial time. The result then follows from Theorem 5 and Corollary 6.
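As a rough illustration of this equivalence, the short sketch below (our own code, not from the paper; the helper lmsr_price, the parameters N and b, and the example trading history are all assumptions) prices an arbitrary Boolean security by brute-force enumeration of the 2^N outcomes, in the spirit of Equation 4, and checks that the price of A_1 ∧ A_2 is exactly one minus the price of ¬A_1 ∨ ¬A_2.

```python
# Illustrative sketch (not from the paper): check that the LMSR price of
# "A_1 AND A_2" equals 1 minus the price of "(NOT A_1) OR (NOT A_2)", which is
# what the argument in Corollary 7 relies on. For simplicity we keep per-outcome
# share totals; this only works for a handful of events.
import itertools
import math

N = 3          # number of events A_1..A_N (assumed)
b = 1.0        # LMSR liquidity parameter (assumed)
outcomes = list(itertools.product([False, True], repeat=N))  # all 2^N joint outcomes

q_per_outcome = {w: 0.0 for w in outcomes}
q_per_outcome[(True, False, True)] = 2.0   # an arbitrary example trading history

def lmsr_price(event):
    """Price of a security paying $1 on outcomes where event(w) is true."""
    num = sum(math.exp(q_per_outcome[w] / b) for w in outcomes if event(w))
    den = sum(math.exp(q_per_outcome[w] / b) for w in outcomes)
    return num / den

p_and = lmsr_price(lambda w: w[0] and w[1])                # A_1 AND A_2
p_or_neg = lmsr_price(lambda w: (not w[0]) or (not w[1]))  # NOT A_1 OR NOT A_2
assert abs(p_and + p_or_neg - 1.0) < 1e-12                 # complementary securities
print(p_and, p_or_neg)
```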
AN APPROXIMATION ALGORITHM FOR SUBSET BETTING
There is an interesting relationship between logarithmic market scoring rule market makers and a common class of algorithms for online learning in an experts setting. In this section, we elaborate on this connection, and show how results from the online learning community can be used to prove new results about an approximation algorithm for subset betting.
The Experts Setting
We begin by describing the standard model of online learning with expert advice [19,12,27]. In this model, at each time t ∈ {1, · · · , T }, each expert i ∈ {1, · · · , n} receives a loss $\ell_{i,t} \in [0,1]$. The cumulative loss of expert i at time T is $L_{i,T} = \sum_{t=1}^{T} \ell_{i,t}$. No statistical assumptions are made about these losses, and in general, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.
An algorithm A maintains a current weight $w_{i,t}$ for each expert i, where $\sum_{i=1}^{n} w_{i,t} = 1$. These weights can be viewed as a distribution over the experts. The algorithm then receives its own instantaneous loss $\ell_{A,t} = \sum_{i=1}^{n} w_{i,t}\,\ell_{i,t}$, which may be interpreted as the expected loss of the algorithm when choosing an expert according to the current distribution. The cumulative loss of A up to time T is then defined in the natural way as $L_{A,T} = \sum_{t=1}^{T} \ell_{A,t} = \sum_{t=1}^{T}\sum_{i=1}^{n} w_{i,t}\,\ell_{i,t}$.
A common goal in such online learning settings is to minimize an algorithm's regret, defined as the difference between the cumulative loss of the algorithm and the cumulative loss of an algorithm that would have "chosen" the best expert in hindsight by setting its weight to 1 throughout all the periods. Formally, the regret is given by $L_{A,T} - \min_{i\in\{1,\dots,n\}} L_{i,T}$.
Many algorithms that have been analyzed in the online experts setting are based on exponential weight updates. These exponential updates allow the algorithm to quickly transfer weight to an expert that is outperforming the others. For example, in the Weighted Majority algorithm of Littlestone and Warmuth [19], the weight on each expert i is defined as
$$w_{i,t} = \frac{w_{i,t-1}\, e^{-\eta \ell_{i,t}}}{\sum_{j=1}^{n} w_{j,t-1}\, e^{-\eta \ell_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}}\,, \qquad (14)$$
where η is the learning rate, a small positive parameter that controls the magnitude of the updates. The following theorem gives a bound on the regret of Weighted Majority. For a proof of this result and a nice overview of learning with expert advice, see, for example, Cesa-Bianchi and Lugosi [3].
Theorem 8. Let A be the Weighted Majority algorithm with parameter η. After a sequence of T trials,
$$L_{A,T} - \min_{i\in\{1,\dots,n\}} L_{i,T} \le \eta T + \frac{\ln(n)}{\eta}\,.$$
Relationship to LMSR Markets
There is a manifest similarity between the expert weights used by Weighted Majority and the prices in the LMSR market. One might ask if the results from the experts setting can be applied to the analysis of prediction markets. Our answer is yes. In fact, it is possible to use Theorem 8 to rediscover the well-known bound of b ln(n) for the loss of an LMSR market maker with n outcomes.
Let ǫ be a limit on the number of shares that a trader may purchase or sell at each time step; in other words, if a trader would like to purchase or sell q shares, this purchase must be broken down into ⌈q/ǫ⌉ separate purchases of ǫ or less shares. Note that the total number of time steps T needed to execute such a sequence of purchases and sales is proportional to 1/ǫ.
We will construct a sequence of loss functions in a setting with n experts to induce a sequence of weight matrices that correspond to the price matrices of the LMSR market. At each time step t, let pi,t ∈ [0, 1] be the instantaneous price of security i at the end of period t, and let qi,t ∈ [−ǫ, ǫ] be the number of shares of security i purchased during period t. Let Qi,t be the total number of shares of security i that have been purchased up to time t. Now, let's define the instantaneous loss of each expert as ℓi,t = (2ǫ − qi,t)/(ηb). First notice that this loss is always in [0, 1] as long as η ≥ 2ǫ/b. From Equations 2 and 14, at each time t,
$$p_{i,t} = \frac{e^{Q_{i,t}/b}}{\sum_{j=1}^{n} e^{Q_{j,t}/b}} = \frac{e^{2\epsilon t/b - \eta L_{i,t}}}{\sum_{j=1}^{n} e^{2\epsilon t/b - \eta L_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}} = w_{i,t}\,.$$
Applying Theorem 8 and rearranging terms, we find that
$$\max_{i\in\{1,\dots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T}\sum_{i=1}^{n} p_{i,t}\, q_{i,t} \;\le\; \eta^2 T b + b\ln(n).$$
The first term on the left-hand side is the maximum payment that the market maker needs to make, while the second term captures the total money the market maker has received. The right-hand side is clearly minimized when η is set as small as possible. Setting η = 2ǫ/b gives
$$\max_{i\in\{1,\dots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T}\sum_{i=1}^{n} p_{i,t}\, q_{i,t} \;\le\; \frac{4\epsilon^2 T}{b} + b\ln(n).$$
Since T = O(1/ǫ), the term $4\epsilon^2 T/b$ goes to 0 as ǫ becomes very small. Thus in the limit as ǫ → 0, we get the well-known result that the worst-case loss of the market maker is bounded by b ln(n).
Considering Permutations
Recently Helmbold and Warmuth [16] have shown that many results from the standard experts setting can be extended to a setting in which, instead of competing with the best expert, the goal is to compete with the best permutation over n items. Here each permutation suffers a loss at each time step, and the goal of the algorithm is to maintain a weighting over permutations such that the cumulative regret to the best permutation is small. It is infeasible to treat each permutation as an expert and run a standard algorithm since this would require updating n! weights at each time step. Instead, they show that when the loss has a certain structure (in particular, when the loss of a permutation is the sum of the losses of each of the n mappings), an alternate algorithm can be used that requires tracking only n 2 weights in the form of an n × n doubly stochastic matrix.
Formally, let $W^t$ be a doubly stochastic matrix of weights maintained by the algorithm A at time t. Here $W^t_{i,j}$ is the weight corresponding to the probability associated with item i being mapped into position j. Let $L^t \in [0,1]^{n\times n}$ be the loss matrix at time t. The instantaneous loss of a permutation σ at time t is $\ell_{\sigma,t} = \sum_{i=1}^{n} L^t_{i,\sigma(i)}$. The instantaneous loss of A is $\ell_{A,t} = \sum_{i=1}^{n}\sum_{j=1}^{n} W^t_{i,j} L^t_{i,j}$, the matrix dot product between $W^t$ and $L^t$. Notice that $\ell_{A,t}$ is equivalent to the expectation of $\ell_{\sigma,t}$ over permutations σ drawn according to $W^t$. The goal of the algorithm is to minimize the cumulative regret to the best permutation, $L_{A,T} - \min_{\sigma\in\Omega} L_{\sigma,T}$, where the cumulative loss is defined as before.
Helmbold and Warmuth go on to present an algorithm called PermELearn that updates the weight matrix in two steps. First, it creates a temporary matrix $W'$, such that for every i and j, $W'_{i,j} = W^t_{i,j}\, e^{-\eta L^t_{i,j}}$. It then obtains $W^{t+1}$ by repeatedly rescaling the rows and columns of $W'$ until the matrix is doubly stochastic. Alternately rescaling rows and columns of a matrix M in this way is known as Sinkhorn balancing [22]. Normalizing the rows of a matrix is equivalent to pre-multiplying by a diagonal matrix, while normalizing the columns is equivalent to post-multiplying by a diagonal matrix. Sinkhorn [22] shows that this procedure converges to a unique doubly stochastic matrix of the form RMC, where R and C are diagonal matrices, if M is a positive matrix. Although there are cases in which Sinkhorn balancing does not converge in finite time, many results show that the number of Sinkhorn iterations needed to scale a matrix so that row and column sums are 1 ± ǫ is polynomial in 1/ǫ [1,17,18].
The following theorem [16] bounds the cumulative loss of PermELearn in terms of the cumulative loss of the best permutation.
Theorem 9. (Helmbold and Warmuth [16]) Let A be the PermELearn algorithm with parameter η. After a sequence of T trials,
$$L_{A,T} \le \frac{n \ln(n) + \eta \min_{\sigma\in\Omega} L_{\sigma,T}}{1 - e^{-\eta}}\,.$$
Approximating Subset Betting
Using the PermELearn algorithm, it is simple to approximate prices for subset betting in polynomial time. We start with a n × n price matrix P 1 in which all entries are 1/n. As before, traders may purchase securities of the form i|Φ that pay off $1 if and only if horse or candidate i finishes in a position j ∈ Φ, or securities of the form Ψ|j that pay off $1 if and only if a horse or candidate i ∈ Ψ finishes in position j.
As in Section 6.2, each time a trader purchases or sells q shares, the purchase or sale is broken up into ⌈q/ǫ⌉ purchases or sales of ǫ shares or less, where ǫ > 0 is a small constant. 2 Thus we can treat the sequence of purchases as a sequence of T purchases of ǫ or less shares, where T = O(1/ǫ). Let q t i,j be the number of shares of securities i|Φ with j ∈ Φ or Ψ|j with i ∈ Ψ purchased at time t; then q t i,j ∈ [−ǫ, ǫ] for all i and j.
The price matrix is updated in two steps. First, a temporary matrix P ′ is created where for every i and j, $P'_{i,j} = P^t_{i,j}\, e^{q^t_{i,j}/b}$, where b > 0 is a parameter playing a similar role to b in Equation 2. Next, P ′ is Sinkhorn balanced to the desired precision, yielding an (approximately) doubly stochastic matrix $P^{t+1}$.
2 We remark that dividing purchases in this way has the negative effect of creating a polynomial time dependence on the quantity of shares purchased. However, this is not a problem if the quantity of shares bought or sold in each trade is bounded to start, which is a reasonable assumption. The additional time required is then linear only in 1/ǫ.
The following lemma shows that updating the price matrix in this way results in a price matrix that is equivalent to the weight matrix of PermELearn with particular loss functions.
Lemma 10. The sequence of price matrices obtained by the approximation algorithm for subset betting on a sequence of purchases q t ∈ [−ǫ, ǫ] n×n is equivalent to the sequence of weight matrices obtained by running PermELearn(η) on a sequence of losses L t with
$$L^t_{i,j} = \frac{2\epsilon - q^t_{i,j}}{\eta b}$$
for all i and j, for any η ≥ 2ǫ/b.
Proof. First note that for any η ≥ 2ǫ/b, L t i,j ∈ [0, 1] for all t, i, and j, so the loss matrix is valid for PermELearn. P 1 and W 1 both contain all entries of 1/n. Assume that P t = W t . When updating weights for time t + 1, for all i and j,
$$P'_{i,j} = P^t_{i,j}\, e^{q^t_{i,j}/b} = W^t_{i,j}\, e^{q^t_{i,j}/b} = e^{2\epsilon/b}\, W^t_{i,j}\, e^{-2\epsilon/b + q^t_{i,j}/b} = e^{2\epsilon/b}\, W^t_{i,j}\, e^{-\eta L^t_{i,j}} = e^{2\epsilon/b}\, W'_{i,j}\,.$$
Since the matrix W ′ is a constant multiple of P ′ , the Sinkhorn balancing step will produce the same matrices.
Using this lemma, we can show that the difference between the amount of money that the market maker must distribute to traders in the worst case (i.e. when the true outcome is the outcome that pays off the most) and the amount of money collected by the market is bounded. We will see in the corollary below that as ǫ approaches 0, the worst case loss of the market maker approaches bn ln(n), regardless of the number of shares purchased. Unfortunately, if ǫ > 0, this bound can grow arbitrarily large.
Theorem 11. For any sequence of valid subset betting purchases q t where q t i,j ∈ [−ǫ, ǫ] for all t, i, and j, let P 1 , · · · , P T be the price matrices obtained by running the subset betting approximation algorithm. Then
$$\max_{\sigma\in S_n} \sum_{t=1}^{T}\sum_{i=1}^{n} q^t_{i,\sigma(i)} - \sum_{t=1}^{T}\sum_{i=1}^{n}\sum_{j=1}^{n} P^t_{i,j}\, q^t_{i,j} \;\le\; \frac{2\epsilon/b}{1 - e^{-2\epsilon/b}}\, b\, n\ln(n) + \left(\frac{2\epsilon/b}{1 - e^{-2\epsilon/b}} - 1\right) 2\epsilon n T.$$
| 7,945 |
0802.1362
|
2952078074
|
We analyze the computational complexity of market maker pricing algorithms for combinatorial prediction markets. We focus on Hanson's popular logarithmic market scoring rule market maker (LMSR). Our goal is to implicitly maintain correct LMSR prices across an exponentially large outcome space. We examine both permutation combinatorics, where outcomes are permutations of objects, and Boolean combinatorics, where outcomes are combinations of binary events. We look at three restrictive languages that limit what traders can bet on. Even with severely limited languages, we find that LMSR pricing is @math -hard, even when the same language admits polynomial-time matching without the market maker. We then propose an approximation technique for pricing permutation markets based on a recent algorithm for online permutation learning. The connections we draw between LMSR pricing and the vast literature on online learning with expert advice may be of independent interest.
|
The work closest to our own is that of Chen, Goel, and Pennock @cite_12 , who study a special case of Boolean combinatorics in which participants bet on how far a team will advance in a single elimination tournament, for example a sports playoff like the NCAA college basketball tournament. They provide a polynomial-time algorithm for LMSR pricing in this setting based on a Bayesian network representation of prices. They also show that LMSR pricing is NP-hard for a very general bidding language. They suggest an approximation scheme based on Monte Carlo simulation or importance sampling.
|
{
"abstract": [
"In a prediction market, agents trade assets whose value is tied to a future event, for example the outcome of the next presidential election. Asset prices determine a probability distribution over the set of possible outcomes. Typically, the outcome space is small, allowing agents to directly trade in each outcome, and allowing a market maker to explicitly update asset prices. Combinatorial markets, in contrast, work to estimate a full joint distribution of dependent observations, in which case the outcome space grows exponentially. In this paper, we consider the problem of pricing combinatorial markets for single-elimination tournaments. With @math competing teams, the outcome space is of size 2n-1. We show that the general pricing problem for tournaments is P-hard. We derive a polynomial-time algorithm for a restricted betting language based on a Bayesian network representation of the probability distribution. The language is fairly natural in the context of tournaments, allowing for example bets of the form \"team i wins game k\". We believe that our betting language is the first for combinatorial market makers that is both useful and tractable. We briefly discuss a heuristic approximation technique for the general case."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2074925781"
]
}
|
Complexity of Combinatorial Market Makers
|
One way to elicit information is to ask people to bet on it. A prediction market is a common forum where people bet with each other or with a market maker [9,10,23,20,21]. A typical binary prediction market allows bets along one dimension, for example either for or against Hillary Clinton to win the 2008 US Presidential election. Thousands of such one-or small-dimensional markets exist today, each operating independently. For example, at the racetrack, betting on a horse to win does not directly impact the odds for that horse to finish among the top two, as logically it should, because the two bet types are handled separately.
A combinatorial prediction market is a central clearinghouse for handling logically-related bets defined on a combinatorial space. For example, the outcome space might be all n! possible permutations of n horses in a horse race, while bets are properties of permutations such as "horse A finishes 3rd" or "horse A beats horse B." Alternately, the outcome space might be all $2^{50}$ possible state-by-state results for the Democratic candidate in the 2008 US Presidential election, while bets are Boolean statements such as "Democrat wins in Ohio and Florida but not in Texas."
Low liquidity marginalizes the value of prediction markets, and combinatorics only exacerbates the problem by dividing traders' attention among an exponential number of outcomes. A combinatorial matching market-the combinatorial generalization of a standard double auction-may simply fail to find any trades [11,4,5].
In contrast, an automated market maker is always willing to trade on every bet at some price. A combinatorial market maker implicitly or explicitly maintains prices across all (exponentially many) outcomes, thus allowing any trader at any time to place any bet, if transacted at the market maker's quoted price.
Hanson's [13,14] logarithmic market scoring rule market maker (LMSR) is becoming the de facto standard market maker for prediction markets. LMSR has a number of desirable properties, including bounded loss that grows logarithmically in the number of outcomes, infinite liquidity, and modularity that respects some independence relationships. LMSR is used by a number of companies, including inklingmarkets.com, Microsoft, thewsx.com, and yoonew.com, and is the subject of a number of research studies [7,15,8].
In this paper, we analyze the computational complexity of LMSR in several combinatorial betting scenarios. We examine both permutation combinatorics and Boolean combinatorics. We show that both computing instantaneous prices and computing payments of transactions are #P-hard in all cases we examine, even when we restrict participants to very simplistic and limited types of bets. For example, in the horse race analogy, if participants can place bets only of the form "horse A finishes in position N", then pricing these bets properly according to LMSR is #P-hard, even though matching up bets of the exact same form (with no market maker) is polynomial [4].
On a more positive note, we examine an approximation algorithm for LMSR pricing in permutation markets that makes use of powerful techniques from the literature on online learning with expert advice [3,19,12]. We briefly review this online learning setting, and examine the parallels that exist between LMSR pricing and standard algorithms for learning with expert advice. We then show how a recent algorithm for permutation learning [16] can be transformed into an approximation algorithm for pricing in permutation markets in which the market maker is guaranteed to have bounded loss.
RELATED WORK
Fortnow et al. [11] study the computational complexity of finding acceptable trades among a set of bids in a Boolean combinatorial market. In their setting, the center is an auctioneer who takes no risk, only matching together willing traders. They study a call market setting in which bids are collected together and processed once en masse. They show that the auctioneer matching problem is co-NP-complete when orders are divisible and $\Sigma_2^p$-complete when orders are indivisible, but identify a tractable special case in which participants are restricted to bet on disjunctions of positive events or single negative events.
Chen et al. [4] analyze the auctioneer matching problem for betting on permutations, examining two bidding languages. Subset bets are bets of the form "candidate i finishes in positions x, y, or z" or "candidate i, j, or k finishes in position x." Pair bets are of the form "candidate i beats candidate j." They give a polynomial-time algorithm for matching divisible subset bets, but show that matching pair bets is NP-hard.
Hanson highlights the use of LMSR for Boolean combinatorial markets, noting that the subsidy required to run a combinatorial market on 2 n outcomes is no greater than that required to run n independent one-dimensional markets [13,14]. Hanson discusses the computational difficulty of maintaining LMSR prices on a combinatorial space, and proposes some solutions, including running market makers on overlapping subsets of events, allowing traders to synchronize the markets via arbitrage.
The work closest to our own is that of Chen, Goel, and Pennock [6], who study a special case of Boolean combinatorics in which participants bet on how far a team will advance in a single elimination tournament, for example a sports playoff like the NCAA college basketball tournament. They provide a polynomial-time algorithm for LMSR pricing in this setting based on a Bayesian network representation of prices. They also show that LMSR pricing is NP-hard for a very general bidding language. They suggest an approximation scheme based on Monte Carlo simulation or importance sampling.
We believe ours are the first non-trivial hardness results and worst-case bounded approximation scheme for LMSR pricing.
Logarithmic Market Scoring Rules
Proposed by Hanson [13,14], a logarithmic market scoring rule is an automated market maker mechanism that always maintains a consistent probability distribution over an outcome space Ω reflecting the market's estimate of the likelihood of each outcome. A generic LMSR offers a security corresponding to each possible outcome ω. The security associated to outcome ω pays off $1 if the outcome ω happens, and $0 otherwise. Let q = (qω)ω∈Ω indicate the number of outstanding shares for all securities. The LMSR market maker starts the market with some initial shares of securities, q 0 , which may be 0. The market keeps track of the outstanding shares of securities q at all times, and maintains a cost function
$$C(q) = b \log \sum_{\omega\in\Omega} e^{q_\omega/b}\,, \qquad (1)$$
and an instantaneous price function for each security
$$p_\omega(q) = \frac{e^{q_\omega/b}}{\sum_{\tau\in\Omega} e^{q_\tau/b}}\,, \qquad (2)$$
where b is a positive parameter related to the depth of the market. The cost function captures the total money wagered in the market, and $C(q^0)$ reflects the market maker's maximum subsidy to the market. The instantaneous price function $p_\omega(q)$ gives the current cost of buying an infinitely small quantity of the security for outcome ω, and is the partial derivative of the cost function, i.e. $p_\omega(q) = \partial C(q)/\partial q_\omega$. We use $p = (p_\omega(q))_{\omega\in\Omega}$ to denote the price vector. Traders buy and sell securities through the market maker. If a trader wishes to change the number of outstanding shares from q to $\tilde{q}$, the cost of the transaction that the trader pays is $C(\tilde{q}) - C(q)$, which equals the integral of the price functions along any path from q to $\tilde{q}$.
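To make these definitions concrete, here is a minimal LMSR sketch (our own code, not the authors' implementation; the function names cost, prices, and trade_cost and the example values of b and q are assumptions) implementing Equations 1 and 2 and the C(q̃) − C(q) payment rule by direct enumeration of the outcomes.

```python
# A minimal LMSR market-maker sketch following Equations 1 and 2 and the
# C(q_new) - C(q_old) payment rule. Only suitable for small, explicit outcome spaces.
import math

def cost(q, b):
    """C(q) = b * log(sum_w exp(q_w / b))  -- Equation 1."""
    return b * math.log(sum(math.exp(qw / b) for qw in q))

def prices(q, b):
    """p_w(q) = exp(q_w / b) / sum_t exp(q_t / b)  -- Equation 2."""
    z = sum(math.exp(qw / b) for qw in q)
    return [math.exp(qw / b) / z for qw in q]

def trade_cost(q, delta, b):
    """Amount a trader pays to move the outstanding shares from q to q + delta."""
    q_new = [qw + d for qw, d in zip(q, delta)]
    return cost(q_new, b) - cost(q, b)

b = 10.0
q = [0.0, 0.0, 0.0]                        # three mutually exclusive outcomes, no trades yet
print(prices(q, b))                        # uniform prices 1/3 each
print(trade_cost(q, [5.0, 0.0, 0.0], b))   # cost of buying 5 shares of outcome 0
```

Starting from all-zero shares, cost(q, b) equals b·log n, which is also the market maker's worst-case loss mentioned below.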
When the outcome space is large, it is often natural to offer only compound securities on sets of outcomes. A compound security S pays $1 if one of the outcomes in the set S ⊂ Ω occurs and $0 otherwise. Such a security is the combination of all securities ω ∈ S. Buying or selling q shares of the compound security S is equivalent to buying or selling q shares of each security ω ∈ S. Let Θ denote the set of all allowable compound securities. Denote the outstanding shares of all compound securities as Q = (qS)S∈Θ. The cost function can be written as
$$C(Q) = b \log \sum_{\omega\in\Omega} e^{\sum_{S\in\Theta:\,\omega\in S} q_S/b} = b \log \sum_{\omega\in\Omega}\;\prod_{S\in\Theta:\,\omega\in S} e^{q_S/b}\,. \qquad (3)$$
The instantaneous price of a compound security S is computed as the sum of the instantaneous prices of the securities that compose the compound security S,
$$p_S(Q) = \frac{\sum_{\omega\in S} e^{q_\omega/b}}{\sum_{\tau\in\Omega} e^{q_\tau/b}} = \frac{\sum_{\omega\in S} e^{\sum_{S'\in\Theta:\,\omega\in S'} q_{S'}/b}}{\sum_{\tau\in\Omega} e^{\sum_{S'\in\Theta:\,\tau\in S'} q_{S'}/b}} = \frac{\sum_{\omega\in S}\;\prod_{S'\in\Theta:\,\omega\in S'} e^{q_{S'}/b}}{\sum_{\tau\in\Omega}\;\prod_{S'\in\Theta:\,\tau\in S'} e^{q_{S'}/b}}\,. \qquad (4)$$
Logarithmic market scoring rules are so named because they are based on logarithmic scoring rules. A logarithmic scoring rule is a set of reward functions
{sω(r) = aω + b log(rω) : ω ∈ Ω},
where $r = (r_\omega)_{\omega\in\Omega}$ is a probability distribution over Ω, and $a_\omega$ is a free parameter. An agent who reports r is rewarded $s_\omega(r)$ if outcome ω happens. Logarithmic scoring rules are proper in the sense that when facing them a risk-neutral agent will truthfully report his subjective probability distribution to maximize his expected reward. A LMSR market can be viewed as a sequential version of a logarithmic scoring rule, because by changing market prices from p to $\tilde{p}$ a trader's net profit is $s_\omega(\tilde{p}) - s_\omega(p)$ when outcome ω happens. At any time, a trader in a LMSR market is essentially facing a logarithmic scoring rule.
LMSR markets have many desirable properties. They offer consistent pricing for combinatorial events. As market maker mechanisms, they provide infinite liquidity by allowing trades at any time. Although the market maker subsidizes the market, he is guaranteed a worst-case loss no greater than C(q 0 ), which is b log n if |Ω| = n and the market starts with 0 share of every security. In addition, it is a dominant strategy for a myopic risk-neutral trader to reveal his probability distribution truthfully since he faces a proper scoring rule. Even for forward-looking traders, truthful reporting is an equilibrium strategy when traders' private information is independent conditional on the true outcome [7].
Complexity of Counting
The well-known class NP contains questions that ask whether a search problem has a solution, such as whether a graph is 3-colorable. The class #P consists of functions that count the number of solutions of NP search questions, such as the number of 3-colorings of a graph.
A function g is #P-hard if, for every function f in #P, it is possible to compute f in polynomial time given an oracle for g. Clearly oracle access to such a function g could additionally be used to solve any NP problem, but in fact one can solve much harder problems too. Toda [24] showed that every language in the polynomial-time hierarchy can be solved efficiently with access to a #P-hard function.
To show a function g is a #P-hard function, it is sufficient to show that a function f reduces to g where f was previously known to be #P-hard. In this paper we use the following #P-hard functions to reduce from:
• Permanent: The permanent of an n-by-n matrix A = (ai,j) is defined as
$$\mathrm{perm}(A) = \sum_{\sigma\in\Omega}\;\prod_{i=1}^{n} a_{i,\sigma(i)}\,, \qquad (5)$$
where Ω is the set of all permutations over {1, 2, ..., n}.
Computing the permanent of a matrix A containing 0-1 entries is #P-hard [25].
• #2-SAT: Counting the number of satisfying assignments of a formula given in conjunctive normal form with each clause having two literals is #P-hard [26].
• Counting Linear Extensions: Counting the number of total orders that extend a partial order given by a directed graph is #P-hard [2].
#P-hardness is the best we can achieve since all the functions in this paper can themselves be reduced to some other #P function.
LMSR FOR PERMUTATION BETTING
In this section we consider a particular type of market combinatorics in which the final outcome is a ranking over n competing candidates. Let the set of candidates be Nn = {1, . . . , n}, which is also used to represent the set of positions. In the setting, Ω is the set of all permutations over Nn. An outcome σ ∈ Ω is interpreted as the scenario in which each candidate i ends up in position σ(i). Chen et al. [4] propose two betting languages, subset betting and pair betting, for this type of combinatorics and analyze the complexity of the auctioneer's order matching problem for each.
In what follows we address the complexity of operating an LMSR market for both betting languages.
Subset Betting
As in Chen et al. [4], participants in a LMSR market for subset betting may trade two types of compound securities:
(1) a security of the form i|Φ where Φ ⊂ Nn is a subset of positions; and (2) a security Ψ|j where Ψ ⊂ Nn is a subset of candidates. The security i|Φ pays off $1 if candidate i stands at a position that is an element of Φ and $0 otherwise. Similarly, the security Ψ|j pays off $1 if any of the candidates in Ψ finishes at position j and $0 otherwise. For example, in a horse race, participants can trade securities of the form "horse A will come in the second, fourth, or fifth place", or "either horse B or horse C will come in the third place".
Note that owning one share of i|Φ is equivalent to owning one share of i|j for every j ∈ Φ, and similarly owning one share of Ψ|j is equivalent to owning one share of i|j for every i ∈ Ψ. We restrict our attention to a simplified market where securities traded are of the form i|j . We show that even in this simplified market it is #P-hard for the market maker to provide the instantaneous security prices, evaluate the cost function, or calculate payments for transactions, which implies that running an LMSR market for the more general case of subset betting is also #P-hard.
Traders can trade securities i|j for all i ∈ Nn and j ∈ Nn with the market maker. Let qi,j be the total number of outstanding shares for security i|j in the market. Let Q = (qi,j)i∈N n ,j∈Nn denote the outstanding shares for all securities. The market maker keeps track of Q at all times. From Equation 4, the instantaneous price of security i|j is
$$p_{i,j}(Q) = \frac{\sum_{\sigma\in\Omega:\,\sigma(i)=j}\;\prod_{k=1}^{n} e^{q_{k,\sigma(k)}/b}}{\sum_{\tau\in\Omega}\;\prod_{k=1}^{n} e^{q_{k,\tau(k)}/b}}\,, \qquad (6)$$
and from Equation 3, the cost function for subset betting is
$$C(Q) = b \log \sum_{\sigma\in\Omega}\;\prod_{k=1}^{n} e^{q_{k,\sigma(k)}/b}\,. \qquad (7)$$
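For small n, the definitions in Equations 6 and 7 can be evaluated directly by enumerating all n! permutations. The sketch below (our own code; the function name, the parameter values, and the example quantities are assumptions) does exactly that; it is of course exponential, which is the point of the hardness result that follows.

```python
# Naive reference implementation of Equations 6 and 7: enumerate all n! permutations.
# Only intended to make the definitions concrete for tiny n.
import itertools
import math

def subset_betting_prices(q, b):
    """q[i][j] = outstanding shares of security <i|j>; returns (price matrix, cost)."""
    n = len(q)
    weights = {}   # weight of each permutation sigma: prod_k exp(q[k][sigma(k)] / b)
    for sigma in itertools.permutations(range(n)):
        weights[sigma] = math.exp(sum(q[k][sigma[k]] for k in range(n)) / b)
    total = sum(weights.values())
    p = [[sum(w for s, w in weights.items() if s[i] == j) / total
          for j in range(n)] for i in range(n)]
    return p, b * math.log(total)

b = 1.0
q = [[0.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],   # someone bought shares of "candidate 1 finishes in position 1"
     [0.0, 0.0, 0.0]]
p, c = subset_betting_prices(q, b)
print(p[1][1], c)       # the price of <1|1> rises above 1/3
```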
We will show that computing instantaneous prices, the cost function, and/or payments of transactions for a subset betting market is #P-hard by a reduction from the problem of computing the permanent of a (0,1)-matrix.
Theorem 1. It is #P-hard to compute instantaneous prices in a LMSR market for subset betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. We show that if we could compute the instantaneous prices or the value of the cost function for subset betting for any quantities of shares purchased, then we could compute the permanent of any (0, 1)-matrix in polynomial time.
Let n be the number of candidates, A = $(a_{i,j})$ be any n-by-n (0,1)-matrix, and N = n! + 1. Note that $\prod_{i=1}^{n} a_{i,\sigma(i)}$ is either 0 or 1. From Equation 5, perm(A) ≤ n! and hence perm(A) mod N = perm(A). We show how to compute perm(A) mod N from prices in subset betting markets in which $q_{i,j}$ shares of i|j have been purchased, where $q_{i,j}$ is defined by
$$q_{i,j} = \begin{cases} b\ln N & \text{if } a_{i,j} = 0,\\ b\ln(N+1) & \text{if } a_{i,j} = 1,\end{cases} \qquad (8)$$
for any i ∈ Nn and any j ∈ Nn.
Let B = $(b_{i,j})$ be an n-by-n matrix containing entries of the form $b_{i,j} = e^{q_{i,j}/b}$. Note that $b_{i,j} = N$ if $a_{i,j} = 0$ and $b_{i,j} = N + 1$ if $a_{i,j} = 1$. Thus, perm(A) mod N = perm(B) mod N. Thus, from Equation 6, the price for i|j in the market is
$$p_{i,j}(Q) = \frac{\sum_{\sigma\in\Omega:\,\sigma(i)=j}\;\prod_{k=1}^{n} b_{k,\sigma(k)}}{\sum_{\tau\in\Omega}\;\prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j} \sum_{\sigma\in\Omega:\,\sigma(i)=j}\;\prod_{k\ne i} b_{k,\sigma(k)}}{\sum_{\tau\in\Omega}\;\prod_{k=1}^{n} b_{k,\tau(k)}} = \frac{b_{i,j}\cdot \mathrm{perm}(M_{i,j})}{\mathrm{perm}(B)}\,,$$
where Mi,j is the matrix obtained from B by removing the ith row and jth column. Thus the ability to efficiently compute prices gives us the ability to efficiently compute perm(Mi,j)/perm(B).
It remains to show that we can use this ability to compute perm(B). We do so by telescoping a sequence of prices. Let $B_i$ be the matrix B with the first i rows and columns removed. From above, we have $\mathrm{perm}(B_1)/\mathrm{perm}(B) = p_{1,1}(Q)/b_{1,1}$. Define $Q_m$ to be the (n−m)-by-(n−m) matrix $(q_{i,j})_{i>m,\,j>m}$, that is, the matrix of quantities of securities $(q_{i,j})$ with the first m rows and columns removed. In a market with only n−m candidates, applying the same technique to the matrix $Q_m$, we can obtain $\mathrm{perm}(B_{m+1})/\mathrm{perm}(B_m)$ from market prices for m = 1, ..., (n−2). Thus by computing n − 1 prices, we can compute
$$\left(\frac{\mathrm{perm}(B_1)}{\mathrm{perm}(B)}\right)\left(\frac{\mathrm{perm}(B_2)}{\mathrm{perm}(B_1)}\right)\cdots\left(\frac{\mathrm{perm}(B_{n-1})}{\mathrm{perm}(B_{n-2})}\right) = \frac{\mathrm{perm}(B_{n-1})}{\mathrm{perm}(B)}\,.$$
Noting that Bn−1 only has one element, we thus can compute perm(B) from market prices. Consequently, perm(B) mod N gives perm(A).
Therefore, given a n-by-n (0, 1)-matrix A, we can compute the permanent of A in polynomial time using prices in n − 1 subset betting markets wherein an appropriate quantity of securities have been purchased.
Additionally, note that
$$C(Q) = b \log \sum_{\sigma\in\Omega}\;\prod_{k=1}^{n} b_{k,\sigma(k)} = b \log\, \mathrm{perm}(B)\,.$$
Thus if we can compute C(Q), we can also compute perm(A).
As computing the permanent of a (0, 1)-matrix is #P-hard, both computing market prices and computing the cost function in a subset betting market are #P-hard.
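The following small script walks through the first step of this reduction for a 3-by-3 matrix (our own sketch; the matrix A, the helper perm, and all other names are assumptions): it sets the quantities as in Equation 8, evaluates the price of the security ⟨1|1⟩ by brute force so that everything stays runnable, and confirms that the price equals b_{1,1}·perm(M_{1,1})/perm(B) and that perm(B) mod N recovers perm(A).

```python
# Demonstration of the reduction in the proof of Theorem 1 on a tiny instance.
import itertools
import math

def perm(M):  # brute-force permanent, used only as a cross-check
    n = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
n = len(A)
N = math.factorial(n) + 1
b = 1.0
q = [[b * math.log(N + A[i][j]) for j in range(n)] for i in range(n)]  # Equation 8
B = [[N + A[i][j] for j in range(n)] for i in range(n)]                # b_{i,j} = e^{q_{i,j}/b}

# Price of the security <candidate 0 | position 0> under these quantities (Equation 6).
num = sum(math.prod(B[k][s[k]] for k in range(n))
          for s in itertools.permutations(range(n)) if s[0] == 0)
p00 = num / perm(B)
B1 = [row[1:] for row in B[1:]]                                        # B with first row/column removed
assert abs(p00 - B[0][0] * perm(B1) / perm(B)) < 1e-12                 # p = b_{1,1} perm(M_{1,1}) / perm(B)
print(perm(B) % N == perm(A))  # True: perm(B) mod N recovers perm(A), as in the proof
```

Telescoping the analogous prices over successively smaller markets, as in the proof, then yields perm(B) itself.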
Corollary 2. Computing the payment of a transaction in a LMSR for subset betting is #P-hard.
Proof. Suppose the market maker starts the market with 0 shares of every security. Denote by $Q^0$ the initial quantities of all securities. If the market maker can compute $C(\tilde{Q}) - C(Q)$ for any quantities $\tilde{Q}$ and Q, it can compute $C(Q) - C(Q^0)$ for any Q. As $C(Q^0) = b \log n!$, the market maker is able to compute C(Q). According to Theorem 1, computing the payment of a transaction is #P-hard.
Pair Betting
In contrast to subset betting, where traders bet on absolute positions for a candidate, pair betting allows traders to bet on the relative position of a candidate with respect to another. More specifically, traders buy and sell securities of the form i > j , where i and j are candidates. The security pays off $1 if candidate i ranks higher than candidate j (i.e., σ(i) < σ(j) where σ is the final ranking of candidates) and $0 otherwise. For example, traders may bet on events of the form "horse A beats horse B", or "candidate C receives more votes than candidate D".
As for subset betting, the current state of the market is determined by the total number of outstanding shares for all securities. Let $q_{i,j}$ denote the number of outstanding shares for i > j . Applying Equations 3 and 4 to the present context, we find that the instantaneous price of the security i > j is given by
$$p_{i,j}(Q) = \frac{\sum_{\sigma\in\Omega:\,\sigma(i)<\sigma(j)}\;\prod_{i',j':\,\sigma(i')<\sigma(j')} e^{q_{i',j'}/b}}{\sum_{\tau\in\Omega}\;\prod_{i',j':\,\tau(i')<\tau(j')} e^{q_{i',j'}/b}}\,, \qquad (9)$$
and the cost function for pair betting is
$$C(Q) = b \log \sum_{\sigma\in\Omega}\;\prod_{i,j:\,\sigma(i)<\sigma(j)} e^{q_{i,j}/b}\,. \qquad (10)$$
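As with subset betting, Equations 9 and 10 can be evaluated by brute force for tiny n. The sketch below (our own code; the function name and the example quantities are assumptions) enumerates all rankings and shows that buying shares of ⟨0 > 1⟩ also moves the price of ⟨0 > 2⟩, as logically it should.

```python
# Naive reference implementation of Equations 9 and 10: enumerate all n! rankings.
import itertools
import math

def pair_betting(q, b):
    """q[(i, j)] = outstanding shares of <i > j>; returns (price dict, cost)."""
    n = max(max(i, j) for i, j in q) + 1
    weights = {}
    for sigma in itertools.permutations(range(n)):  # sigma[i] = final position of candidate i
        expo = sum(qij for (i, j), qij in q.items() if sigma[i] < sigma[j])
        weights[sigma] = math.exp(expo / b)
    total = sum(weights.values())
    prices = {(i, j): sum(w for s, w in weights.items() if s[i] < s[j]) / total
              for (i, j) in q}
    return prices, b * math.log(total)

b = 1.0
q = {(0, 1): 3.0, (1, 2): 0.0, (0, 2): 0.0}    # shares bought on "candidate 0 beats candidate 1"
prices, cost = pair_betting(q, b)
print(prices[(0, 1)], prices[(0, 2)], cost)    # "0 beats 2" also rises above 1/2
```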
We will show that computing prices, the value of the cost function, and/or payments of transactions for pair betting is #P-hard via a reduction from the problem of computing the number of linear extensions to any partial ordering.
Theorem 3. It is #P-hard to compute instantaneous prices in a LMSR market for pair betting. Additionally, it is #P-hard to compute the value of the cost function.
Proof. Let P be a partial order over {1, . . . , n}. We recall that a linear (or total) order T is a linear extension of P if whenever x ≤ y in P it also holds that x ≤ y in T . We denote by N (P ) the number of linear extensions of P .
Recall that (i, j) is a covering pair of P if i ≤ j in P and there does not exist ℓ ≠ i, j such that i ≤ ℓ ≤ j. Let $\{(i_1, j_1), (i_2, j_2), \dots, (i_k, j_k)\}$ be a set of covering pairs of P. Note that the covering pairs of a partially ordered set with n elements can be easily obtained in polynomial time, and that their number is less than $n^2$.
We will show that we can design a sequence of trades that, given a list of covering pairs for P , provide N (P ) through a simple function of market prices.
We consider a pair betting market over n candidates. We construct a sequence of k trading periods, and denote by $q^t_{i,j}$ and $p^t_{i,j}$ respectively the outstanding quantity of security i > j and its instantaneous price at the end of period t. At the beginning of the market, $q^0_{i,j} = 0$ for any i and j. At each period t, 0 < t ≤ k, $b \ln n!$ shares of security $i_t > j_t$ are purchased.
Let
$$N_t(i,j) = \sum_{\sigma\in\Omega:\,\sigma(i)<\sigma(j)}\;\prod_{i',j':\,\sigma(i')<\sigma(j')} e^{q^t_{i',j'}/b}, \qquad D_t = \sum_{\sigma\in\Omega}\;\prod_{i',j':\,\sigma(i')<\sigma(j')} e^{q^t_{i',j'}/b}.$$
Note that according to Equation 9, $p^t_{i_t,j_t} = N_t(i_t,j_t)/D_t$.
For the first period, as only the security $i_1 > j_1$ is purchased, we get
$$D_1 = \sum_{\sigma\in\Omega:\,\sigma(i_1)<\sigma(j_1)} n! + \sum_{\sigma:\,\sigma(i_1)>\sigma(j_1)} 1 = \frac{(n!)^2 + n!}{2}\,.$$
We now show that $D_k$ can be calculated inductively from $D_1$ using successive prices given by the market. During period t, $b \ln n!$ shares of $i_t > j_t$ are purchased. Note also that the securities purchased are different at each period, so that $q^s_{i_t,j_t} = 0$ if s < t and $q^s_{i_t,j_t} = b \ln n!$ if s ≥ t. We have
$$N_t(i_t,j_t) = N_{t-1}(i_t,j_t)\, e^{b \ln(n!)/b} = n!\, N_{t-1}(i_t,j_t)\,.$$
Hence,
$$\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = \frac{n!\, D_{t-1}}{D_t}\,,$$
and therefore,
$$D_k = (n!)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1\,.$$
So D k can be computed in polynomial time in n from the prices.
Alternately, since the cost function at the end of period k can be written as C(Q) = b log D k , D k can also be computed efficiently from the cost function in period k.
We finally show that given $D_k$, we can compute N(P) in polynomial time. Note that at the end of the k trading periods, the securities purchased correspond to the covering pairs of P, such that $e^{q^k_{i,j}/b} = n!$ if (i, j) is a covering pair of P and $e^{q^k_{i,j}/b} = 1$ otherwise. Consequently, for a permutation σ that satisfies the partial order P, meaning that σ(i) ≤ σ(j) whenever i ≤ j in P, we have
$$\prod_{i',j':\,\sigma(i')<\sigma(j')} e^{q^k_{i',j'}/b} = (n!)^k\,.$$
On the other hand, if a permutation σ does not satisfy P, it does not satisfy at least one covering pair, meaning that there is a covering pair (i, j) of P such that σ(i) > σ(j), so that
$$\prod_{i',j':\,\sigma(i')<\sigma(j')} e^{q^k_{i',j'}/b} \le (n!)^{k-1}\,.$$
Since the total number of permutations is n!, the total sum of all terms in the sum $D_k$ corresponding to permutations that do not satisfy the partial ordering P is less than or equal to $n!\,(n!)^{k-1} = (n!)^k$, and is strictly less than $(n!)^k$ unless the number of linear extensions is 0, while the total sum of all the terms corresponding to permutations that do satisfy P is $N(P)\,(n!)^k$. Thus $N(P) = \lfloor D_k/(n!)^k \rfloor$.
We know that computing the number of linear extensions of a partial ordering is #P-hard. Therefore, both computing the prices and computing the value of the cost function in pair betting are #P-hard.
Corollary 4. Computing the payment of a transaction in a LMSR for pair betting is #P-hard.
The proof is nearly identical to the proof of Corollary 2.
LMSR FOR BOOLEAN BETTING
We now examine an alternate type of market combinatorics in which the final outcome is a conjunction of event outcomes. Formally, let A be the event space, consisting of N individual events $A_1, \dots, A_N$, which may or may not be mutually independent. We define the state space Ω to be the set of all possible joint outcomes for the N events, so that its size is $|\Omega| = 2^N$. A Boolean betting market allows traders to bet on Boolean formulas of these events and their negations. A security φ pays off $1 if the Boolean formula φ is satisfied by the final outcome and $0 otherwise. For example, a security A1 ∨ A2 pays off $1 if and only if at least one of events A1 and A2 occurs, while a security A1 ∧ A3 ∧ ¬A5 pays off $1 if and only if the events A1 and A3 both occur and the event A5 does not. Following the notational conventions of Fortnow et al. [11], we use ω ∈ φ to mean that the outcome ω satisfies the Boolean formula φ. Similarly, ω ∉ φ means that the outcome ω does not satisfy φ.
In this section, we focus our attention to LMSR markets for a very simple Boolean betting language, Boolean formulas of two events. We show that even when bets are only allowed to be placed on disjunctions or conjunctions of two events, it is still #P-hard to calculate the prices, the value of the cost function, and payments of transactions in a Boolean betting market operated by a LMSR market maker.
Let X be the set containing all elements of A and their negations. In other words, each event outcome Xi ∈ X is either Aj or ¬Aj for some Aj ∈ A. We begin by considering the scenario in which traders may only trade securities Xi ∨ Xj corresponding to disjunctions of any two event outcomes.
Let qi,j be the total number of shares purchased by all traders for the security Xi ∨ Xj , which pays off $1 in the event of any outcome ω such that ω ∈ (Xi ∨ Xj ) and $0 otherwise. From Equation 4, we can calculate the instantaneous price for the security Xi ∨ Xj for any two event outcomes Xi, Xj ∈ X as
$$p_{i,j}(Q) = \frac{\sum_{\omega\in\Omega:\,\omega\in(X_i\vee X_j)}\;\prod_{1\le i'<j'\le 2N:\,\omega\in(X_{i'}\vee X_{j'})} e^{q_{i',j'}/b}}{\sum_{\tau\in\Omega}\;\prod_{1\le i'<j'\le 2N:\,\tau\in(X_{i'}\vee X_{j'})} e^{q_{i',j'}/b}}\,. \qquad (11)$$
Note that if Xi = ¬Xj , pi,j(Q) is always $1 regardless of how many shares of other securities have been purchased. According to Equation 3, the cost function is
$$C(Q) = b \log \sum_{\omega\in\Omega}\;\prod_{1\le i<j\le 2N:\,\omega\in(X_i\vee X_j)} e^{q_{i,j}/b}\,. \qquad (12)$$
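Again, for a handful of events the definitions in Equations 11 and 12 can be evaluated directly by enumerating all 2^N joint outcomes. The sketch below (our own code; the literal encoding, the function name, and the example quantities are assumptions) does so for disjunction securities.

```python
# Naive reference implementation of Equations 11 and 12: literal 2k stands for
# A_{k+1} and literal 2k+1 for NOT A_{k+1}; the 2^N joint outcomes are enumerated.
import itertools
import math

def disjunction_market(q, num_events, b):
    """q[(u, v)] = shares of the security 'literal u OR literal v'."""
    def holds(lit, w):                      # does literal `lit` hold in outcome w?
        return w[lit // 2] if lit % 2 == 0 else not w[lit // 2]
    outcomes = list(itertools.product([False, True], repeat=num_events))
    weights = {w: math.exp(sum(quv for (u, v), quv in q.items()
                               if holds(u, w) or holds(v, w)) / b)
               for w in outcomes}
    total = sum(weights.values())
    prices = {(u, v): sum(wt for w, wt in weights.items()
                          if holds(u, w) or holds(v, w)) / total
              for (u, v) in q}
    return prices, b * math.log(total)

b, N = 1.0, 3
q = {(0, 2): 2.0,   # shares of "A_1 OR A_2"
     (1, 3): 0.0}   # "NOT A_1 OR NOT A_2"
prices, cost = disjunction_market(q, N, b)
print(prices[(0, 2)], prices[(1, 3)], cost)
```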
Theorem 5 shows that computing prices and the value of the cost function in such a market is #P-hard, via a reduction from the #2-SAT problem. 1
Theorem 5. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to disjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting.
1 This can also be proved via a reduction from counting linear extensions using a similar technique to the proof of Theorem 3, but the reduction from #2-SAT is more natural.
Proof. Suppose we are given a 2-CNF (Conjunctive Normal Form) formula
$$(X_{i_1} \vee X_{j_1}) \wedge (X_{i_2} \vee X_{j_2}) \wedge \cdots \wedge (X_{i_k} \vee X_{j_k}) \qquad (13)$$
with k clauses, where each clause is a disjunction of two literals (i.e., events and their negations). Assume any redundant terms have been removed.
The structure of the proof is similar to that of the pair betting case. We consider a Boolean betting market with N events, and show how to construct a sequence of trades that provides, through prices or the value of the cost function, the number of satisfying assignments of the 2-CNF formula.
We create k trading periods. At period t, a quantity $b\ln(2^N)$ of the security $X_{i_t} \vee X_{j_t}$ is purchased. We denote by $p^t_{i,j}$ and $q^t_{i,j}$ respectively the price and the outstanding quantity of the security $X_i \vee X_j$ at the end of period t. Suppose the market starts with 0 shares of every security. Note that $q^s_{i_t,j_t} = 0$ if $s < t$ and $q^s_{i_t,j_t} = b\ln(2^N)$ if $s \ge t$. Let
$$N_t(i,j) = \sum_{\omega\in\Omega:\,\omega\in(X_i\vee X_j)}\;\prod_{1\le i'<j'\le 2N:\,\omega\in(X_{i'}\vee X_{j'})} e^{q^t_{i',j'}/b}, \qquad D_t = \sum_{\omega\in\Omega}\;\prod_{1\le i'<j'\le 2N:\,\omega\in(X_{i'}\vee X_{j'})} e^{q^t_{i',j'}/b}.$$
Thus $p^t_{i_t,j_t} = N_t(i_t,j_t)/D_t$.
Since only one security Xi 1 ∨ Xj 1 has been purchased in period 1, we get
$$D_1 = \sum_{\omega\in\Omega:\,\omega\in(X_{i_1}\vee X_{j_1})} 2^N + \sum_{\omega\in\Omega:\,\omega\notin(X_{i_1}\vee X_{j_1})} 1 = 3\cdot 2^{2N-2} + 2^{N-2}.$$
We then show that $D_k$ can be calculated inductively from $D_1$. As the only security purchased in period t is $X_{i_t} \vee X_{j_t}$ in quantity $b\ln(2^N)$, we obtain
$$N_t(i_t,j_t) = N_{t-1}(i_t,j_t)\, e^{b\ln(2^N)/b} = 2^N N_{t-1}(i_t,j_t).$$
Therefore,
$$\frac{p^t_{i_t,j_t}}{p^{t-1}_{i_t,j_t}} = \frac{N_t(i_t,j_t)/D_t}{N_{t-1}(i_t,j_t)/D_{t-1}} = 2^N\,\frac{D_{t-1}}{D_t}\,,$$
and we get
$$D_k = (2^N)^{k-1} \left( \prod_{\ell=2}^{k} \frac{p^{\ell-1}_{i_\ell,j_\ell}}{p^{\ell}_{i_\ell,j_\ell}} \right) D_1\,.$$
In addition, since the cost function at the end of period k can be expressed as
C(Q) = b log D k ,
D k can also be computed efficiently from the cost function in period k.
We now show that we can deduce from $D_k$ the number of satisfying assignments of the 2-CNF formula (Equation 13). Indeed, each term in the sum
$$\sum_{\omega\in\Omega}\;\prod_{1\le i'<j'\le 2N:\,\omega\in(X_{i'}\vee X_{j'})} e^{q^k_{i',j'}/b}$$
that corresponds to an outcome ω satisfying the formula is exactly $2^{kN}$, since exactly k terms in the product equal $2^N$ and the rest equal 1. In contrast, each term corresponding to an outcome ω that does not satisfy the 2-CNF formula is at most $2^{(k-1)N}$, since at most k − 1 terms in the product equal $2^N$ and the rest equal 1. Since the total number of outcomes is $2^N$, the sum of all terms corresponding to outcomes that do not satisfy (13) is at most $2^N \cdot 2^{(k-1)N} = 2^{kN}$, and is strictly less than $2^{kN}$ unless the number of satisfying assignments is 0. Thus the number of satisfying assignments is $\lfloor D_k/2^{kN} \rfloor$.
We know that computing the number of satisfying assignments of a 2-CNF formula is #P-hard. We have shown how to compute it in polynomial time using prices or the value of the cost function in a Boolean betting market of N events. Therefore, both computing prices and computing the value of the cost function in a Boolean betting market are #P-hard.
Corollary 6. Computing the payment of a transaction in a LMSR for Boolean betting is #P-hard when traders can only bet on disjunctions of two events.
The proof is nearly identical to the proof of Corollary 2.
If we impose that participants in a Boolean betting market may only trade securities corresponding to conjunctions of any two event outcomes, Ai ∧ Aj , the following Corollary gives the complexity results for this situation.
Corollary 7. It is #P-hard to compute instantaneous prices in a LMSR market for Boolean betting when bets are restricted to conjunctions of two event outcomes. Additionally, it is #P-hard to compute the value of the cost function in this setting, and #P-hard to compute the payment for a transaction.
Proof. Buying q shares of security Ai ∧ Aj is equivalent to selling q shares of ¬Ai ∨ ¬Aj . Thus if we can operate a Boolean betting market for securities of the type Ai ∧ Aj in polynomial time, we can also operate a Boolean betting market for securities of the type Ai ∨ Aj in polynomial time. The result then follows from Theorem 5 and Corollary 6.
AN APPROXIMATION ALGORITHM FOR SUBSET BETTING
There is an interesting relationship between logarithmic market scoring rule market makers and a common class of algorithms for online learning in an experts setting. In this section, we elaborate on this connection, and show how results from the online learning community can be used to prove new results about an approximation algorithm for subset betting.
The Experts Setting
We begin by describing the standard model of online learning with expert advice [19,12,27]. In this model, at each time t ∈ {1, · · · , T }, each expert i ∈ {1, · · · , n} receives a loss $\ell_{i,t} \in [0,1]$. The cumulative loss of expert i at time T is $L_{i,T} = \sum_{t=1}^{T} \ell_{i,t}$. No statistical assumptions are made about these losses, and in general, algorithms are expected to perform well even if the sequence of losses is chosen by an adversary.
An algorithm A maintains a current weight $w_{i,t}$ for each expert i, where $\sum_{i=1}^{n} w_{i,t} = 1$. These weights can be viewed as a distribution over the experts. The algorithm then receives its own instantaneous loss $\ell_{A,t} = \sum_{i=1}^{n} w_{i,t}\,\ell_{i,t}$, which may be interpreted as the expected loss of the algorithm when choosing an expert according to the current distribution. The cumulative loss of A up to time T is then defined in the natural way as $L_{A,T} = \sum_{t=1}^{T} \ell_{A,t} = \sum_{t=1}^{T}\sum_{i=1}^{n} w_{i,t}\,\ell_{i,t}$.
A common goal in such online learning settings is to minimize an algorithm's regret, defined as the difference between the cumulative loss of the algorithm and the cumulative loss of an algorithm that would have "chosen" the best expert in hindsight by setting its weight to 1 throughout all the periods. Formally, the regret is given by $L_{A,T} - \min_{i\in\{1,\dots,n\}} L_{i,T}$.
Many algorithms that have been analyzed in the online experts setting are based on exponential weight updates. These exponential updates allow the algorithm to quickly transfer weight to an expert that is outperforming the others. For example, in the Weighted Majority algorithm of Littlestone and Warmuth [19], the weight on each expert i is defined as
$$w_{i,t} = \frac{w_{i,t-1}\, e^{-\eta \ell_{i,t}}}{\sum_{j=1}^{n} w_{j,t-1}\, e^{-\eta \ell_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}}\,, \qquad (14)$$
where η is the learning rate, a small positive parameter that controls the magnitude of the updates. The following theorem gives a bound on the regret of Weighted Majority. For a proof of this result and a nice overview of learning with expert advice, see, for example, Cesa-Bianchi and Lugosi [3].
Theorem 8. Let A be the Weighted Majority algorithm with parameter η. After a sequence of T trials,
$$L_{A,T} - \min_{i\in\{1,\dots,n\}} L_{i,T} \le \eta T + \frac{\ln(n)}{\eta}\,.$$
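A compact exponential-weights sketch following the weight definition in Equation 14 (our own code, not the implementation of [19]; the random losses and the parameter choices are assumptions) makes the bound of Theorem 8 easy to check numerically.

```python
# Weighted Majority with the weights of Equation 14, plus the algorithm's
# cumulative loss, so the regret can be compared against the Theorem 8 bound.
import math
import random

def weighted_majority(losses, eta):
    """losses[t][i] in [0, 1]; returns (algorithm loss, best expert loss)."""
    n = len(losses[0])
    cum = [0.0] * n                         # cumulative losses L_{i,t}
    alg_loss = 0.0
    for loss_t in losses:
        cum = [L + li for L, li in zip(cum, loss_t)]           # L_{i,t}
        z = sum(math.exp(-eta * L) for L in cum)
        w = [math.exp(-eta * L) / z for L in cum]              # Equation 14
        alg_loss += sum(wi * li for wi, li in zip(w, loss_t))  # instantaneous loss of A
    return alg_loss, min(cum)

random.seed(0)
T, n, eta = 200, 5, 0.1
losses = [[random.random() for _ in range(n)] for _ in range(T)]
la, lbest = weighted_majority(losses, eta)
print(la - lbest, eta * T + math.log(n) / eta)   # observed regret vs. the Theorem 8 bound
```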
Relationship to LMSR Markets
There is a manifest similarity between the expert weights used by Weighted Majority and the prices in the LMSR market. One might ask if the results from the experts setting can be applied to the analysis of prediction markets. Our answer is yes. In fact, it is possible to use Theorem 8 to rediscover the well-known bound of b ln(n) for the loss of an LMSR market maker with n outcomes.
Let ǫ be a limit on the number of shares that a trader may purchase or sell at each time step; in other words, if a trader would like to purchase or sell q shares, this purchase must be broken down into ⌈q/ǫ⌉ separate purchases of ǫ or less shares. Note that the total number of time steps T needed to execute such a sequence of purchases and sales is proportional to 1/ǫ.
We will construct a sequence of loss functions in a setting with n experts to induce a sequence of weight matrices that correspond to the price matrices of the LMSR market. At each time step t, let pi,t ∈ [0, 1] be the instantaneous price of security i at the end of period t, and let qi,t ∈ [−ǫ, ǫ] be the number of shares of security i purchased during period t. Let Qi,t be the total number of shares of security i that have been purchased up to time t. Now, let's define the instantaneous loss of each expert as ℓi,t = (2ǫ − qi,t)/(ηb). First notice that this loss is always in [0, 1] as long as η ≥ 2ǫ/b. From Equations 2 and 14, at each time t,
$$p_{i,t} = \frac{e^{Q_{i,t}/b}}{\sum_{j=1}^{n} e^{Q_{j,t}/b}} = \frac{e^{2\epsilon t/b - \eta L_{i,t}}}{\sum_{j=1}^{n} e^{2\epsilon t/b - \eta L_{j,t}}} = \frac{e^{-\eta L_{i,t}}}{\sum_{j=1}^{n} e^{-\eta L_{j,t}}} = w_{i,t}\,.$$
Applying Theorem 8 and rearranging terms, we find that
$$\max_{i\in\{1,\dots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T}\sum_{i=1}^{n} p_{i,t}\, q_{i,t} \;\le\; \eta^2 T b + b\ln(n).$$
The first term on the left-hand side is the maximum payment that the market maker needs to make, while the second term captures the total money the market maker has received. The right-hand side is clearly minimized when η is set as small as possible. Setting η = 2ǫ/b gives
$$\max_{i\in\{1,\dots,n\}} \sum_{t=1}^{T} q_{i,t} - \sum_{t=1}^{T}\sum_{i=1}^{n} p_{i,t}\, q_{i,t} \;\le\; \frac{4\epsilon^2 T}{b} + b\ln(n).$$
Since T = O(1/ǫ), the term $4\epsilon^2 T/b$ goes to 0 as ǫ becomes very small. Thus in the limit as ǫ → 0, we get the well-known result that the worst-case loss of the market maker is bounded by b ln(n).
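The correspondence just described is easy to verify numerically: the sketch below (our own code; parameter choices and the random trade sequence are assumptions) feeds the exponential-weights update the losses ℓ_{i,t} = (2ǫ − q_{i,t})/(ηb) with η = 2ǫ/b and checks at every step that the resulting weights coincide with the LMSR prices of Equation 2.

```python
# Check that LMSR prices equal the expert weights under the constructed losses.
import math
import random

random.seed(1)
n, b, eps = 4, 5.0, 0.01
eta = 2 * eps / b
Q = [0.0] * n                      # total shares of each outcome security
L = [0.0] * n                      # cumulative expert losses
for t in range(300):
    q_t = [random.uniform(-eps, eps) for _ in range(n)]   # one small trade per step
    Q = [Qi + qi for Qi, qi in zip(Q, q_t)]
    L = [Li + (2 * eps - qi) / (eta * b) for Li, qi in zip(L, q_t)]
    z_p = sum(math.exp(Qi / b) for Qi in Q)
    z_w = sum(math.exp(-eta * Li) for Li in L)
    for i in range(n):
        assert abs(math.exp(Q[i] / b) / z_p - math.exp(-eta * L[i]) / z_w) < 1e-9
print("LMSR prices match the exponential weights at every step")
```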
Considering Permutations
Recently Helmbold and Warmuth [16] have shown that many results from the standard experts setting can be extended to a setting in which, instead of competing with the best expert, the goal is to compete with the best permutation over n items. Here each permutation suffers a loss at each time step, and the goal of the algorithm is to maintain a weighting over permutations such that the cumulative regret to the best permutation is small. It is infeasible to treat each permutation as an expert and run a standard algorithm since this would require updating n! weights at each time step. Instead, they show that when the loss has a certain structure (in particular, when the loss of a permutation is the sum of the losses of each of the n mappings), an alternate algorithm can be used that requires tracking only n 2 weights in the form of an n × n doubly stochastic matrix.
Formally, let $W^t$ be a doubly stochastic matrix of weights maintained by the algorithm A at time t. Here $W^t_{i,j}$ is the weight corresponding to the probability associated with item i being mapped into position j. Let $L^t \in [0,1]^{n\times n}$ be the loss matrix at time t. The instantaneous loss of a permutation σ at time t is $\ell_{\sigma,t} = \sum_{i=1}^{n} L^t_{i,\sigma(i)}$. The instantaneous loss of A is $\ell_{A,t} = \sum_{i=1}^{n}\sum_{j=1}^{n} W^t_{i,j} L^t_{i,j}$, the matrix dot product between $W^t$ and $L^t$. Notice that $\ell_{A,t}$ is equivalent to the expectation of $\ell_{\sigma,t}$ over permutations σ drawn according to $W^t$. The goal of the algorithm is to minimize the cumulative regret to the best permutation, $L_{A,T} - \min_{\sigma\in\Omega} L_{\sigma,T}$, where the cumulative loss is defined as before.
Helmbold and Warmuth go on to present an algorithm called PermELearn that updates the weight matrix in two steps. First, it creates a temporary matrix $W'$, such that for every i and j, $W'_{i,j} = W^t_{i,j}\, e^{-\eta L^t_{i,j}}$. It then obtains $W^{t+1}$ by repeatedly rescaling the rows and columns of $W'$ until the matrix is doubly stochastic. Alternately rescaling rows and columns of a matrix M in this way is known as Sinkhorn balancing [22]. Normalizing the rows of a matrix is equivalent to pre-multiplying by a diagonal matrix, while normalizing the columns is equivalent to post-multiplying by a diagonal matrix. Sinkhorn [22] shows that this procedure converges to a unique doubly stochastic matrix of the form RMC, where R and C are diagonal matrices, if M is a positive matrix. Although there are cases in which Sinkhorn balancing does not converge in finite time, many results show that the number of Sinkhorn iterations needed to scale a matrix so that row and column sums are 1 ± ǫ is polynomial in 1/ǫ [1,17,18].
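A bare-bones version of the Sinkhorn balancing step looks as follows (our own code; the tolerance and the iteration cap are arbitrary choices, not values from [22]).

```python
# Alternately normalize rows and columns of a positive matrix until it is
# (approximately) doubly stochastic.
def sinkhorn(M, tol=1e-9, max_iters=10000):
    n = len(M)
    M = [row[:] for row in M]
    for _ in range(max_iters):
        for i in range(n):                                   # normalize rows
            s = sum(M[i])
            M[i] = [x / s for x in M[i]]
        col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
        if all(abs(c - 1.0) < tol for c in col_sums):
            return M
        for j in range(n):                                   # normalize columns
            for i in range(n):
                M[i][j] /= col_sums[j]
    return M

W = sinkhorn([[1.0, 2.0, 3.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 4.0]])
print([sum(row) for row in W])                               # row sums ~ 1
print([sum(W[i][j] for i in range(3)) for j in range(3)])    # column sums ~ 1
```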
The following theorem [16] bounds the cumulative loss of PermELearn in terms of the cumulative loss of the best permutation.
Theorem 9. (Helmbold and Warmuth [16]) Let A be the PermELearn algorithm with parameter η. After a sequence of T trials,
$$L_{A,T} \le \frac{n \ln(n) + \eta \min_{\sigma\in\Omega} L_{\sigma,T}}{1 - e^{-\eta}}\,.$$
Approximating Subset Betting
Using the PermELearn algorithm, it is simple to approximate prices for subset betting in polynomial time. We start with a n × n price matrix P 1 in which all entries are 1/n. As before, traders may purchase securities of the form i|Φ that pay off $1 if and only if horse or candidate i finishes in a position j ∈ Φ, or securities of the form Ψ|j that pay off $1 if and only if a horse or candidate i ∈ Ψ finishes in position j.
As in Section 6.2, each time a trader purchases or sells q shares, the purchase or sale is broken up into ⌈q/ǫ⌉ purchases or sales of ǫ shares or less, where ǫ > 0 is a small constant. 2 Thus we can treat the sequence of purchases as a sequence of T purchases of ǫ or less shares, where T = O(1/ǫ). Let q t i,j be the number of shares of securities i|Φ with j ∈ Φ or Ψ|j with i ∈ Ψ purchased at time t; then q t i,j ∈ [−ǫ, ǫ] for all i and j.
The price matrix is updated in two steps. First, a temporary matrix P ′ is created where for every i and j, $P'_{i,j} = P^t_{i,j}\, e^{q^t_{i,j}/b}$, where b > 0 is a parameter playing a similar role to b in Equation 2. Next, P ′ is Sinkhorn balanced to the desired precision, yielding an (approximately) doubly stochastic matrix $P^{t+1}$.
2 We remark that dividing purchases in this way has the negative effect of creating a polynomial time dependence on the quantity of shares purchased. However, this is not a problem if the quantity of shares bought or sold in each trade is bounded to start, which is a reasonable assumption. The additional time required is then linear only in 1/ǫ.
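Putting the two steps together, one update of the approximate subset-betting price matrix can be sketched as follows (our own code; all names, the fixed number of balancing iterations, and the example trade are assumptions, not the authors' implementation).

```python
# One price update of the approximation algorithm: multiply entrywise by
# exp(q_ij / b), then alternately normalize rows and columns (a few Sinkhorn
# iterations) to recover an approximately doubly stochastic matrix P^{t+1}.
import math

def update_prices(P, q_t, b, iters=200):
    n = len(P)
    M = [[P[i][j] * math.exp(q_t[i][j] / b) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        M = [[x / sum(row) for x in row] for row in M]                    # row normalize
        cols = [sum(M[i][j] for i in range(n)) for j in range(n)]
        M = [[M[i][j] / cols[j] for j in range(n)] for i in range(n)]     # column normalize
    return M

n, b, eps = 3, 1.0, 0.05
P = [[1.0 / n] * n for _ in range(n)]     # initial price matrix P^1: all entries 1/n
q_t = [[0.0] * n for _ in range(n)]
q_t[0][2] = eps                           # a small purchase on <candidate 0 | position 2>
P = update_prices(P, q_t, b)
print(P[0][2], [sum(row) for row in P])   # that price rises slightly; rows still sum to ~1
```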
The following lemma shows that updating the price matrix in this way results in a price matrix that is equivalent to the weight matrix of PermELearn with particular loss functions.
Lemma 10. The sequence of price matrices obtained by the approximation algorithm for subset betting on a sequence of purchases q t ∈ [−ǫ, ǫ] n×n is equivalent to the sequence of weight matrices obtained by running PermELearn(η) on a sequence of losses L t with
$$L^t_{i,j} = \frac{2\epsilon - q^t_{i,j}}{\eta b}$$
for all i and j, for any η ≥ 2ǫ/b.
Proof. First note that for any η ≥ 2ǫ/b, L t i,j ∈ [0, 1] for all t, i, and j, so the loss matrix is valid for PermELearn. P 1 and W 1 both contain all entries of 1/n. Assume that P t = W t . When updating weights for time t + 1, for all i and j,
$$P'_{i,j} = P^t_{i,j}\, e^{q^t_{i,j}/b} = W^t_{i,j}\, e^{q^t_{i,j}/b} = e^{2\epsilon/b}\, W^t_{i,j}\, e^{-2\epsilon/b + q^t_{i,j}/b} = e^{2\epsilon/b}\, W^t_{i,j}\, e^{-\eta L^t_{i,j}} = e^{2\epsilon/b}\, W'_{i,j}\,.$$
Since the matrix W ′ is a constant multiple of P ′ , the Sinkhorn balancing step will produce the same matrices.
Using this lemma, we can show that the difference between the amount of money that the market maker must distribute to traders in the worst case (i.e. when the true outcome is the outcome that pays off the most) and the amount of money collected by the market is bounded. We will see in the corollary below that as ǫ approaches 0, the worst case loss of the market maker approaches bn ln(n), regardless of the number of shares purchased. Unfortunately, if ǫ > 0, this bound can grow arbitrarily large.
Theorem 11. For any sequence of valid subset betting purchases q t where q t i,j ∈ [−ǫ, ǫ] for all t, i, and j, let P 1 , · · · , P T be the price matrices obtained by running the subset betting approximation algorithm. Then
$$\max_{\sigma \in S_n} \sum_{t=1}^{T} \sum_{i=1}^{n} q^t_{i,\sigma(i)} \;-\; \sum_{t=1}^{T} \sum_{i=1}^{n} \sum_{j=1}^{n} P^t_{i,j}\, q^t_{i,j} \;\le\; \frac{2\epsilon/b}{1 - e^{-2\epsilon/b}}\, b\, n \ln(n) \;+\; \left(\frac{2\epsilon/b}{1 - e^{-2\epsilon/b}} - 1\right) 2\epsilon n T.$$
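A short calculation (ours, added only for clarity) makes the limiting behaviour explicit: writing x = 2ǫ/b, a first-order expansion of the exponential gives
$$\lim_{x \to 0} \frac{x}{1 - e^{-x}} = \lim_{x \to 0} \frac{x}{x - x^{2}/2 + O(x^{3})} = 1,$$
so as ǫ → 0 the first term of the bound tends to b n ln(n) and the coefficient of 2ǫnT tends to 0, which is exactly the worst-case loss behaviour described above.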
| 7,945 |
0801.1300
|
2952943368
|
We consider the following problem. Given a 2-CNF formula, is it possible to remove at most @math clauses so that the resulting 2-CNF formula is satisfiable? This problem is known to different research communities in Theoretical Computer Science under the names 'Almost 2-SAT', 'All-but- @math 2-SAT', '2-CNF deletion', '2-SAT deletion'. The status of fixed-parameter tractability of this problem is a long-standing open question in the area of Parameterized Complexity. We resolve this open question by proposing an algorithm which solves this problem in @math and thus we show that this problem is fixed-parameter tractable.
|
The parameterized MAX-SAT problem (a complementary problem to the one considered in the present paper), where the goal is to satisfy at least @math clauses of arbitrary sizes, has received considerable attention from researchers, resulting in a series of improvements of the worst-case upper bound on the runtime for this problem. Currently the best algorithm is given in @cite_12 and solves this problem in @math , where @math is the size of the given formula.
|
{
"abstract": [
"Abstract In this paper, we present improved exact and parameterized algorithms for the maximum satisfiability problem. In particular, we give an algorithm that computes a truth assignment for a boolean formula F satisfying the maximum number of clauses in time O(1.3247 m | F |), where m is the number of clauses in F , and | F | is the sum of the number of literals appearing in each clause in F . Moreover, given a parameter k , we give an O(1.3695 k +| F |) parameterized algorithm that decides whether a truth assignment for F satisfying at least k clauses exists. Both algorithms improve the previous best algorithms by Bansal and Raman for the problem."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2148185803"
]
}
|
Almost 2-SAT is Fixed-Parameter Tractable
|
We consider the following problem. Given a 2-cnf formula, is it possible to remove at most k clauses so that the resulting 2-cnf formula is satisfiable? This problem is known to different research communities in Theoretical Computer Science under the names 'Almost 2-SAT', 'All-but-k 2-SAT', '2-cnf deletion', '2-SAT deletion'. The status of fixed-parameter tractability of this problem is a long-standing open question in the area of Parameterized Complexity. The question regarding the fixed-parameter tractability of this problem was first raised in 1997 by Mahajan and Raman [12] (see [13] for the journal version). This question has been posed in the book of Niedermeier [16], being referred to as one of the central challenges for parameterized algorithm design. Finally, in July 2007, this question was included by Fellows in the list of open problems of the Dagstuhl seminar on Parameterized Complexity [6]. In this paper we resolve this open question by proposing an algorithm that solves this problem in O(15^k * k * m^3) time. Thus we show that this problem is fixed-parameter tractable (fpt).
Overview of the algorithm
We start from the terminology we adopt regarding the names of the considered problems. We call Almost 2-SAT (abbreviated as 2-ASAT ) the optimization problem whose output is the smallest subset of clauses that have to be removed from the given 2-CNF formula so that the resulting 2-CNF formula is satisfiable. The parameterized 2-ASAT problem gets as additional input a parameter k, and the output of this problem is a set of at most k clauses whose removal makes the given 2-CNF formula satisfiable, in case such a set exists. If there is no such set, the output is 'NO'. So, the algorithm proposed in this paper solves the parameterized 2-ASAT problem.
We introduce a variation of the 2-ASAT problem called the annotated 2-ASAT problem with a single literal abbreviated as 2-ASLASAT. The input of this problem is (F, L, l), where F is a 2-CNF formula, L is a set of literals such that F is satisfiable w.r.t. L (i.e. has a satisfying assignment which does not include negations of literals of L), l is a single literal. The task is to find a smallest subset of clauses of F such that after their removal the resulting formula is satisfiable w.r.t. (L ∪ {l}). The parameterized version of the 2-ASLASAT problem is defined analogously to the parameterized 2-ASAT problem.
The description of the algorithm for the parameterized 2-ASAT problem is divided into two parts. In the first part (which is the most important one) we provide an algorithm which solves the parameterized 2-ASLASAT problem in O * (5 k ) time. In the second part we show that the parameterized 2-ASAT problem can be solved by O * (3 k ) applications of the algorithm solving the parameterized 2-ASLASAT problem. The resulting runtime follows from the product of the last two complexity expressions. The transformation of the 2-ASAT problem into the 2-ASLASAT problem is based on iterative compression and can be seen as an adaptation of the method employed in [9] in order to solve the graph bipartization problem. In the rest of the subsection we overview the first part.
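The overall shape of the second part can be sketched as a generic iterative-compression loop (a minimal illustration in Python, not the paper's pseudocode; the callback `compress` stands for the compression step, which the paper implements via the 2-ASLASAT algorithm):

    def almost_2sat(clauses, k, compress):
        # Schematic iterative-compression loop: maintain a culprit set of size <= k
        # for the prefix of clauses processed so far.  `compress` is an assumed routine
        # that, given a culprit set of size k+1, either shrinks it to size <= k or
        # reports that no such set exists for the current prefix.
        prefix, culprit = [], []
        for c in clauses:
            prefix.append(c)
            culprit = culprit + [c]      # still a culprit set of the prefix, size <= k+1
            if len(culprit) > k:
                culprit = compress(prefix, culprit, k)
                if culprit is None:
                    return None          # no culprit set of size <= k exists
        return culprit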
In order to show that the 2-ASLASAT problem is FPT, we represent the 2-ASLASAT problem as a separation problem and prove a number of theorems based on this view. In particular, we introduce a notion of a walk from a literal l ′ to a literal l ′′ in a 2-CNF formula F . We define the walk as a sequence (l ′ ∨ l 1 ), (¬l 1 ∨ l 2 ), . . . , (¬l k−1 ∨ l k ), (¬l k ∨ l ′′ ) of clauses of F such that literals are ordered within each clause so that the second literal of each clause except the last one is the negation of the first literal of the next clause. Then we prove that, given an instance (F, L, l) of the 2-ASLASAT problem, F is unsatisfiable w.r.t. L ∪ {l} if and only if there is a walk from ¬L (i.e. from the set of negations of the literals of L) to ¬l or a walk from ¬l to ¬l. Thus the 2-ASLASAT problem can be viewed as a problem of finding the smallest set of clauses whose removal breaks all these walks.
Next we define the notion of a path of F as a walk of F with no repeated clauses. Based on this notion we prove a Menger-like theorem. In particular, given an instance (F, L, l) of the 2-ASLASAT problem, we show that the smallest number of clauses whose removal breaks all the paths from ¬L to ¬l equals the largest number of clause-disjoint paths from ¬L to ¬l (for this result it is essential that F is satisfiable w.r.t. L). Based on this result, we show that the size of the above smallest separator of ¬L from ¬l can be computed in a polynomial time by a Ford-Fulkerson-like procedure. Thus this size is a polynomially computable lower bound on the size of the solution of (F, L, l).
Next we introduce the notion of a neutral literal l * of (F, L, l) whose main property is that the number of clauses which separate ¬(L ∪ {l * }) from ¬l equals the number of clauses separating ¬L from ¬l. Then we prove a theorem stating that in this case the size of a solution of (F, L ∪ {l * }, l) does not exceed the size of a solution of (F, L, l). The strategy of the proof is similar to the strategy of the proof of the main theorem of [2].
Having proved all the above theorems, we present the algorithm solving the parameterized 2-ASLASAT problem on input (F, L, l, k). The algorithm selects a clause C. If C includes a neutral literal l * then the algorithm applies itself recursively to (F, L ∪ {l * }, l, k) (this operation is justified by the theorem in the previous paragraph). If not, the algorithm produces at most three branches: in one of them it removes C from F and decreases the parameter; in each of the other branches the algorithm adds one of the literals of C to L and applies itself recursively without changing the size of the parameter. The search tree produced by the algorithm is bounded because on each branch either the parameter is decreased or the lower bound on the solution size is increased (because the literals of the selected clause are not neutral). Thus on each branch the gap between the parameter and the lower bound on the solution size is decreased, which ensures that the size of the search tree depends exponentially only on k and not on the size of F .
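The branching scheme just described can be rendered schematically as follows (an illustrative sketch only, not the paper's actual Steps 1-9; literals are ints with -x denoting the negation of x, and swrt, sep_size, pick_clause and is_neutral are assumed helpers, simple versions of most of which are sketched later in this section; the paper's exact termination tests are only approximated here):

    def find_cs(F, L, l, k):
        # F: collection of 2-clauses (pairs of int literals); L: set of literals.
        if swrt(F, L | {l}):
            return set()                                  # nothing needs to be removed
        if k == 0 or sep_size(F, {-x for x in L}, -l) > k:
            return None                                   # lower bound (Corollary 1) exceeds the budget
        C = pick_clause(F, L, l)                          # clause chosen as in Steps 5/6
        for lit in C:                                     # neutral-literal shortcut (Theorem 3)
            if is_neutral(F, L, l, lit):
                return find_cs(F, L | {lit}, l, k)
        candidates = []
        dropped = find_cs([cl for cl in F if cl != C], L, l, k - 1)   # branch: delete C
        if dropped is not None:
            candidates.append(dropped | {C})
        for lit in C:                                     # branches: annotate a literal of C
            if -lit not in L and lit != -l:
                res = find_cs(F, L | {lit}, l, k)
                if res is not None:
                    candidates.append(res)
        return min(candidates, key=len) if candidates else None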
Structure of the Paper
In Section 2 we introduce the terminology which we use in the rest of the paper. In Section 3 we prove the theorems mentioned in the above overview subsection. In Section 4 we present an algorithm for the parameterized 2-ASLASAT problem, prove its correctness and evaluate the runtime. In Section 5 we present the iterative compression based transformation from the parameterized 2-ASAT problem to the parameterized 2-ASLASAT problem.
Terminology
2-CNF Formulas
A CNF formula F is called a 2-CNF formula if each clause of F is of size at most 2. Throughout the paper we make two assumptions regarding the considered 2-CNF formulas. First, we assume that all the clauses of the considered formulas are of size 2. If a formula has a clause (l) of size 1 then this clause is represented as (l ∨ l). Second, everywhere except the very last theorem, we assume that all the clauses of any considered formula are pairwise distinct. 1 This assumption allows us to represent the operation of removal of clauses from a formula in a set-theoretical manner. In particular, let S be a set of clauses 2 . Then F \ S is a 2-CNF formula which is the AND of clauses of F that are not contained in S. The result of removal of a single clause C is denoted by F \ C rather than F \ {C}.
Let F , S, C, L be a 2-CNF formula, a set of clauses, a single clause, and a set of literals. Then V ar(F ), V ar(S), V ar(C), V ar(L) denote the set of variables whose literals appear in F , S, C, and L, respectively. For a single literal l, we denote by V ar(l) the variable of l. Also we denote by Clauses(F ) the set of clauses of F .
A set of literals L is called non-contradictory if it does not contain a literal and its negation. A literal l satisfies a clause (l 1 ∨l 2 ) if l = l 1 or l = l 2 . Given a 2-CNF formula F , a non-contradictory set of literals L such that V ar(F ) = V ar(L) and each clause of F is satisfied by at least one literal of L, we call L a satisfying assignment of F . F is satisfiable if it has at least one satisfying assignment. Given a set of literals L, we denote by ¬L the set consisting of negations of all the literals of L. For example, if L = {l 1 , l 2 , ¬l 3 } then ¬L = {¬l 1 , ¬l 2 , l 3 }.
Let F be a 2-CNF formula and L be a set of literals. F is satisfiable with respect to L if F has a satisfying assignment P which does not intersect with ¬L 3 . The notion of satisfiability of a 2-CNF formula with respect to the given set of literals will be very frequently used in the paper, hence, in order to save space, we introduce a special notation for this notion. In particular, we say that SW RT (F, L) is true (false) if F is, respectively, satisfiable (not satisfiable) with respect to L. If L consists of a single literal l then we write SW RT (F, l) rather than SW RT (F, {l}).
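For concreteness, SW RT (F, L) can be tested by the reduction used later in the running-time analysis: add a unit clause (l ′ ∨ l ′ ) for every l ′ ∈ L and check satisfiability of the resulting 2-CNF formula. The sketch below is illustrative only (our encoding: literals are ints, -x is ¬x; networkx and the standard strongly-connected-components test for 2-SAT are our choices):

    import networkx as nx

    def swrt(clauses, required):
        # SWRT(F, L): does F have a satisfying assignment avoiding the negations
        # of the literals in `required`?  Equivalent to 2-SAT on F plus unit clauses (l v l).
        all_clauses = list(clauses) + [(l, l) for l in required]
        G = nx.DiGraph()
        for (a, b) in all_clauses:
            G.add_edge(-a, b)       # ~a -> b
            G.add_edge(-b, a)       # ~b -> a
        comp = {}
        for i, scc in enumerate(nx.strongly_connected_components(G)):
            for lit in scc:
                comp[lit] = i
        # Unsatisfiable iff some variable and its negation share a strongly connected component.
        return all(comp[v] != comp[-v] for v in comp if v > 0)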
Walks and paths
Definition 1. A walk of the given 2-CNF formula F is a non-empty sequence w = (C 1 , . . . , C q ) of (not necessarily distinct) clauses of F having the following property. For each C i one of its literals is specified as the first literal of C i , the other literal is the second literal, and for any two consecutive clauses C i and C i+1 the second literal of C i is the negation of the first literal of C i+1 .
Let w = (C 1 , . . . , C q ) be a walk and let l ′ and l ′′ be the first literal of C 1 and the second literal of C q , respectively. Then we say that l ′ is the first literal of w, that l ′′ is the last literal of w, and that w is a walk from l ′ to l ′′ . Let L be a set of literals such that l ′ ∈ L. Then we say that w is a walk from L. Let C = (l 1 ∨ l 2 ) be a clause of w. Then l 1 is a first literal of C with respect to (w.r.t.) w if l 1 is the first literal of some C i such that C = C i . A second literal of a clause with respect to a walk is defined accordingly. (Generally a literal of a clause may be both a first and a second with respect to the given walk, which is shown in the example below). We denote by reverse(w) a walk (C q , . . . , C 1 ) in which the first and the second literals of each entry are exchanged w.r.t. w. Given a clause C ′′ = (¬l ′′ ∨ l * ), we denote by w + (¬l ′′ ∨ l * ) the walk obtained by appending C ′′ to the end of w and setting ¬l ′′ to be the first literal of the last entry of w + (¬l ′′ ∨ l * ) and l * to be the second one. More generally, let w ′ be a walk whose first literal is ¬l ′′ . Then w +w ′ is the walk obtained by concatenation of w ′ to the end of w with the first and second literals of all entries in w and w ′ preserving their roles in w + w ′ .
Definition 2. A path of a 2-CNF formula F is a walk of F all clauses of which are pairwise distinct.
Consider an example demonstrating the above notions. Let w = (l 1 ∨l 2 ), (¬l 2 ∨ l 3 ), (¬l 3 ∨ l 4 ), (¬l 4 ∨ ¬l 3 ), (l 3 ∨ ¬l 2 ), (l 2 ∨ l 5 ) be a walk of some 2-CNF formula presented so that the first literals of all entries appear before the second literals. Then l 1 and l 5 are the first and the last literals of w, respectively, and hence w is a walk from l 1 to l 5 . The clause (¬l 2 ∨ l 3 ) has an interesting property that both its literals are first literals of this clause with respect to w (and therefore the second literals as well). The second item of w witnesses ¬l 2 being a first literal of (¬l 2 ∨ l 3 ) w.r.t. w (and hence l 3 being a second one), while the second item of w from the end provides the witness for l 3 being a first literal of (¬l 2 ∨ l 3 ) w.r.t. w (and hence ¬l 2 being a second one). The rest of clauses do not possess this property. For example l 1 is the first literal of (l 1 ∨ l 2 ) w.r.t. w (as witnessed by the first entry) but not the second one. Next, reverse(w) = (l 5 ∨ l 2 ), (¬l 2 ∨ l 3 ), (¬l 3 ∨ ¬l 4 ), (l 4 ∨ ¬l 3 ), (l 3 ∨ ¬l 2 ), (l 2 ∨ l 1 ). Let w 1 be the prefix of w containing all the clauses except the last one. Then w = w 1 + (l 2 ∨ l 5 ). Let w 2 be the prefix of w containing the first 4 entries, w 3 be the suffix of w containing the last 2 entries. Then w = w 2 + w 3 . Finally, observe that w is not a path due to the repeated occurrence of clause (¬l 2 ∨ l 3 ), while w 2 is a path.
2.3 2-ASAT and 2-ASLASAT problems.
Definition 3. 1. A Culprit Set (CS) of a 2-CNF formula F is a subset S of Clauses(F ) such that F \ S is satisfiable. 2. Let (F, L, l) be a triple where F is a 2-CNF formula, L is a non-contradictory
set of literals such that SW RT (F, L) is true and l is a literal such that
V ar(l) / ∈ V ar(L). A CS of (F, L, l) is a subset S of Clauses(F ) such that SW RT (F \ S, L ∪ {l}) is true.
Having defined a CS with respect to two different structures, we define problems of finding a smallest CS (SCS) with respect to these structures. In particular, the Almost 2-SAT problem (2-ASAT problem) is defined as follows: given a 2-CNF formula F , find an SCS of F . The Annotated Almost 2-SAT problem with single literal (2-ASLASAT problem) is defined as follows: given the triplet (F, L, l) as in the last item of Definition 3, find an SCS of (F, L, l). Now we introduce parameterized versions of the 2-ASAT and 2-ASLASAT problems, where the parameter restricts the size of a CS. In particular, the input of the parameterized 2-ASAT problem is (F, k), where F is a 2-CNF formula and k is a non-negative integer. The output is a CS of F of size at most k, if one exists. Otherwise, the output is 'NO'. The input of the parameterized 2-ASLASAT problem is (F, L, l, k) where (F, L, l) is as specified in Definition 3. The output is a CS of (F, L, l) of size at most k, if such a set exists. Otherwise, the output is 'NO'.
Lemma 1. Let F be a 2-cnf formula and let w be a walk of F from a literal l x to a literal l y . Then SW RT (F, {¬l x , ¬l y }) is false.
Proof. Since w is a walk of F , V ar(l x ) ∈ V ar(F ) and V ar(l y ) ∈ V ar(F ). Consequently for any satisfying assignment P of F both V ar(l x ) and V ar(l y ) belong to V ar(P ). Therefore SW RT (F, {¬l x , ¬l y }) may be true only if there is a satisfying assignment of F containing both ¬l x and ¬l y . We are going to show that this is impossible by induction on the length of w. This is clear if |w| = 1 because in this case w = (l x ∨ l y ). Assume that |w| > 1 and the statement is satisfied for all shorter walks. Then w = w ′ + (l t ∨ l y ), where w ′ is the prefix of w, a walk from l x to ¬l t . By the induction assumption SW RT (F, {¬l x , l t }) is false and hence any satisfying assignment of F containing ¬l x contains ¬l t and hence contains l y . As we noted above in the proof, this implies that SW RT (F, {¬l x , ¬l y }) is false.
Lemma 2. Let F be a 2-cnf formula and let L be a set of literals such that SW RT (F, L) is true. Let C = (l 1 ∨ l 2 ) be a clause of F and let w be a walk of F from ¬L containing C and assume that l 1 is a first literal of C w.r.t. w. Then l 1 is not a second literal of C w.r.t. any walk from ¬L.
Proof. Let w ′ be a walk of F from ¬L which contains C so that l 1 is a second literal of C w.r.t. w ′ . Then w ′ has a prefix w ′′ whose last literal is l 1 . Let l ′ be the first literal of w ′ (and hence of w ′′ ). According to Lemma 1, SW RT (F, {¬l 1 , ¬l ′ }) is false. Therefore if l 1 ∈ ¬L then SW RT (F, L) is false (because {¬l 1 , ¬l ′ } ⊆ L) in contradiction to the conditions of the lemma. Thus l 1 / ∈ ¬L and hence l 1 is not the first literal of w. Consequently, w has a prefix w * whose last literal is ¬l 1 . Let l * be the first literal of w (and hence of w * ). Then w * +reverse(w ′′ ) is a walk from l * to l ′ , both belong to ¬L. According to Lemma 1, SW RT (F, {¬l * , ¬l ′ }) is false and hence SW RT (F, L) is false in contradiction to the conditions of the lemma. It follows that the walk w ′ does not exist and the present lemma is correct.
Lemma 3. Let F be a 2-cnf formula, let L be a set of literals such that SW RT (F, L) is true, and let w be a walk from ¬L. Then F has a path p with the same first and last literals as w and the set of clauses of p is a subset of the set of clauses of w.
Proof. The proof is by induction on the length of w. The statement is clear if |w| = 1 because w itself is the desired path. Assume that |w| > 1 and the lemma holds for all shorter walks from ¬L. If all clauses of w are distinct then w is the desired path. Otherwise, let w = (C 1 , . . . , C q ) and assume that C i = C j where 1 ≤ i < j ≤ q. By Lemma 2, C i and C j have the same first (and, of course, the same second) literal. If i = 1, let w ′ be the suffix of w starting at C j . Otherwise, if j = q, let w ′ be the prefix of w ending at C i . If none of the above happens then w ′ = (C 1 , . . . , C i , C j+1 , . . . , C q ). In all the cases, w ′ is a walk of F with the same first and last literals as w such that |w ′ | < |w| and the set of clauses of w ′ is a subset of the set of clauses of w. The desired path is extracted from w ′ by the induction assumption.
Theorem 1. Let (F, L, l) be an instance of the 2-aslasat problem. Then SW RT (F, L ∪ {l}) is false if and only if F has a walk from ¬L to ¬l or a walk from ¬l to ¬l.
Proof. Assume that F has a walk from ¬l to ¬l or from ¬l ′ to ¬l such that l ′ ∈ L. Then, according to Lemma 1, SW RT (F, l) is false or SW RT (F, {l ′ , l}) is false, respectively. Clearly in both cases SW RT (F, L ∪ {l}) is false as L ∪ {l} is, by definition, a superset of both {l} and {l ′ , l}.
Assume now that SW RT (F, L∪{l}) is false. Let I be a set of literals including l and all literals l ′ such that F has a walk from ¬l to l ′ . Let S be the set of all clauses of F satisfied by I.
Assume that I is non-contradictory and does not intersect with ¬L. Let P be a satisfying assignment of F which does not intersect with ¬L (such an assignment exists according to the definition of the 2-aslasat problem). Let P ′ be the subset of P such that V ar(P ′ ) = V ar(F ) \ V ar(I). Observe that P ′ ∪ I is non-contradictory. Indeed, P ′ is non-contradictory as being a subset of a satisfying assignment P of F , I is non-contradictory by assumption, and due to the disjointness of V ar(I) and V ar(P ′ ), there is no literal l ′ ∈ I with ¬l ′ ∈ P ′ . Next, note that every clause C of F is satisfied by P ′ ∪ I. Indeed, if C ∈ S then C is satisfied by I, by the definition of S. Otherwise, assume first that V ar(C) ∩ V ar(I) ≠ ∅. Then C = (¬l ′ ∨ l ′′ ), where l ′ ∈ I. Then either l ′ = l or F has a walk w from ¬l to l ′ . Consequently, either (¬l ′ ∨ l ′′ ) or w + (¬l ′ ∨ l ′′ ) is a walk from ¬l to l ′′ witnessing that l ′′ ∈ I and hence C ∈ S, a contradiction. It remains to conclude that V ar(C) ∩ V ar(I) = ∅, i.e. that V ar(C) ⊆ V ar(P ′ ). If P ′ contained the negations of both literals of C then, since P satisfies C, P would contain a literal and its negation, in contradiction to the definition of P . Consequently, C is satisfied by P ′ . Taking into account that V ar(P ′ ∪ I) = V ar(F ), P ′ ∪ I is a satisfying assignment of F . Observe that P ′ ∪ I does not intersect with ¬(L ∪ {l}). Indeed, both I and P ′ do not intersect with ¬L, the former by assumption, the latter by definition. Next, l ∈ I and P ′ ∪ I is non-contradictory, hence ¬l ∉ P ′ ∪ I. Thus P ′ ∪ I witnesses that SW RT (F, L ∪ {l}) is true in contradiction to our assumption. Thus our assumption regarding I made in the beginning of the present paragraph is incorrect.
It follows from the previous paragraph that either I contains a literal and its negation or I intersects with ¬L. In the former case, if ¬l ∈ I then by definition of I there is a walk from ¬l to ¬l. Otherwise I contains l ′ and ¬l ′ such that V ar(l ′ ) ≠ V ar(l). Let w 1 be the walk from ¬l to l ′ and let w 2 be the walk from ¬l to ¬l ′ (both walks exist according to the definition of I). Clearly w 1 + reverse(w 2 ) is a walk from ¬l to ¬l. In the latter case, F has a walk w from ¬l to ¬l ′ such that l ′ ∈ L. Clearly reverse(w) is a walk from ¬L to ¬l. Thus we have shown that if SW RT (F, L ∪ {l}) is false then F has a walk from ¬l to ¬l or a walk from ¬L to ¬l, which completes the proof of the theorem.
Smallest Separators
Definition 4. A set SC of clauses of a 2-CNF formula F is a separator with respect to a set of literals L and literal l y if F \ SC does not have a path from L to l y .
We denote by SepSize(F, L, l y ) the size of a smallest separator of F w.r.t. L and l y and by OptSep(F, L, l y ) the set of all smallest separators of F w.r.t. L and l y . Thus for any S ∈ OptSep(F, L, l y ), |S| = SepSize(F, L, l y ).
Given the above definition, we derive an easy corollary from Lemma 1.
Corollary 1. Let (F, L, l) be an instance of the 2-ASLASAT problem. Then the size of an SCS of this instance is greater than or equal to SepSize(F, ¬L, ¬l).
Proof. Assume by contradiction that S is a CS of (F, L, l) such that |S| < SepSize(F, ¬L, ¬l). Then F \ S has at least one path p from a literal ¬l ′ (l ′ ∈ L) to ¬l. According to Lemma 1, F \ S is not satisfiable w.r.t. {l ′ , l} and hence it is not satisfiable with respect to L ∪ {l} which is a superset of {l ′ , l}. That is, S is not a CS of (F, L, l), a contradiction.
Let D = (V, A) be the implication graph on F which is a digraph whose set V (D) of nodes corresponds to the set of literals of the variables of F and (l 1 , l 2 ) is an arc in its set A(D) of arcs if and only if (¬l 1 ∨ l 2 ) ∈ Clauses(F ). We say that arc (l 1 , l 2 ) represents the clause (¬l 1 ∨ l 2 ). Note that each arc represents exactly one clause while a clause including two distinct literals is represented by two different arcs. In particular, if ¬l 1 ≠ l 2 , the other arc which represents (¬l 1 ∨ l 2 ) is (¬l 2 , ¬l 1 ). In the context of D we denote by L and ¬L the set of nodes corresponding to the literals of L and ¬L, respectively. We adopt the definition of a walk and a path of a digraph given in [1]. Taking into account that all the walks of D considered in this paper are non-empty we represent them as sequences of arcs instead of alternating sequences of nodes and arcs. In other words, if w = (x 1 , e 1 , . . . , x q , e q , x q+1 ) is a walk of D, we represent it as (e 1 , . . . , e q ). The arc separator of D w.r.t. a set of literals L and a literal l is a set of arcs such that the graph resulting from their removal has no path from L to l. Similarly to the case with 2-cnf formulas, we denote by ArcSepSize(D, L, l) the size of the smallest arc separator of D w.r.t. L and l.
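As a small illustration (encoding and function name ours; literals are ints with -x denoting ¬x), the graph D and the arc-to-clause "represents" relation can be built directly from this definition:

    def implication_graph(clauses):
        # Arc (l1, l2) for every clause (~l1 v l2); each arc records the clause it
        # represents.  A clause with two distinct literals is represented by two arcs.
        arcs = {}
        for (a, b) in clauses:
            arcs[(-a, b)] = (a, b)
            if a != b:                   # the same clause read "the other way around"
                arcs[(-b, a)] = (a, b)
        return arcs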
Theorem 2. Let F be a 2-cnf formula, let L be a set of literals such that SW RT (F, ¬L) is true. Let l y be a literal such that V ar(l y ) / ∈ V ar(L). Then the following statements hold.
1. The largest number M axP aths(F, L, l y ) of clause-disjoint paths from L to l y in F equals the largest number M axP aths(D, ¬L, l y ) of arc-disjoint paths from ¬L to l y in D.
2. SepSize(F, L, l y ) = ArcSepSize(D, ¬L, l y ) = M axP aths(F, L, l y ).
Note that generally (if there is no requirement that SW RT (F, ¬L) is true) SepSize(F, L, l y ) may differ from ArcSepSize(D, ¬L, l y ). The reason is that a separator of D may correspond to a smaller separator of F due to the fact that some arcs may represent the same clause. As we will see in the proof, the requirement that SW RT (F, ¬L) is true rules out this possibility.
Proof of Theorem 2. We may safely assume that V ar(L) ⊆ V ar(F ) because literals whose variables do not belong to V ar(F ) cannot be starting points of paths in F . Also since l y / ∈ ¬L any walk from ¬L to l y in D is non-empty. We use this fact implicitly in the proof without referring to it.
Let w = (C 1 , . . . , C q ) be a walk from l ′ to l ′′ in F . Let w(D) = (a 1 , . . . , a q ) be the sequence of arcs of D constructed as follows. For each C i = (l 1 ∨ l 2 ) (we assume that l 1 is the first literal of C i ), a i = (¬l 1 , l 2 ). Then ¬l ′ is the tail of a 1 and l ′′ is the head of a q . Also, by definition of w, for any two arcs a i and a i+1 , the head of a i is the same as the tail of a i+1 . It follows that w(D) is a walk from ¬l ′ to l ′′ in D such that each a i represents C i . Now, let P = {p 1 , . . . , p t } be a set of clause-disjoint paths from L to l y in F . Then {p 1 (D), . . . , p q (D)} is a set of walks from ¬L to l y in D which are arc-disjoint. Indeed, if an arc a belongs to both p i (D) and p j (D) (where i = j) then, due to the disjointness of p i and p j , this arc a represents two different clauses which is impossible by definition. Conversely, let p = (a 1 , . . . , a q ) be a path from ¬l ′ to l ′′ in D. Let p(F ) be the sequence (C 1 , . . . , C q ) of clauses defined as follows. For each a i = (¬l 1 , l 2 ), C i = (l 1 ∨ l 2 ), l 1 and l 2 are specified as the first and the second literals of C i , respectively. Then l ′ is the first literal of C 1 , l ′′ is the last literal of C q and for each consecutive pair C i and C i+1 the second literal of C i is the negation of the first literal of C i+1 . In other words, p(F ) is a walk from l ′ to l ′′ in F where each C i is represented by a i . Now, let P = {p 1 , . . . , p t } be a set of arc-disjoint paths from ¬L to l y in D. Then {p 1 (F ), . . . p t (F )} is a set of walks from L to l y in F . Observe that these walks are clause-disjoint. Indeed, if a clause C = (l 1 ∨ l 2 ) belongs to both p i (F ) and p j (F ) (where i = j) then (l 1 ∨ l 2 ) is represented by arc, say, (¬l 1 , l 2 ) in p i and by arc (¬l 2 , l 1 ) in p j . By construction of p i (F ) and p j (F ), l 1 is the first literal of C w.r.t. p i (F ) and the second literal of C w.r.t. p j (F ) which contradicts Lemma 2. That is the walks of {p 1 (F ), . . . , p t (F )} are clause-disjoint. Also, by Lemma 3, for each p i (F ), there is a path p ′ i (F ) of F with the same first and last literals as p i (F ) and whose set of clauses is a subset of the set of clauses of p i (F ). Clearly the paths {p ′ 1 (F ), . . . , p ′ t (F )} are clause disjoint. Thus M axP aths(D, ¬L, l y ) ≤ M axP aths(F, L, l y ). Combining this statement with the statement proven in the previous paragraph, we conclude that M axP aths(D, ¬L, l y ) = M axP aths(F, L, l y ).
Let S ∈ OptSep(F, L, l y ). For each C ∈ S, let p C be a path of F from L to l y including C (such a path necessarily exists due to the minimality of S). Let a(C) be an arc of p C (D) which represents C. Let S(D) be the set of all a(C). We are going to show that S(D) separates ¬L from l y in D. Assume that this is not so and let p * be a path from ¬L to l y in D \ S(D). Then, according to Lemma 3, p * (F ) necessarily includes a path from L to l y and hence p * (F ) contains at least one clause C = (l 1 ∨ l 2 ) of S. Let a * be an arc of p * which represents C. By definition of p * , a * ≠ a(C) and hence a(C) is, say, (¬l 1 , l 2 ) and a * is (¬l 2 , l 1 ). By definition of p C (D) and p * (F ), l 1 is the first literal of C w.r.t. p C and the second one w.r.t. p * (F ), which contradicts Lemma 2. This shows that S(D) separates ¬L from l y in D and, consequently, taking into account that |S(D)| = |S|, ArcSepSize(D, ¬L, l y ) ≤ SepSize(F, L, l y ).
Let S be a smallest arc separator of D w.r.t. ¬L and l y . For each a ∈ S, let p a be a path of D from ¬L to l y which includes a. Let C(a) be a clause of p a (F ) which is represented by a. Denote the set of all C(a) by S(F ). Then we can show that S(F ) is a separator w.r.t. L and l y in F . In particular, let p * be a path from L to l y in F \ S(F ). Then p * (D) necessarily includes an arc a ∈ S. Let C * be a clause of p * represented by a. Since C * ≠ C(a), the arc a represents two different clauses in contradiction to the definition of D. Consequently, taking into account that |S(F )| ≤ |S|, ArcSepSize(D, ¬L, l y ) ≥ SepSize(F, L, l y ). Considering the previous paragraph we conclude that ArcSepSize(D, ¬L, l y ) = SepSize(F, L, l y ).
Let PF be a largest set of clause-disjoint paths from L to l y in F and let PD be a largest set of arc-disjoint paths from ¬L to l y in D. It follows from the above proof that in order to show that |PF| = SepSize(F, L, l y ), it is sufficient to show that |PD| = ArcSepSize(D, ¬L, l y ). Taking into account that by our assumption l y / ∈ ¬L, the latter can be easily derived by contracting the vertices of ¬L into one vertex and applying the arc version of Menger's Theorem for directed graphs [1].
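As an illustration of how this quantity can be computed (a sketch only, valid under the theorem's assumption; the paper describes a Ford-Fulkerson-like procedure, whereas networkx, the super-source trick and all names below are ours):

    import networkx as nx

    def sep_size(clauses, neg_L, neg_l):
        # SepSize(F, ¬L, ¬l) computed as a unit-capacity max-flow on the implication
        # graph D with the nodes of ¬L attached to an artificial super-source (Theorem 2).
        G = nx.DiGraph()
        for (u, v) in implication_graph(clauses):
            G.add_edge(u, v, capacity=1)
        if neg_l not in G:
            return 0
        src = ('SRC',)                      # artificial super-source node
        for lit in neg_L:
            if lit in G:
                G.add_edge(src, lit)        # no capacity attribute = unbounded capacity
        if src not in G:
            return 0
        value, _ = nx.maximum_flow(G, src, neg_l)
        return int(value)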
Neutral Literals
Definition 5. Let (F, L, l) be an instance of the 2-aslasat problem. A literal l * is a neutral literal of (F, L, l) if (F, L ∪ {l * }, l) is a valid instance of 2-aslasat problem and SepSize(F, ¬L, ¬l) = SepSize(F, ¬(L ∪ {l * }), ¬l).
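Definition 5 translates directly into a check (again only a sketch with our integer encoding of literals; swrt and sep_size are the helpers sketched earlier, and the validity test is only approximated):

    def is_neutral(clauses, L, l, cand):
        # cand is neutral for (F, L, l) if (F, L + {cand}, l) is still a valid instance
        # and adding cand to L does not change SepSize(F, ¬L, ¬l) (Definition 5).
        if -cand in L or abs(cand) == abs(l) or not swrt(clauses, L | {cand}):
            return False
        neg = lambda S: {-x for x in S}
        return sep_size(clauses, neg(L), -l) == sep_size(clauses, neg(L | {cand}), -l)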
The following theorem has a crucial role in the design of the algorithm provided in the next section.
Theorem 3. Let (F, L, l) be an instance of the 2-ASLASAT problem and let l * be a neutral literal of (F, L, l). Then there is a CS of (F, L ∪ {l * }, l) of size smaller than or equal to the size of an SCS of (F, L, l).
Before we prove Theorem 3, we extend our terminology.
Definition 6. Let (F, L, l) be an instance of the 2-ASLASAT problem. A clause C = (l 1 ∨ l 2 ) of F is reachable from ¬L if there is a walk w from ¬L including C. Assume that l 1 is a first literal of C w.r.t. w. Then l 1 is called the main literal of C w.r.t. (F, L, l).
Given Definition 6, Lemma 2 immediately implies the following corollary.
Corollary 2. Let (F, L, l) be an instance of the 2-aslasat problem and let C be a clause of F reachable from ¬L with main literal l 1 . Then for any walk w from ¬L containing C, l 1 is a first literal of C w.r.t. w and is not a second literal of C w.r.t. w. In particular, the main literal of C is uniquely defined.
Now we are ready to prove Theorem 3.
Proof of Theorem 3. Let SP ∈ OptSep(F, ¬(L ∪ {l * }), ¬l). Since ¬L is a subset of ¬(L ∪ {l * }), SP is a separator w.r.t. ¬L and ¬l in F . Moreover, since l * is a neutral literal of (F, L, l), SP ∈ OptSep(F, ¬L, ¬l).
In the 2-cnf F \ SP , let R be the set of clauses reachable from ¬L and let N R be the rest of the clauses of F \ SP . Observe that the sets R, N R, SP are a partition of the set of clauses of F . Let X be a SCS of (F, L, l). Denote X ∩ R, X ∩ SP , X ∩ N R by XR, XSP , XN R respectively. Observe that the sets XR, XSP, XN R are a partition of X.
Let Y be the subset of SP \ XSP including all clauses C = (l 1 ∨ l 2 ) (we assume that l 1 is the main literal of C) such that there is a walk w from l 1 to ¬l with C being the first clause of w and all clauses of w following C (if any) belong to N R \ XN R. We call this walk w a witness walk of C. By definition, SP \ XSP = SP \ X and N R \ XN R = N R \ X, hence the clauses of w do not intersect with X.
Claim 1 |Y | ≤ |XR|.
Proof. By definition of the 2-aslasat problem, SW RT (F, L) is true. Therefore, according to Theorem 2, there is a set P of |SP | clause-disjoint paths from ¬L to ¬l. Clearly each C ∈ SP participates in exactly one path of P and each p ∈ P includes exactly one clause of SP . In other words, we can make one-to-one correspondence between paths of P and the clauses of SP they include. Let PY be the subset of P consisting of the paths corresponding to the clauses of Y . We are going to show that for each p ∈ PY the clause of SP corresponding to p is preceded in p by a clause of XR.
Assume by contradiction that this is not true for some p ∈ PY and let C = (l 1 ∨l 2 ) be the clause of SP corresponding to p with l 1 being the main literal of C w.r.t. (F, L, l). By our assumption, C is the only clause of SP participating in p, hence all the clauses of p preceding C belong to R. Consequently, the only possibility of those preceding clauses to intersect with X is intersection with XR. Since this possibility is ruled out according to our assumption, we conclude that no clause of p preceding C belongs to X. Next, according to Corollary 2, l 1 is the first literal of C w.r.t p, hence the suffix of p starting at C can be replaced by the witness walk of C and as a result of this replacement, a walk w ′ from ¬L to ¬l is obtained. Taking into account that the witness walk of C does not intersect with X, we get that w ′ does not intersect with X. By Theorem 1, SW RT (F \ X, L ∪ {l}) is false in contradiction to being X a CS of (F, L, l). This contradiction shows that our initial assumption fails and C is preceded in p by a clause of XR.
In other words, each path of PY intersects with a clause of XR. Since the paths of PY are clause-disjoint, |XR| ≥ |PY| = |Y |, as required.
Consider the set X * = Y ∪ XSP ∪ XN R. Observe that |X * | = |Y | + |XSP | + |XN R| ≤ |XR| + |XSP | + |XN R| = |X|, where the first equality follows from the mutual disjointness of Y , XSP and XN R by their definition, the inequality follows from Claim 1, and the last equality was justified in the paragraph where the sets XR, XSP , XN R, and X have been defined. We are going to show that X * is a CS of (F, L ∪ {l * }, l) which will complete the proof of the present theorem.
Claim 2 F \ X * has no walk from ¬(L ∪ {l * }) to ¬l.
Proof. Assume by contradiction that w is a walk from ¬(L ∪ {l * }) to ¬l in F \ X * . Taking into account that SW RT (F \ X * , L ∪ {l * }) is true (because we know that SW RT (F, L ∪ {l * }) is true), and applying Lemma 3, we get that F \ X * has a path p from ¬(L ∪ {l * }) to ¬l. As p is a path in F , it includes at least one clause of SP (recall that SP is a separator w.r.t. ¬(L ∪ {l * }) and ¬l in F ). Let C = (l 1 ∨ l 2 ) be the last clause of SP as we traverse p from ¬(L ∪ {l * }) to ¬l and assume w.l.o.g. that l 1 is the main literal of C w.r.t. (F \ X * , L ∪ {l * }, l) (and hence of (F, L ∪ {l * }, l)). Let p * be the suffix of p starting at C.
According to Corollary 2, l 1 is the first literal of p * . In the next paragraph we will show that no clause of R follows C in p * . Combining this statement with the observation that the clauses of F \ X * can be partitioned into R, SP \ XSP and N R \ XN R (the rest of the clauses belong to X * ) we conclude that p * is a walk witnessing that C ∈ Y . But this is a contradiction because by definition Y ⊆ X * . This contradiction will complete the proof of the present claim.
Assume by contradiction that C is followed in p * by a clause
C ′ = (l ′ 1 ∨ l ′ 2 ) of R (we assume w.l.o.g. that l ′ 1 is the main literal of C ′ w.r.t. (F \ X * , L ∪ {l * }, l)
). Let p ′ be a suffix of p * starting at C ′ . It follows from Corollary 2 that the first literal of p ′ is l ′ 1 . By definition of R and taking into account that R ∩ X * = ∅, F \ X * has a walk w 1 from ¬L whose last clause is C ′ and all clauses of which belong to R. By Corollary 2, the last literal of w 1 is l ′ 2 . Therefore we can replace C ′ by w 1 in p ′ . As a result we get a walk w 2 from ¬L to ¬l in F \ X * . By Lemma 3, there is a path p 2 from ¬L to ¬l whose set of clauses is a subset of the set of clauses of w 2 . As p 2 is also a path of F , it includes a clause of SP . However, w 1 does not include any clause of SP by definition. Therefore, p ′ includes a clause of SP . Consequently, p * includes a clause of SP following C in contradiction to the selection of C. This contradiction shows that clause C ′ does not exist, which completes the proof of the present claim as noted in the previous paragraph.
Claim 3 F \ X * has no walk from ¬l to ¬l.
Proof. Assume by contradiction that F \ X * has a walk w from ¬l to ¬l. By definition of X and Theorem 1, w contains at least one clause of X. Since XSP and XN R are subsets of X * , w contains a clause C ′ = (l ′ 1 ∨ l ′ 2 ) of XR. Assume w.l.o.g. that l ′ 1 is the main literal of C ′ w.r.t. (F, L, l). If l ′ 1 is a first literal of C ′ w.r.t. w then let w * be a suffix of w whose first clause is C ′ and first literal is l ′ 1 . Otherwise, let w * be a suffix of reverse(w) having the same properties. In any case, w * is a walk from l ′ 1 to ¬l in F \ X * whose first clause is C ′ . Arguing as in the last paragraph of proof of Claim 2, we see that F \ X * has a walk w 1 from ¬L to l ′ 2 whose last clause is C ′ . Therefore we can replace C ′ by w 1 in w * and get a walk w 2 from ¬L to ¬l in F \ X * in contradiction to Claim 2. This contradiction shows that our initial assumption regarding the existence of w is incorrect and hence completes the proof of the present claim.
It follows from the combination of Theorem 1, Claim 2, and Claim 3 that X * is a CS of (F, L ∪ {l * }, l), which completes the proof of the present theorem.
Additional Terminology and Auxiliary Lemmas
In order to analyze the above algorithm, we extend our terminology. Let us call a quadruple (F, L, l, k) a valid input if (F, L, l, k) is a valid instance of the parameterized 2-ASLASAT problem (as specified in Section 2.3.). Now we introduce the notion of the search tree ST (F, L, l, k) produced by FindCS (F, L, l, k). The root of the tree is identified with (F, L, l, k). If FindCS(F, L, l, k) does not apply itself recursively then (F, L, l, k) is the only node of the tree. Otherwise the children of (F, L, l, k) correspond to the inputs of the calls applied within the call FindCS(F, L, l, k). For example, if FindCS(F, L, l, k) performs
Step 9 then the children of (F, L, l, k) are (F, L ∪ {l 2 }, l, k) and (F \ C, L, l, k − 1). For each child (F ′ , L ′ , l ′ , k ′ ) of (F, L, l, k), the subtree of ST (F, L, l, k) rooted by (F ′ , L ′ , l ′ , k ′ ) is ST (F ′ , L ′ , l ′ , k ′ ). It is clear from the description of FindCS that the third item of a valid input is not changed for its children, hence in the rest of the section when we denote a child or descendant of (F, L, l, k) we will leave the third item unchanged, e.g. (F 1 , L 1 , l, k 1 ). Proof. Assume that F has a walk from ¬L to ¬l and let w be the shortest possible such walk. Let l 1 be the first literal of w and let C = (l 1 ∨ l 2 ) be the first clause of w. By definition l 1 ∈ ¬L. We claim that V ar(l 2 ) ∉ V ar(L). Indeed, assume that this is not true. If l 2 ∈ ¬L then SW RT (F, {¬l 1 , ¬l 2 }) is false and hence SW RT (F, L) is false as L is a superset of {¬l 1 , ¬l 2 }. But this contradicts the definition of the 2-aslasat problem. Assume now that l 2 ∈ L. By definition of the 2-aslasat problem, V ar(l) ∉ V ar(L), hence C is not the last clause of w. Consequently the first literal of the second clause of w belongs to ¬L. Thus if we remove the first clause from w we obtain a shorter walk from ¬L to ¬l in contradiction to the definition of w. It follows that our claim is true and the required clause C can be selected if the condition of Step 5 is satisfied.
Consider now the case where the condition of Step 5 is not satisfied. Note that SW RT (F, L ∪ {l}) is false because otherwise the algorithm would have finished at Step 1. Consequently by Theorem 1, F has a walk from ¬l to ¬l. We claim that any such walk w contains a clause C = (l 1 ∨l 2 ) such that SW RT (F, {l 1 , l 2 }) is true. Let P be a satisfying assignment of F (which exists by definition of the 2-aslasat problem). Let F ′ be the 2-cnf formula created by the clauses of w and let P ′ be the subset of P such that V ar(P ′ ) = V ar(F ′ ). By Lemma 1, SW RT (F ′ , l) is false and hence, taking into account that V ar(l) ∈ V ar(F ′ ), ¬l ∈ P ′ . Consequently l ∈ ¬P ′ . Therefore ¬P ′ is not a satisfying assignment of F ′ i.e. ¬P ′ does not satisfy at least one clause of F ′ . Taking into account that V ar(¬P ′ ) = V ar(F ′ ), it contains negations of both literals of at least one clause C of F ′ . Therefore P ′ (and hence P ) contains both literals of C. Clearly, C is the required clause.
The soundness of Steps 5 and 6 of FindCS is assumed in the rest of the paper without explicitly referring to Lemma 4. Lemma 5. Let (F, L, l, k) be a valid input and assume that FindCS(F, L, l, k) applies itself recursively. Then all the children of (F, L, l, k) in the search tree are valid inputs.
Proof. Let (F 1 , L 1 , l, k 1 ) be a child of (F, L, l, k) . Observe that k 1 ≥ k − 1. Observe also that k > 0 because FindCS(F, L, l, k) would not apply itself recursively if k = 0. It follows that k 1 ≥ 0.
It remains to prove that (F 1 , L 1 , l) is a valid instance of the 2-aslasat problem. If k 1 = k − 1 then (F 1 , L 1 , l) = (F \ C, L, l) where C is the clause selected on Steps 5 and 6. In this case the validity of instance (F \ C, L, l) immediately follows from the validity of (F, L, l). Consider the remaining case where (F 1 , L 1 , l, k 1 ) = (F, L ∪ {l * }, l, k) where l * is a literal of the clause C = (l 1 ∨ l 2 ) selected on Steps 5 and 6. In particular, we are going to show that
- L ∪ {l * } is non-contradictory;
- V ar(l) ∉ V ar(L ∪ {l * });
- SW RT (F, L ∪ {l * }) is true.
That L ∪ {l * } is non-contradictory follows from the description of the algorithm because it is explicitly stated that the literal being joined to L does not belong to ¬(L ∪ {l}). This also implies that the second condition may be violated only if l * = l. In this case assume that C is selected on Step 5. Then w.l.o.g. l 1 ∈ ¬L and l 2 = l. Let P be a satisfying assignment of F which does not intersect with ¬L (which exists since SW RT (F, L) is true). Then l 2 ∈ P , i.e. SW RT (F, L ∪ {l}) is true, which is impossible since in this case the algorithm would stop at Step 1. The assumption that C is selected on Step 6 also leads to a contradiction because on the one hand SW RT (F, l) is false by Lemma 1 due to the existence of a walk from ¬l to ¬l, on the other hand SW RT (F, l) is true by the selection criterion. It follows that V ar(l) ∉ V ar(L ∪ {l * }). Let us prove the last item. Assume first that C is selected on Step 5 and assume w.l.o.g. that l 1 ∈ ¬L. Then, by the first statement, l * = l 2 . Moreover, as noted in the previous paragraph, l 2 ∈ P where P is a satisfying assignment of F which does not intersect with ¬L, i.e. SW RT (F, L ∪ {l 2 }) is true in the considered case. Assume that C is selected on Step 6 and let w be the walk from ¬l to ¬l in F to which C belongs. Observe that F has a walk w ′ from l * to ¬l: if l * is a first literal of C w.r.t. w then let w ′ be a suffix of w whose first literal is l * , otherwise let w ′ be the suffix of reverse(w) whose first literal is l * . Assume that SW RT (F, L ∪ {l * }) is false. Since L ∪ {l * } is non-contradictory by the first item, V ar(l * ) ∉ V ar(L). It follows that (F, L, l * ) is a valid instance of the 2-aslasat problem. In this case, by Theorem 1, F has either a walk from ¬L to ¬l * or a walk from ¬l * to ¬l * . The latter is ruled out by Lemma 1 because SW RT (F, l * ) is true by selection of C. Let w ′′ be a walk from ¬L to ¬l * in F . Then w ′′ + w ′ is a walk of F from ¬L to ¬l in contradiction to our assumption that C is selected on Step 6. Thus SW RT (F, L ∪ {l * }) is true. The proof of the present lemma is now complete.
Now we introduce two measures of the input of the FindCS procedure. Let α(F, L, l, k) = |V ar(F ) \ V ar(L)| + k and β(F, L, l, k) = max(0, 2k − SepSize(F, ¬L, ¬l)).
Lemma 6. Let (F, L, l, k) be a valid input and let (F 1 , L 1 , l, k 1 ) be a child of (F, L, l, k). Then α(F, L, l, k) > α(F 1 , L 1 , l, k 1 ).
Proof. If k 1 = k − 1 then the statement is clear because the first item in the definition of the α-measure does not increase and the second decreases. So, assume that (F 1 , L 1 , l, k 1 ) = (F, L ∪ {l * }, l, k). In this case it is sufficient to prove that V ar(l * ) ∉ V ar(L). Due to the validity of (F, L ∪ {l * }, l, k) by Lemma 5, l * ∉ ¬L, so it remains to prove that l * ∉ L. Assume that l * ∈ L. Then the clause C is selected on Step 6. Indeed, if C is selected on Step 5 then one of its literals belongs to ¬L and hence cannot belong to L, due to the validity of (F, L, l, k) (and hence L being non-contradictory), while the variable of the other literal does not belong to V ar(L) at all. Let w be the walk from ¬l to ¬l in F to which C belongs. Due to the validity of (F, L ∪ {l * }, l, k) by Lemma 5, l * ≠ ¬l. Therefore either w or reverse(w) has a suffix which is a walk from ¬l * to ¬l, i.e. a walk from ¬L to ¬l. But this contradicts the selection of C on Step 6. So, l * ∉ L and the proof of the lemma is complete.
For the next lemma we extend our terminology. We call a node (F ′ , L ′ , l, k ′ ) of ST (F, L, l, k) a trivial node if it is a leaf or its only child is of the form (F ′ , L ′ ∪ {l * }, l, k ′ ) for some literal l * . Lemma 7. Let (F, L, l, k) be a valid input and let (F 1 , L 1 , l, k 1 ) be a child of (F, L, l, k). Then β(F, L, l, k) ≥ β(F 1 , L 1 , l, k 1 ). Moreover if (F, L, l, k) is a nontrivial node then β(F, L, l, k) > β(F 1 , L 1 , l, k 1 ).
Proof. Note that β(F, L, l, k) > 0 because if β(F, L, l, k) = 0 then FindCS(F, L, l, k) does not apply itself recursively, i.e. does not have children. It follows that β(F, L, l, k) = 2k−SepSize(F, ¬L, ¬l) > 0. Consequently, to show that β(F, L, l, k) > β(F 1 , L 1 , l, k 1 ) or that β(F, L, l, k) ≥ β(F 1 , L 1 , l, k 1 ) it is sufficient to show that 2k−SepSize(F, ¬L, ¬l) > 2k 1 −SepSize(F 1 , ¬L 1 , ¬l) or 2k−SepSize(F, ¬L, ¬l) ≥ 2k 1 − SepSize(F 1 , ¬L 1 , ¬l), respectively.
Assume first that (F 1 , L 1 , l, k 1 ) = (F \C, L, l, k−1). Observe that SepSize(F \ C, ¬L, ¬l) ≥ SepSize(F, ¬L, ¬l) − 1. Indeed assume the opposite and let S be a separator w.r.t. to ¬L and ¬l in F \C whose size is at most SepSize(F, ¬L, ¬l)− 2. Then S∪{C} is a separator w.r.t. ¬L and ¬l in F of size at most SepSize(F, ¬L, ¬l)− 1 in contradiction to the definition of SepSize(F, ¬L, ¬l).
Thus 2(k − 1) − SepSize(F \C, ¬L, ¬l) = 2k−SepSize(F \C, ¬L, ¬l)−2 ≤ 2k−SepSize(F, ¬L, ¬l)− 1 < 2k − SepSize(F, ¬L, ¬l).
Assume now that (F 1 , L 1 , l, k 1 ) = (F, L∪{l * }, l, k) for some literal l * . Clearly, SepSize(F, ¬L, ¬l) ≤ SepSize(F, ¬(L ∪ {l * }), ¬l) due to being ¬L a subset of ¬(L ∪ {l * }). It follows that 2k − SepSize(F, ¬L, ¬l) ≥ 2k − SepSize(F, ¬(L ∪ {l * }), ¬l). It remains to show that ≥ can be replaced by > in case where (F, L, l, k) is a non-trivial node. It is sufficient to show that in this case SepSize(F, ¬L, ¬l) < SepSize(F, ¬(L ∪ {l * }), ¬l). If (F, L, l, k) is a non-trivial node then the recursive call FindCS(F, L ∪ {l * }, l, k) is applied on Steps 8.2, 8.4, or 9.3. In the last case, it is explicitly said that l * is not a neutral literal in (F, L, l). Consequently, SepSize(F, ¬L, ¬l) < SepSize(F, ¬(L ∪ {l * }), ¬l) by definition.
For the first two cases note that Step 8 is applied only if the clause C is selected on Step 6. That is, F has no walk from ¬L to ¬l. In particular, F has no path from ¬L to ¬l, i.e. SepSize(F, ¬L, ¬l) = 0. Let w be the walk from ¬l to ¬l in F to which C belongs. Note that by Lemma 5, (F, L ∪ {l * }, l, k) is a valid input, in particular V ar(l * ) ≠ V ar(l). Therefore either w or reverse(w) has a suffix which is a walk from ¬l * to ¬l, i.e. a walk from ¬(L ∪ {l * }) to ¬l. Applying Lemma 3 together with Lemma 5, we see that F has a path from ¬(L ∪ {l * }) to ¬l, i.e. SepSize(F, ¬(L ∪ {l * }), ¬l) > 0.
Lemma 8. Let (F, L, l, k) be a valid input. Then the following statements hold.
- The height of ST (F, L, l, k) is at most α(F, L, l, k). 6
- Each node (F ′ , L ′ , l, k ′ ) of ST (F, L, l, k) is a valid input, the subtree rooted by (F ′ , L ′ , l, k ′ ) is ST (F ′ , L ′ , l, k ′ ) and α(F ′ , L ′ , l, k ′ ) < α(F, L, l, k).
- For each node (F ′ , L ′ , l, k ′ ) of ST (F, L, l, k), β(F ′ , L ′ , l, k ′ ) ≤ β(F, L, l, k) − t, where t is the number of non-trivial nodes besides (F ′ , L ′ , l, k ′ ) in the path from (F, L, l, k) to (F ′ , L ′ , l, k ′ ) of ST (F, L, l, k).
Proof. This lemma is clearly true if (F, L, l, k) has no children. Consequently, it is true if α(F, L, l, k) = 0. Now, apply induction on the size of α(F, L, l, k) and assume that α(F, L, l, k) > 0. By the induction assumption, Lemma 5, and Lemma 6, the present lemma is true for any child of (F, L, l, k). Consequently, for any child (F * , L * , l, k * ) of (F, L, l, k), the height of ST (F * , L * , l, k * ) is at most α(F * , L * , l, k * ). Hence the first statement follows by Lemma 6. Furthermore, any node (F ′ , L ′ , l, k ′ ) of ST (F, L, l, k) belongs to ST (F * , L * , l, k * ) of some child (F * , L * , l, k * ) of (F, L, l, k) and the subtree rooted by (F ′ , L ′ , l, k ′ ) in ST (F, L, l, k) is the subtree rooted by (F ′ , L ′ , l, k ′ ) in ST (F * , L * , l, k * ). Consequently, (F ′ , L ′ , l, k ′ ) is a valid input, the subtree rooted by it is ST (F ′ , L ′ , l, k ′ ), and α(F ′ , L ′ , l, k ′ ) ≤ α(F * , L * , l, k * ) < α(F, L, l, k), the last inequality follows from Lemma 6. Finally, β(F ′ , L ′ , l, k ′ ) ≤ β(F * , L * , l, k * ) − t * where t * is the number of non-trivial nodes besides (F ′ , L ′ , l, k ′ ) in the path from (F * , L * , l, k * ) to (F ′ , L ′ , l, k ′ ) in ST (F * , L * , l, k * ), and hence in ST (F, L, l, k) 7 . If (F, L, l, k) is a trivial node then t = t * and the last statement of the present lemma is true by Lemma 7. Otherwise t = t * + 1 and by another application of Lemma 7 we get that β(F ′ , L ′ , l, k ′ ) ≤ β(F, L, l, k) − t * − 1 = β(F, L, l, k) − t.
Correctness Proof
Theorem 4. Let (F, L, l, k) be a valid input. Then FindCS(F, L, l, k) correctly solves the parameterized 2-aslasat problem. That is, if FindCS(F, L, l, k) returns a set, this set is a CS of (F, L, l) of size at most k. If FindCS(F, L, l, k) returns 'NO' then (F, L, l) has no CS of size at most k.
6 Besides providing the upper bound on the height of ST (F, L, l, k), this statement claims that ST (F, L, l, k) is finite and hence we may safely refer to a path between two nodes.
7 Note that this inequality applies to the case where (F ′ , L ′ , l, k ′ ) = (F * , L * , l, k * ).
Therefore, according to Theorem 3, the size of an SCS of (F, L, l) is at least k + 1, which contradicts the existence of S. Finally assume that 'NO' is returned on
Step 8.7. or on Step 9.5. Assume first that the clause C selected on Steps 5 and 6 does not belong to S. Let P be a satisfying assignment of (F \ S) which does not intersect with ¬(L ∪ {l}). Then at least one literal l * of C is contained in P . This literal does not belong to ¬(L ∪ {l}) and hence FindCS(F, L ∪ {l * }, l, k) has been applied and returned 'NO'. However, P witnesses that S is a CS of (F, L ∪ {l * }, l) of size at most k, that is, FindCS(F, L ∪ {l * }, l, k) returned an incorrect answer in contradiction to Claim 4. Finally assume that C ∈ S. Then S \ C is a CS of (F \ C, L, l) of size at most k − 1 and hence the answer 'NO' returned by FindCS(F \ C, L, l, k − 1) contradicts Claim 4. Thus the answer 'NO' returned by FindCS(F, L, l, k) is valid.
Claim 5. Let (F ′ , L ′ , l, k ′ ) be a node of ST (F, L, l, k) other than the root and let t ′ = β(F ′ , L ′ , l, k ′ ). Then the number of leaves of the subtree of ST (F, L, l, k) rooted by (F ′ , L ′ , l, k ′ ) is at most √5^{t ′}.
Proof. According to Lemma 8, (F ′ , L ′ , l, k ′ ) is a valid input, α(F ′ , L ′ , l, k ′ ) < α(F, L, l, k), and the subtree of ST (F, L, l, k) rooted by (F ′ , L ′ , l, k ′ ) is ST (F ′ , L ′ , l, k ′ ). Therefore the claim follows by the induction assumption.
If (F, L, l, k) has only one child (F 1 , L 1 , l, k 1 ) then clearly the number of leaves of ST (F, L, l, k) equals the number of leaves of the subtree rooted by (F 1 , L 1 , l, k 1 ) which, by Claim 5, is at most √ 5 t1 , where t 1 = β(F 1 , L 1 , l, k 1 ).
According to Lemma 7, t 1 ≤ t, so the present theorem holds for the considered case. If (F, L, l, k) has 2 children (F 1 , L 1 , l, k 1 ) and (F 2 , L 2 , l, k 2 ) then the number of leaves of ST (F, L, l, k) is the sum of the numbers of leaves of the subtrees rooted by (F 1 , L 1 , l, k 1 ) and (F 2 , L 2 , l, k 2 ) which, by Claim 5, is at most √5^{t_1} + √5^{t_2}, where t i = β(F i , L i , l, k i ) for i = 1, 2. Taking into account that (F, L, l, k) is a non-trivial node and applying Lemma 7, we get that t 1 < t and t 2 < t. Hence the number of leaves of ST (F, L, l, k) is at most (2/√5) * √5^t < √5^t, so the theorem holds for the considered case as well.
For the case where (F, L, l, k) has 3 children, denote them by (F i , L i , l, k i ), i = 1, 2, 3. Assume w.l.o.g. that (F 1 , L 1 , l, k 1 ) = (F, L ∪ {l 1 }, l, k), (F 2 , L 2 , l, k 2 ) = (F, L ∪ {l 2 }, l, k), (F 3 , L 3 , l, k 3 ) = (F \ C, L, l, k − 1), where C = (l 1 ∨ l 2 ) is the clause selected on Steps 5 and 6. Let t i = β(F i , L i , l, k i ) for i = 1, 2, 3.
Claim 6 t ≥ 2 and t 3 ≤ t − 2.
Proof. Note that k > 0 because otherwise FindCS(F, L, l, k) does not apply itself recursively. Observe also that SepSize(F, ¬L, ¬l) = 0 because clause C can be selected only on Step 6, which means that F has no walk from ¬L to ¬l and, in particular, F has no path from ¬L to ¬l. Therefore 2k − Sepsize(F, ¬L, ¬l) = 2k ≥ 2 and hence t = β(F, L, l, k) = 2k ≥ 2. If t 3 = 0 the second statement of the claim is clear. Otherwise
t 3 = 2(k − 1) − SepSize(F \ (l 1 ∨ l 2 ), ¬L, ¬l) = 2(k − 1) − 0 = 2k − 2 = t − 2.
Assume that some ST (F i , L i , l, k i ) for i = 1, 2 has only one leaf. Assume w.l.o.g. that this is ST (F 1 , L 1 , l, k 1 ). Then the number of leaves of ST (F, L, l, k) is the sum of the numbers of leaves of the subtrees rooted by (F 2 , L 2 , l, k 2 ) and (F 3 , L 3 , l, k 3 ) plus one. By Claims 5 and 6, and Lemma 7, this is at most
√5^{t−1} + √5^{t−2} + 1. Then √5^t − √5^{t−1} − √5^{t−2} − 1 ≥ √5^2 − √5^1 − √5^0 − 1 = 5 − √5 − 2 > 0,
the first inequality follows from Claim 6. That is, the present theorem holds for the considered case.
It remains to assume that both ST (F 1 , L 1 , l, k 1 ) and ST (F 2 , L 2 , l, k 2 ) have at least two leaves. Then for i = 1, 2, ST (F i , L i , l, k i ) has a node having at least two children. Let (F F i , LL i , l, kk i ) be such a node of ST (F i , L i , l, k i ) which lies at the smallest distance from (F, L, l, k) in ST (F, L, l, k).
Claim 7 The number of leaves of the subtree rooted by
(F F i , LL i , l, kk i ) is at most (2/5) * √ 5 t .
Proof. Assume that (F F i , LL i , l, kk i ) has 2 children and denote them by (F F * 1 , LL * 1 , l, kk * 1 ) and (F F * 2 , LL * 2 , l, kk * 2 ). Then the number of leaves of the subtree rooted by (F F i , LL i , l, kk i ) equals the sum of numbers of leaves of the subtrees rooted by (F F * 1 , LL * 1 , l, kk * 1 ) and (F F * 2 , LL * 2 , l, kk * 2 ). By Claim 5, this sum does not exceed 2 * √ 5 t * where t * is the maximum of β(F F * j , LL * j , l, kk * j ) for j = 1, 2. Note that the path from (F, L, l, k) to any (F F * j , LL * j , l, kk * j ) includes at least 2 non-trivial nodes besides (F F * j , LL * j , l, kk * j ), namely (F, L, l, k) and (F F i , LL i , l, kk i ). Consequently, t * ≤ t − 2 by Lemma 8 and the present claim follows for the considered case.
Assume that (F F i , LL i , l, kk i ) has 3 children. Then let tt i = β(F F i , LL i , l, kk i ) and note that according to Claim 5, the number of leaves of the subtree rooted by (F F i , LL i , l, kk i ) is at most √5^{tt_i}. Taking into account that (F F i , LL i , l, kk i ) is a valid input by Lemma 8 and arguing analogously to the second sentence of the proof of Claim 6, we see that SepSize(F F i , ¬LL i , ¬l) = 0. On the other hand, using the argumentation in the last paragraph of the proof of Lemma 7, we can see that SepSize(F i , ¬L i , ¬l) > 0. This means that (F i , L i , l, k i ) ≠ (F F i , LL i , l, kk i ).
Moreover, the path from (F i , L i , l, k i ) to (F F i , LL i , l, kk i ) includes a pair of consecutive nodes (F ′ , L ′ , l, k ′ ) and (F ′′ , L ′′ , l, k ′′ ), the former being the parent of the latter, such that SepSize(F ′ , ¬L ′ , ¬l) > SepSize(F ′′ , ¬L ′′ , ¬l). This can only happen if k ′′ = k ′ − 1 (for otherwise (F ′′ , L ′′ , l, k ′′ ) = (F ′ , L ′ ∪ {l ′ }, l, k ′ ) for some literal l ′ and clearly adding a literal to L ′ does not decrease the size of the separator). Consequently, (F ′ , L ′ , l, k ′ ) is a non-trivial node. Therefore, the path from (F, L, l, k) to (F F i , LL i , l, kk i ) includes at least 2 non-trivial nodes besides (F F i , LL i , l, kk i ): (F, L, l, k) and (F ′ , L ′ , l, k ′ ). That is, tt i ≤ t − 2 by Lemma 8 and the present claim follows for this case as well, which completes its proof.
It remains to notice that the number of leaves of ST (F, L, l, k) is the sum of the numbers of leaves of the subtrees rooted by (F F 1 , LL 1 , l, kk 1 ), (F F 2 , LL 2 , l, kk 2 ), and (F 3 , L 3 , l, k 3 ) which, according to Claims 5, 6, and 7, is at most (4/5) * √5^t + √5^{t−2} = √5^t.
The following theorem bounds the running time of FindCS: for a valid input (F, L, l, k), with n = |V ar(F )| and m = |Clauses(F )|, FindCS(F, L, l, k) solves the parameterized 2-aslasat problem in O(5^k * k(n + k) * (m + |L|)) time.
Proof. According to the assumptions of the theorem, (F, L, l, k) is a valid input. Assume that F is represented by its implication graph D = (V, A), which is almost identical to the implication graph of F with the only difference that V (D) corresponds to V ar(F ) ∪ V ar(L) ∪ {V ar(l)}, that is, for any literal l ′ such that V ar(l ′ ) ∈ (V ar(L) ∪ {V ar(l)}) \ V ar(F ), D has isolated nodes corresponding to l ′ and ¬l ′ . We also assume that the nodes corresponding to L, ¬L, l, ¬l are specifically marked. This representation of (F, L, l, k) can be obtained in a polynomial time from any other reasonable representation. It follows from Theorem 4 that FindCS(F, L, l, k) correctly solves the parameterized 2-aslasat problem with respect to the given input. Let us evaluate the complexity of FindCS(F, L, l, k). According to Lemma 8, the height of the search tree is at most α(F, L, l, k) ≤ n + k. The first operation performed by FindCS(F, L, l, k) is checking whether SW RT (F, L ∪ {l}) is true. Note that this is equivalent to checking the satisfiability of a 2-cnf F ′ which is obtained from F by adding clauses (l ′ ∨ l ′ ) for each l ′ ∈ L ∪ {l}. It is well known [17] that the satisfiability of a 2-cnf formula can be checked, and a satisfying assignment found if one exists, in time linear in the size of the formula, hence this check takes O(m + |L|) time. The proof of Lemma 4 also outlines an algorithm implementing Step 6: choose an arbitrary walk w from ¬l to ¬l in F (which, as noted in the proof of Theorem 2, corresponds to a walk from l to ¬l in D), find a satisfying assignment P of F which does not intersect with ¬L and choose a clause of w whose both literals are satisfied by P . Taking into account the above discussion, all the operations take O(m + |L|), hence Step 6 takes this time. Note that preparing an input for a recursive call takes O(1) because this preparation includes removal of one clause from F or adding one literal to L (with introducing appropriate changes to the implication graph). Therefore Steps 7 and 8 take O(1). Finally, note that for any subsequent recursive call (F ′ , L ′ , l, k ′ ) the implication graph of (F ′ , L ′ , l) is a subgraph of the graph of (F, L, l): every change of the graph in the path from (F, L, l, k) to (F ′ , L ′ , l, k ′ ) is caused by removal of a clause or adding to the second parameter a literal of a variable of F . Consequently, the complexity of any recursive call is O((m + |L|) * k) and the time taken by the entire run of FindCS(F, L, l, k) is O(5^k * k(n + k) * (m + |L|)) as required.
Fixed-Parameter Tractability of 2-ASAT problem
In this section we prove the main result of the paper, fixed-parameter tractability of the 2-ASAT problem. Proof. We introduce the following 2 intermediate problems.
Problem I1
Input: A satisfiable 2-cnf formula F , a non-contradictory set of literals L, a parameter k Output: A set S ⊆ Clauses(F ) such that |S| ≤ k and SW RT (F \ S, L) is true, if there is such a set S; 'NO' otherwise.
Problem I2
Input: A 2-cnf formula F, a parameter k, and a set S ⊆ Clauses(F) such that |S| = k + 1 and F \ S is satisfiable Output: A set Y ⊆ Clauses(F) such that |Y| < |S| and F \ Y is satisfiable, if there is such a set Y; 'NO' otherwise.
The following two claims prove the fixed-parameter tractability of Problem I1 through transformation of its instance into an instance of the 2-aslasat problem, and of Problem I2 through transformation of its instance into an instance of Problem I1. Then we will show that the 2-asat problem with no repeated occurrences of clauses can be solved through transformation of its instance into an instance of Problem I2. Finally, we show that the 2-asat problem with repeated occurrences of clauses is fpt through transformation of its instance into an instance of 2-asat without repeated occurrences of clauses.
Claim 8 Problem I1 with input (F, L, k) can be solved in time O(5^k · k · m^2), where m = |Clauses(F)|.
Proof. Observe that we may assume that Var(L) ⊆ Var(F). Otherwise we can take a subset L' ⊆ L such that Var(L') = Var(F) ∩ Var(L) and solve Problem I1 w.r.t. the instance (F, L', k). It is not hard to see that the resulting solution applies to (F, L, k) as well.
Let P be a satisfying assignment of F. If L ⊆ P then the empty set can be immediately returned. Otherwise partition L into two subsets L_1 and L_2 such that L_1 ⊆ P and ¬L_2 ⊆ P.
We apply a two-stage transformation of formula F. In the first stage we assign each clause of F a unique index from 1 to m, introduce new literals l_1, ..., l_m of distinct variables which do not intersect with Var(F), and replace the i-th clause (l' ∨ l'') by two clauses (l' ∨ l_i) and (¬l_i ∨ l''). Denote the resulting formula by F'. In the second stage we introduce two new literals l*_1 and l*_2 such that Var(l*_1) ∉ Var(F'), Var(l*_2) ∉ Var(F'), and Var(l*_1) ≠ Var(l*_2). Then we replace in the clauses of F' each occurrence of a literal of L_1 by l*_1, each occurrence of a literal of ¬L_1 by ¬l*_1, each occurrence of a literal of L_2 by l*_2, and each occurrence of a literal of ¬L_2 by ¬l*_2. Let F* be the resulting formula.
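For concreteness, the two-stage construction can be sketched as follows; this is a minimal Python sketch, with the clause encoding (signed integers), the function name, and the variable-allocation scheme being illustrative assumptions rather than notation from the paper:

```python
def transform_to_2aslasat(F, L1, L2, num_vars):
    """Two-stage transformation sketch (hypothetical encoding): F is a list of
    2-literal clauses, a literal is a nonzero int and -x denotes the negation of x.
    L1, L2 are the two parts of L, given as sets of literals.
    Returns (F_star, l1_star, l2_star)."""
    # Stage 1: give the i-th clause (a, b) a private selector variable s_i and
    # split it into (a, s_i) and (-s_i, b).
    F_prime = []
    next_var = num_vars + 1
    for (a, b) in F:
        s = next_var
        next_var += 1
        F_prime.append((a, s))
        F_prime.append((-s, b))
    # Stage 2: merge all literals of L1 into one fresh literal l1*, and all
    # literals of L2 into another fresh literal l2*.
    l1_star, l2_star = next_var, next_var + 1

    def substitute(lit):
        if lit in L1:
            return l1_star
        if -lit in L1:          # lit belongs to the negation of L1
            return -l1_star
        if lit in L2:
            return l2_star
        if -lit in L2:          # lit belongs to the negation of L2
            return -l2_star
        return lit

    F_star = [(substitute(a), substitute(b)) for (a, b) in F_prime]
    return F_star, l1_star, l2_star
```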
We claim that (F*, {l*_1}, l*_2) is a valid instance of the 2-aslasat problem. To show this we have to demonstrate that all the clauses of F* are pairwise different and that SWRT(F*, l*_1) is true. For the former, notice that all the clauses of F* are pairwise different because each clause is associated with the unique literal l_i or ¬l_i. This also allows us to introduce new notation. In particular, we denote the clause of F* containing l_i by C(l_i) and the clause containing ¬l_i by C(¬l_i).
For the latter, let P* be a set of literals obtained from P by replacing L_1 by l*_1 and ¬L_2 by ¬l*_2. Observe that for each i, P* satisfies either C(l_i) or C(¬l_i). Indeed, let (l' ∨ l'') be the origin of C(l_i) and C(¬l_i), i.e. the clause which is transformed into (l' ∨ l_i) and (¬l_i ∨ l'') in F'; then (l' ∨ l_i) and (¬l_i ∨ l'') become respectively C(l_i) and C(¬l_i) in F* (with possible replacement of l' or l'' or both). Since P is a satisfying assignment of F, l' ∈ P or l'' ∈ P. Assume the former. Then if C(l_i) = (l' ∨ l_i), l' ∈ P*. Otherwise, l' ∈ L_1 or l' ∈ ¬L_2. In the former case C(l_i) = (l*_1 ∨ l_i) and l*_1 ∈ P* by definition; in the latter case C(l_i) = (¬l*_2 ∨ l_i) and ¬l*_2 ∈ P* by definition. So, in all the cases P* satisfies C(l_i). It can be shown analogously that if l'' ∈ P then P* satisfies C(¬l_i). Now, let P*_2 be a set of literals which includes P* and, for each i, exactly one of {l_i, ¬l_i} selected as follows: if P* satisfies C(l_i) then ¬l_i ∈ P*_2; otherwise l_i ∈ P*_2. Thus P*_2 satisfies all the clauses of F*. By definition l*_1 ∈ P* ⊆ P*_2. It is also not hard to show that P*_2 is non-contradictory and that Var(P*_2) = Var(F*). Thus P*_2 is a satisfying assignment of F* containing l*_1, which witnesses that SWRT(F*, l*_1) is true.
We are going to show that there is a set S ⊆ Clauses(F) such that |S| ≤ k and SWRT(F \ S, L) is true if and only if (F*, {l*_1}, l*_2) has a CS of size at most k.
Assume that there is a set S as above. Let S* ⊆ Clauses(F*) be the set consisting of all clauses C(l_i) such that the clause with index i belongs to S. It is clear that |S*| = |S|. Let us show that S* is a CS of (F*, {l*_1}, l*_2). Let P be a satisfying assignment of F \ S which does not intersect with ¬L. Let P_1 be the set of literals obtained from P by replacing the set of all the occurrences of literals of L_1 by l*_1 and the set of all the occurrences of literals of L_2 by l*_2. Observe that for each i, at least one of {C(l_i), C(¬l_i)} either belongs to S* or is satisfied by P_1. In particular, assume that for some i, C(l_i) ∉ S*. Then the origin of C(l_i) and C(¬l_i) belongs to F \ S, and it can be shown that P_1 satisfies C(l_i) or C(¬l_i) similarly to the way we have shown that P* satisfies C(l_i) or C(¬l_i) three paragraphs above.
For each i, add to P_1 an appropriate l_i or ¬l_i so that the remaining clauses of F* \ S* are satisfied; let P_2 be the resulting set of literals. Add to P_2 one arbitrary literal of each variable of Var(F* \ S*) \ Var(P_2). It is not hard to see that the resulting set of literals P_3 is a satisfying assignment of F* \ S* which contains neither ¬l*_1 nor ¬l*_2. It follows that S* is a CS of (F*, {l*_1}, l*_2) of size at most k.
Conversely, let S* be a CS of (F*, {l*_1}, l*_2) of size at most k. Let S be a set of clauses of F such that the clause of index i belongs to S if and only if C(l_i) ∈ S* or C(¬l_i) ∈ S*. Clearly |S| ≤ |S*|. Let S*_2 ⊆ Clauses(F*) be the set of all clauses C(l_i) and C(¬l_i) such that the clause of index i belongs to S. Since S* ⊆ S*_2, we can specify a satisfying assignment P*_2 of F* \ S*_2 which contains neither ¬l*_1 nor ¬l*_2. Let P be a set of literals obtained from P*_2 by removal of all l_i, ¬l_i, removal of l*_1 and l*_2, and adding all the literals l' of L such that l' or ¬l' appear in the clauses of F \ S. It is not hard to see that Var(P) = Var(F \ S) and that P does not intersect with ¬L.
To observe that P is a satisfying assignment of F \ S, note that there is a bijection between the pairs {C(l_i), C(¬l_i)} of clauses of F* \ S*_2 and the clauses of F \ S. In particular, each clause of F \ S is the origin of exactly one pair {C(l_i), C(¬l_i)} of F* \ S*_2 in the form described above, and each pair {C(l_i), C(¬l_i)} of F* \ S*_2 has exactly one origin in F \ S. Now, let (l' ∨ l'') be a clause of F \ S which is the origin of C(l_i) = (t' ∨ l_i) and C(¬l_i) = (¬l_i ∨ t'') of F* \ S*_2, where l' = t' or t' is the result of replacement of l', and t'' has the analogous correspondence to l''. By definition of P*_2, either t' ∈ P*_2 or t'' ∈ P*_2. Assume the former. In this case, if l' = t' then l' ∈ P. Otherwise t' ∈ {l*_1, l*_2} and, consequently, l' ∈ L. By definition of P, l' ∈ P. It can be shown analogously that if t'' ∈ P*_2 then l'' ∈ P. It follows that any clause of F \ S is satisfied by P.
It follows from the above argumentation that Problem I1 with input (F, L, k) can be solved by solving the parameterized 2-aslasat problem with input (F*, {l*_1}, l*_2, k). In particular, if the output of the 2-aslasat problem on (F*, {l*_1}, l*_2, k) is a set S*, this set can be transformed into S as shown above and S can be returned; otherwise 'NO' is returned. Observe that |Clauses(F*)| = O(m) and |Var(F*)| = O(m + |Var(F)|). Taking into account our note in the proof of Theorem 6 that |Var(F)| = O(m), we get |Var(F*)| = O(m). Also note that we may assume that k < m, because otherwise the algorithm can immediately return Clauses(F*).
Substituting this data into the runtime of the 2-aslasat problem following from Theorem 6, we obtain that Problem I1 can be solved in time O(5^k · k · m · (m + |{l*_1}|)) = O(5^k · k · m^2).
Claim 9 Problem I2 with input (F, S, k) can be solved in time O(15^k · k · m^2), where m = |Clauses(F)|.
Proof. We solve Problem I2 by the following algorithm. Explore all possible subsets E of S of size at most k. For the given set E, explore all the sets of literals L obtained by choosing l_1 or l_2 for each clause (l_1 ∨ l_2) of S \ E and creating L as the set of all chosen literals. For each of the resulting pairs (E, L) such that L is non-contradictory, solve Problem I1 for input (F*, L, k − |E|), where F* = F \ S. If for at least one pair (E, L) the output is a set S*, then return E ∪ S*. Otherwise return 'NO'. Assume that this algorithm returns E ∪ S* such that S* has been obtained for a pair (E, L). Let P be a satisfying assignment of F* \ S* which does not intersect with ¬L. Observe that P ∪ L is non-contradictory, that P ∪ L satisfies all the clauses of Clauses(F* \ S*) ∪ (S \ E) and that
| 15,982 |
0712.3331
|
2951897619
|
In recent years, considerable advances have been made in the study of properties of metric spaces in terms of their doubling dimension. This line of research has not only enhanced our understanding of finite metrics, but has also resulted in many algorithmic applications. However, we still do not understand the interaction between various graph-theoretic (topological) properties of graphs, and the doubling (geometric) properties of the shortest-path metrics induced by them. For instance, the following natural question suggests itself: graph @math with @math such that the shortest path metric @math on @math is still doubling, and which agrees with @math on @math . This is often useful, given that unweighted graphs are often easier to reason about. We show that for any metric space @math , there is an graph @math with shortest-path metric @math such that -- for all @math , the distances @math , and -- the doubling dimension for @math is not much more than that of @math , where this change depends only on @math and not on the size of the graph. We show a similar result when both @math and @math are restricted to be trees: this gives a simpler proof that doubling trees embed into constant dimensional Euclidean space with constant distortion. We also show that our results are tight in terms of the tradeoff between distortion and dimension blowup.
|
The notion of doubling dimension was introduced by Assouad @cite_20 and first used in algorithm design by Clarkson @cite_0 . The properties of doubling metrics and their algorithmic applications have since been studied extensively, a few examples of which appear in @cite_4 @cite_7 @cite_18 @cite_12 @cite_19 @cite_9 @cite_17 @cite_1 @cite_21 @cite_3 .
|
{
"abstract": [
"We present a simple deterministic data structure for maintaining a set S of points in a general metric space, while supporting proximity search (nearest neighbor and range queries) and updates to S (insertions and deletions). Our data structure consists of a sequence of progressively finer e-nets of S, with pointers that allow us to navigate easily from one scale to the next.We analyze the worst-case complexity of this data structure in terms of the \"abstract dimensionality\" of the metric S. Our data structure is extremely efficient for metrics of bounded dimension and is essentially optimal in a certain model of distance computation. Finally, as a special case, our approach improves over one recently devised by Karger and Ruhl [KR02].",
"The doubling constant of a metric space (X, d) is the smallest value spl lambda such that every ball in X can be covered by spl lambda balls of half the radius. The doubling dimension of X is then defined as dim (X) = log sub 2 spl lambda . A metric (or sequence of metrics) is called doubling precisely when its doubling dimension is bounded. This is a robust class of metric spaces which contains many families of metrics that occur in applied settings. We give tight bounds for embedding doubling metrics into (low-dimensional) normed spaces. We consider both general doubling metrics, as well as more restricted families such as those arising from trees, from graphs excluding a fixed minor, and from snowflaked metrics. Our techniques include decomposition theorems for doubling metrics, and an analysis of a fractal in the plane according to T. J. Laakso (2002). Finally, we discuss some applications and point out a central open question regarding dimensionality reduction in L sub 2 .",
"We resolve the following conjecture raised by Levin together with Linial, London, and Rabinovich [16]. Let Z ∞ d be the infinite graph whose vertex set is Zd and which has an edge (u,v) whenever ||u-v|| ∞ = 1. Let dim(G) be the smallest d such that G occurs as a (not necessarily induced) subgraph of Z ∞d . The growth rate of G, denoted ρ G , is the minimum ρ such that every ball of radius r > 1 in G contains at most rρ vertices. By simple volume arguments, dim(G) = Ω(ρ G ). Levin conjectured that this lower bound is tight, i.e., that dim(G) = O(ρ G ) for every graph G.Previously, it was not known whether dim(G) could be upper bounded by any function of ρ G , even in the special case of trees. We show that a weaker form of Levin's conjecture holds by proving that, for every graph G, dim(G) = O(ρ G log ρ G ). We disprove, however, the specific bound of the conjecture and show that our upper bound is tight by exhibiting graphs for which dim(G) =Ω(ρ G log ρ G ). For families of graphs which exclude a fixed minor, we salvage the strong form, showing that dim(G) = O(ρ G ). This holds also for graphs without long induced simple cycles. Our results extend to a variant of the conjecture for finite-dimensional Euclidean spaces due to Linial[15].",
"We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O (c6n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O (c12 log n) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"We consider the problem of name-independent routing in doubling metrics. A doubling metric is a metric space whose doubling dimension is a constant, where the doubling dimension of a metric space is the least value α such that any ball of radius r can be covered by at most 2α balls of radius r 2.Given any δ>0 and a weighted undirected network G whose shortest path metric d is a doubling metric with doubling dimension α, we present a name-independent routing scheme for G with (9+δ)-stretch, (2+1 δ)O(α) (log δ)2 (log n)-bit routing information at each node, and packet headers of size O(log n), where δ is the ratio of the largest to the smallest shortest path distance in G.In addition, we prove that for any e ∈ (0,8), there is a doubling metric network G with n nodes, doubling dimension α ≤ 6 - log e, and Δ=O(21 en) such that any name-independent routing scheme on G with routing information at each node of size o(n(e 60)2)-bits has stretch larger than 9-e. Therefore assuming that Δ is bounded by a polynomial on n, our algorithm basically achieves optimal stretch for name-independent routing in doubling metrics with packet header size and routing information at each node both bounded by a polylogarithmic function of n.",
"In this article we introduce the notion of nearest-neighbor-preserving embeddings. These are randomized embeddings between two metric spaces which preserve the (approximate) nearest-neighbors. We give two examples of such embeddings for Euclidean metrics with low “intrinsic” dimension. Combining the embeddings with known data structures yields the best-known approximate nearest-neighbor data structures for such metrics.",
"",
"",
"We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data-structure is then applied to obtain improved algorithms for the following problems: approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near linear and the space being used is linear.",
"The doubling dimension of a metric is the smallest k such that any ball of radius 2r can be covered using 2k balls of radius r. This concept for abstract metrics has been proposed as a natural analog to the dimension of a Euclidean space. If we could embed metrics with low doubling dimension into low dimensional Euclidean spaces, they would inherit several algorithmic and structural properties of the Euclidean spaces. Unfortunately however, such a restriction on dimension does not suffice to guarantee embeddibility in a normed space.In this paper we explore the option of bypassing the embedding. In particular we show the following for low dimensional metrics: Quasi-polynomial time (1+e)-approximation algorithm for various optimization problems such as TSP, k-median and facility location. (1+e)-approximate distance labeling scheme with optimal label length. (1+e)-stretch polylogarithmic storage routing scheme.",
"A Lipschitz embedding of a mctric space (X, d) into another one (Y, 8) is an application : X -»• Y such that : 3 1. B 6 ]0, + oo [. Vx. x ' e X, Ad(x, x ' ) 6( (x), f x' ) Bd(x, x'). We dcscribe here three mcthods to obtain Lipschitz embeddings of the metric space (R*, || ||) into some metric space (R\", || || ). The third method allows us to minimize, for k = 1, the rank ofsuch an embedding (i.c. to obtain the minimal value of the integer n).",
"We present a new data structure that facilitates approximate nearest neighbor searches on a dynamic set of points in a metric space that has a bounded doubling dimension. Our data structure has linear size and supports insertions and deletions in O(log n) time, and finds a (1+e)-approximate nearest neighbor in time O(log n) + (1 e)O(1). The search and update times hide multiplicative factors that depend on the doubling dimension; the space does not. These performance times are independent of the aspect ratio (or spread) of the points."
],
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"2034188144",
"2127248062",
"2011571064",
"2133296809",
"1981870397",
"2024930473",
"",
"1983067644",
"2045134120",
"2053578961",
"100489996",
"2144397897"
]
}
| 0 |
||
0712.3331
|
2951897619
|
In recent years, considerable advances have been made in the study of properties of metric spaces in terms of their doubling dimension. This line of research has not only enhanced our understanding of finite metrics, but has also resulted in many algorithmic applications. However, we still do not understand the interaction between various graph-theoretic (topological) properties of graphs, and the doubling (geometric) properties of the shortest-path metrics induced by them. For instance, the following natural question suggests itself: graph @math with @math such that the shortest path metric @math on @math is still doubling, and which agrees with @math on @math . This is often useful, given that unweighted graphs are often easier to reason about. We show that for any metric space @math , there is an graph @math with shortest-path metric @math such that -- for all @math , the distances @math , and -- the doubling dimension for @math is not much more than that of @math , where this change depends only on @math and not on the size of the graph. We show a similar result when both @math and @math are restricted to be trees: this gives a simpler proof that doubling trees embed into constant dimensional Euclidean space with constant distortion. We also show that our results are tight in terms of the tradeoff between distortion and dimension blowup.
|
Somewhat similar in spirit to our work is the @math -extension problem @cite_13 @cite_5 @cite_24 . Given a graph @math , the 0-extension ( cf. Lipschitz Extendability @cite_6 @cite_23 @cite_11 ) problem deals with extending a (Euclidean) embedding of the vertices of the graph to an embedding of the convex closure of the graph, while approximately preserving the Lipschitz constant of the embedding. Our results can be interpreted as analogues to the above where the goal is to approximately preserve the doubling dimension.
|
{
"abstract": [
"It is proved that ifY ⊂X are metric spaces withY havingn≧2 points then any mapf fromY into a Banach spaceZ can be extended to a map ( f ) fromX intoZ so that ( | f |_ lip c log n | f |_ lip ) wherec is an absolute constant. A related result is obtained for the case whereX is assumed to be a finite-dimensional normed space andY is an arbitrary subset ofX.",
"Given a graph G = (V, E), a set of terminals T ⊆ V, and a metric D on T, the 0-extension problem is to assign vertices in V to terminals, so that the sum, over all edges e, of the distance (under D) between the terminals to which the end points of e are assigned, is minimized. This problem was first studied by Karzanov. Calinescu, Karloff and Rabani gave an O(logk) approximation algorithm based on a linear programming relaxation for the problem, where k is the number of terminals. We improve on this bound, and give an O(log k log log k) approximation algorithm for the problem.",
"",
"In the 0-extension problem, we are given a weighted graph with some nodes marked as terminals and a semimetric on the set of terminals. Our goal is to assign the rest of the nodes to terminals so as to minimize the sum, over all edges, of the product of the edge's weight and the distance between the terminals to which its endpoints are assigned. This problem generalizes the multiway cut problem of [SIAM J. Comput. , 23 (1994), pp. 864--894] and is closely related to the metric labeling problem introduced by Kleinberg and Tardos [Proceedings of the 40th IEEE Annual Symposium on Foundations of Computer Science, New York, 1999, pp. 14--23]. We present approximation algorithms for 0-Extension . In arbitrary graphs, we present a O(log k)-approximation algorithm, k being the number of terminals. We also give O(1)-approximation guarantees for weighted planar graphs. Our results are based on a natural metric relaxation of the problem previously considered by Karzanov [European J. Combin., 19 (1998), pp. 71--101]. It is similar in flavor to the linear programming relaxation of Garg, Vazirani, and Yannakakis [SIAM J. Comput. , 25 (1996), pp. 235--251] for the multicut problem, and similar to relaxations for other graph partitioning problems. We prove that the integrality ratio of the metric relaxation is at least @math for a positive c for infinitely many k. Our results improve some of the results of Kleinberg and Tardos, and they further our understanding on how to use metric relaxations.",
"LetH=(T,U) be a connected graph,V?Ta set, andca non-negative function on the unordered pairs of elements ofV. In theminimum0-extension problem(*), one is asked to minimize the inner productcmover all metricsmonVsuch that (i)mcoincides with the distance function ofHwithinT; and (ii) eachv?Vis at zero distance from somes?T, i.e.m(v,s)=0. This problem is known to be NP-hard ifH=K3(as being equivalent to the minimum 3-terminal cut problem), while it is polynomially solvable ifH=K2(the minimum cut problem) orH=K2,r(the minimum (2,r)-metric problem). We study problem (*) for all fixedH. More precisely, we consider the linear programming relaxation (**) of (*) that is obtained by dropping condition (ii) above, and callHminimizableif the minima in (*) and (**) coincide for allVandc. Note that for such anHproblem (*) is solvable in strongly polynomial time. Our main theorem asserts thatHis minimizable if and only ifHis bipartite, has no isometric circuit with six or more nodes, and is orientable in the sense thatHcan be oriented so that nonadjacent edges of any 4-circuit are oppositely directed along this circuit. The proof is based on a combinatorial and topological study of tight and extreme extensions of graph metrics. Based on the idea of the proof of the NP-hardness for the minimum 3-terminal cut problem in 4, we then show that the minimum 0-extension problem is strongly NP-hard for many non-minimizable graphsH. Other results are also presented.",
"A metric space X is said to be absolutely Lipschitz extendable if every Lipschitz function f from X into any Banach space Z can be extended to any containing space Y⊇X, where the loss in the Lipschitz constant in the extension is independent of Y,Z, and f. We show that various classes of natural metric spaces are absolutely Lipschitz extendable. To cite this article: J.R. Lee, A. Naor, C. R. Acad. Sci. Paris, Ser. I 338 (2004)."
],
"cite_N": [
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2073031580",
"2048843200",
"1582875389",
"2038964811",
"2055636480",
"2061451339"
]
}
| 0 |
||
0712.1549
|
2145482252
|
We adapt multilevel, force-directed graph layout techniques to visualizing dynamic graphs in which vertices and edges are added and removed in an online fashion (i.e., unpredictably). We maintain multiple levels of coarseness using a dynamic, randomized coarsening algorithm. To ensure the vertices follow smooth trajectories, we employ dynamics simulation techniques, treating the vertices as point particles. We simulate fine and coarse levels of the graph simultaneously, coupling the dynamics of adjacent levels. Projection from coarser to finer levels is adaptive, with the projection determined by an affine transformation that evolves alongside the graph layouts. The result is a dynamic graph visualizer that quickly and smoothly adapts to changes in a graph.
|
Graph animation is widely used for exploring and navigating large graphs (e.g., @cite_14 .) There is a vast amount of work in this area, so we focus on systems that aim to animate changing graphs.
|
{
"abstract": [
"This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2147468287"
]
}
|
Dynamic Multilevel Graph Visualization
|
Our work is motivated by a need to visualize dynamic graphs, that is, graphs from which vertices and edges are being added and removed. Applications include visualizing complex algorithms (our initial motivation), ad hoc wireless networks, databases, monitoring distributed systems, realtime performance profiling, and so forth. Our design concerns are: D1. The system should support online revision of the graph, that is, changes to the graph that are not known in advance. Changes made to the graph may radically alter its structure.
D2. The animation should appear smooth. It should be possible to visually track vertices as they move, avoiding abrupt changes.
D3. Changes made to the graph should appear immediately, and the layout should stabilize rapidly after a change.
D4. The system should produce aesthetically pleasing, good quality layouts.
We make two principal contributions:
1. We adapt multilevel force-directed graph layout algorithms [Wal03] to the problem of dynamic graph layout.
2. We develop and analyze an efficient algorithm for dynamically maintaining the coarser versions of a graph needed for multilevel layout.
Force-directed graph layout
Force-directed layout uses a physics metaphor to find graph layouts [Ead84, KK89, FLM94, FR91]. Each vertex is treated as a point particle in a space (usually R² or R³). There are many variations on how to translate the graph into physics. We make fairly conventional choices, modelling edges as springs which pull connected vertices together. Repulsive forces between all pairs of vertices act to keep the vertices spread out. We use a potential energy V defined by
V = Σ_{(v_i, v_j) ∈ E} (1/2) K ‖x_i − x_j‖²   [spring potential]   +   Σ_{v_i, v_j ∈ V, v_i ≠ v_j} f_0 / (R + ‖x_i − x_j‖)   [repulsion potential]    (1)
where x i is the position of vertex v i , K is a spring constant, f 0 is a repulsion force constant, and R is a small constant used to avoid singularities.
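As an illustration, Eqn (1) can be evaluated directly as in the following sketch (assuming NumPy; the constants K, f0, R and the data layout are placeholders, not values prescribed by the paper):

```python
import numpy as np

def potential_energy(x, edges, K=1.0, f0=1.0, R=1e-3):
    """x: (n, dim) array of vertex positions; edges: list of (i, j) index pairs.
    Returns the value of Eqn (1); the repulsion sum runs over ordered pairs i != j."""
    V = 0.0
    for i, j in edges:                                   # spring potential
        V += 0.5 * K * np.sum((x[i] - x[j]) ** 2)
    n = len(x)
    for i in range(n):                                   # repulsion potential
        for j in range(n):
            if i != j:
                V += f0 / (R + np.linalg.norm(x[i] - x[j]))
    return V
```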
To minimize the energy of Eqn (1), one typically uses 'trust region' methods, where the layout is advanced in the general direction of the gradient ∇V , but restricting the distance by which vertices may move in each step. The maximum move distance is often governed by an adaptive 'temperature' parameter as in annealing methods, so that step sizes decrease as the iteration converges.
One challenge in force-directed layout is that the repulsive forces that act to evenly space the vertices become weaker as the graph becomes larger. This results in large graph layouts converging slowly, a problem addressed by multilevel methods.
Multilevel graph layout algorithms [Wal03,KCH02] operate by repeatedly 'coarsening' a large graph to obtain a sequence of graphs G 0 , G 1 , . . . , G m , where each G i+1 has fewer vertices and edges than G i , but is structurally similar. For a pair (G i , G i+1 ), we refer to G i as the finer graph and G i+1 as the coarser graph. The coarsest graph G m is laid out using standard forcedirected layout. This layout is interpolated (projected) to produce an initial layout for the finer graph G m−1 . Once the force-directed layout of G m−1 converges, it is interpolated to provide an initial layout for G m−2 , and so forth.
Our approach
Roughly speaking, we develop a dynamic version of Walshaw's multilevel force-directed layout algorithm [Wal03].
Because of criterion D3, that changes to the graph appear immediately, we focused on approaches in which the optimization process is visualized directly, i.e., the vertex positions rendered reflect the current state of the energy minimization process.
A disadvantage of the gradient-following algorithms described above is that the layout can repeatedly overshoot a minimum of the potential function, resulting in zig-zagging. This is unimportant for offline layouts, but can result in jerky trajectories if the layout process is being animated. We instead chose a dynamics-based approach in which vertices have momentum. Damping is used to minimize oscillations. This ensures that vertices follow smooth trajectories (criterion D2).
We use standard dynamics techniques to simultaneously simulate all levels of coarseness of a graph as one large dynamical system. We couple the dynamics of each graph V i to its coarser version V i+1 so that 'advice' about layouts can propagate from coarser to finer graphs.
Our approach entailed two major technical challenges:
1. How to maintain coarser versions of the graph as vertices and edges are added and removed.
2. How to couple the dynamics of finer and coarser graphs so that 'layout advice' can quickly propagate from coarser to finer graphs.
We have addressed the first challenge by developing a fully dynamic, Las Vegas-style randomized algorithm that requires O(1) operations per edge insertion or removal to maintain a coarser version of a bounded degree graph (Section 4).
We address the second challenge by using coarse graph vertices as inertial reference frames for vertices in the fine graph (Section 3). The projection from coarser to finer graphs is given dynamics, and evolves simultaneously with the vertex positions, converging to a least-squares fit of the coarse graph onto the finer graph (Section 3.2.1). We introduce time dilations between coarser and finer graphs, which reduces the problem of the finer graph reacting to cancel motions of the coarser graph (Section 3.2.2).
Demonstrations
Accompanying this paper are the following movies. All movies are realtime screen captures on an 8-core Mac. Unless otherwise noted, the movies use one core and 4th order Runge-Kutta integration.
• ev1049_cube.mov: Layout of a 10x10x10 cube graph using singlelevel dynamics (Section 3.1), 8 cores, and Euler time steps.
• ev1049_coarsening.mov: Demonstration of the dynamic coarsening algorithm (Section 4).
• ev1049_twolevel.mov: Two-level dynamics, showing the projection dynamics and dynamic coarsening.
• ev1049_threelevel.mov: Three-level dynamics showing a graph moving quickly through assorted configurations. The coarsest graph is maintained automatically from modifications the first-level coarsener makes to the second graph.
• ev1049_compare.mov: Side-by-side comparison of single-level vs. threelevel dynamics, illustrating the quicker convergence achieved by multilevel dynamics.
• ev1049_multilevel.mov: Showing quick convergence times using multilevel dynamics (4-6 levels) on static graphs being reset to random vertex positions.
• ev1049_randomgraph.mov: Visualization of the emergence of the giant component in a random graph (Section 6). (8 cores, Euler step).
• ev1049_tree.mov: Visualization of rapid insertions into a binary tree (8 cores, Euler step).
Figure 1: Still frames from the demonstration movies accompanying this paper (panels (a)-(h): ev1049_cube, ev1049_coarsening, ev1049_twolevel, ev1049_threelevel, ev1049_compare, ev1049_multilevel, ev1049_randomgraph, ev1049_tree).
Layout Dynamics
We use Lagrangian dynamics to derive the equations of motion for the simulation. Lagrangian dynamics is a bit excessive for a simple springs-and-repulsion graph layout. However, we have found that convergence times are greatly improved by dynamically adapting the interpolation between coarser and finer graphs. For this we use generalized forces, which are easily managed with a Lagrangian approach.
As is conventional, we write ẋ_i for the velocity of vertex i, and ẍ_i for its acceleration. We take all masses to be 1, so that velocity and momentum are interchangeable.
In addition to the potential energy V (Eqn (1)), we define a kinetic energy T . For a single graph, this is simply:
T = Σ_{v_i ∈ V} (1/2) ‖ẋ_i‖²    (2)
Roughly speaking, T describes channels through which potential energy (layout badness) can be converted to kinetic energy (vertex motion). Kinetic energy is then dissipated through friction, which results in the system settling into a local minimum of the potential V . We incorporate friction by adding extra terms to the basic equations of motion.
The equations of motion are obtained from the Euler-Lagrange equation:
( (d/dt) ∂/∂ẋ_i − ∂/∂x_i ) L = 0    (3)
where the quantity L = T − V is the Lagrangian.
Single level dynamics
The coarsest graph has straightforward dynamics. Substituting the definitions of (1, 2) into the Euler-Lagrange equation yields the basic equation of motion ẍ_i = F_i for a vertex, where F_i is the net force:
ẍ_i = Σ_{(v_i, v_j) ∈ E} −K(x_i − x_j)   [spring forces]   +   Σ_{v_j ≠ v_i} f_0 / (R + ‖x_i − x_j‖)² · (x_i − x_j) / ‖x_i − x_j‖   [repulsion forces]    (4)
We calculate the pairwise repulsion forces in O(n log n) time (with n = |V |, the number of vertices) using the Barnes-Hut algorithm [BH86].
The spring and repulsion forces are supplemented by a damping force defined by F_i^d = −d ẋ_i, where d is a constant. Our system optionally adds a 'gravity' force that encourages directed edges to point in a specified direction (e.g., down).
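For illustration, a direct O(n²) evaluation of the net force of Eqn (4) plus the damping term might look as follows (a sketch only; the paper computes the repulsion sum with Barnes-Hut, and all constants here are placeholder values):

```python
import numpy as np

def net_forces(x, v, edges, K=1.0, f0=1.0, R=1e-3, d=0.1):
    """x, v: (n, dim) arrays of positions and velocities; edges: list of (i, j).
    Returns the per-vertex force of Eqn (4) plus the damping force -d*v."""
    F = np.zeros_like(x)
    for i, j in edges:                                   # spring forces
        F[i] += -K * (x[i] - x[j])
        F[j] += -K * (x[j] - x[i])
    n = len(x)
    for i in range(n):                                   # naive all-pairs repulsion
        for j in range(n):
            if i != j:
                diff = x[i] - x[j]
                dist = np.linalg.norm(diff)
                F[i] += f0 / (R + dist) ** 2 * diff / max(dist, 1e-12)
    return F - d * v                                     # damping F_i^d = -d * v_i
```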
Two-level dynamics
We now describe how the dynamics of a graph interacts with its coarser version. For notational clarity we write y_i for the position of the coarse vertex corresponding to vertex i, understanding that each vertex in the coarse graph may correspond to multiple vertices in the finer graph. In Walshaw's static multilevel layout algorithm [Wal03], each vertex x_i simply uses as its starting position y_i, the position of its coarser version. To adapt this idea to a dynamic setting, we begin by defining the position of x_i to be y_i plus some displacement δ_i, i.e.:
x i = δ i + y i
However, in practice this does not work as well as one might hope, and convergence is faster if one performs some scaling from the coarse to fine graph, for example
x i = δ i + ay i
A challenge in doing this is that the appropriate scaling depends on the characteristics of the particular graph. Suppose the coarse graph roughly halves the number of vertices in the fine graph. If the fine graph is, for example, a three-dimensional cube arrangement of vertices with 6-neighbours, then the expansion ratio needed will be ≈ 2^(1/3), or about 1.26; a two-dimensional square arrangement of vertices needs an expansion ratio of ≈ √2, or about 1.41. Since the graph is dynamic, the best expansion ratio can also change over time. Moreover, the optimal amount of scaling might be different for each axis, and there might be differences in how the fine and coarse graph are oriented in space.
Such considerations led us to consider affine transformations from the coarse to fine graph. We use projections of the form
x_i = δ_i + α y_i + β    (5)
where α is a linear transformation (a 3x3 matrix) and β is a translation. The variables (α, β) are themselves given dynamics, so that the projection converges to a least-squares fit of the coarse graph to the fine graph.
Frame dynamics
We summarize here the derivation of the time evolution equations for the affine transformation (α, β). Conceptually, we think of the displacements δ i as "pulling" on the transformation: if all the displacements are to the right, the transformation will evolve to shift the coarse graph to the right; if they are all outward, the transformation will expand the projection of the coarse graph, and so forth. In this way the finer graph 'pulls' the projection of the coarse graph around it as tightly as possible.
We derive the equations of motion for α and β using Lagrangian dynamics. To simplify the derivation we pretend that both graph layouts are stationary, and that the displacements δ_i behave like springs between the fine graph and the projected coarse graph, acting on α and β via 'generalized forces.' By setting up appropriate potential and kinetic energy terms, the Euler-Lagrange equations yield:
α̈ = (1/n) Σ_i ( δ_i y_i^T + y_i δ_i^T )    (6)
β̈ = (1/n) Σ_i δ_i    (7)
To damp oscillations and dissipate energy we introduce damping terms −d_α α̇ and −d_β β̇.
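A sketch of advancing the frame (α, β) by one explicit Euler step, following Eqns (6)-(7) as written above together with the damping terms; the step size and damping constants are assumptions:

```python
import numpy as np

def step_frame(alpha, beta, alpha_dot, beta_dot, delta, y,
               d_alpha=0.5, d_beta=0.5, dt=0.01):
    """One Euler step for the affine frame (alpha, beta).
    delta: (n, dim) displacements; y: (n, dim) coarse vertex positions."""
    n = len(delta)
    # Eqn (6): alpha'' = (1/n) * sum_i (delta_i y_i^T + y_i delta_i^T), minus damping
    alpha_ddot = sum(np.outer(delta[i], y[i]) + np.outer(y[i], delta[i])
                     for i in range(n)) / n - d_alpha * alpha_dot
    # Eqn (7): beta'' = (1/n) * sum_i delta_i, minus damping
    beta_ddot = delta.sum(axis=0) / n - d_beta * beta_dot
    alpha_dot = alpha_dot + dt * alpha_ddot
    beta_dot = beta_dot + dt * beta_ddot
    return alpha + dt * alpha_dot, beta + dt * beta_dot, alpha_dot, beta_dot
```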
Time dilation
We now turn to the equations of motion for vertices in the two-level dynamics. The equations for δ̇_i and δ̈_i are obtained by differentiating Eqn (5):
x_i = δ_i + β + α y_i   [proj. position]    (8)
ẋ_i = δ̇_i + β̇ + α̇ y_i + α ẏ_i   [proj. velocity]    (9)
ẍ_i = δ̈_i + β̈ + α̈ y_i + 2 α̇ ẏ_i + α ÿ_i   [proj. acceleration]    (10)
Let F_i be the forces acting on the vertex x_i. Substituting Eqn (10) into ẍ_i = F_i and rearranging, one obtains an equation of motion for the displacement:
δ̈_i = F_i − ( β̈ + α̈ y_i + 2 α̇ ẏ_i + α ÿ_i )   [proj. acceleration]    (11)
The projected acceleration of the coarse vertex can be interpreted as a 'pseudoforce' causing the vertex to react against motions of its coarser version. If Eqn (11) were used as the equation of motion, the coarse and fine graph would evolve independently, with no interaction. (We have used this useful property to check correctness of some aspects of our system.)
The challenge, then, is how to adjust Eqn (11) in some meaningful way to couple the finer and coarse graph dynamics. Our solution is based on the idea that the coarser graph layout evolves similarly to the finer graph, but on a different time scale: the coarse graph generally converges much more quickly. To achieve a good fit between the coarse and fine graph we might slow down the evolution of the coarse graph. Conceptually, we try to do the opposite, speeding up evolution of the fine graph to achieve a better fit. Rewriting Eqn (8) to make each variable an explicit function of time, and incorporating a time dilation, we obtain
x_i(t) = δ_i(t) + β(t) + α(t) y_i(φt)   [proj. position]    (12)
where φ is a time dilation factor to account for the differing time scales of the coarse and fine graph. Carrying this through to the acceleration equation yields the equation of motion
δ̈_i = F_i − ( β̈ + α̈ y_i + 2 α̇ φ ẏ_i + α φ² ÿ_i )   [proj. acceleration]    (13)
If for example the coarser graph layout converged at a rate twice that of the finer graph, we might take φ = 1/2, with the effect that we would discount the projected acceleration ÿ_i by a factor of φ² = 1/4. In practice we have used values 0.1 ≤ φ ≤ 0.25. Applied across multiple levels of coarse graphs, we call this approach multilevel time dilation.
In addition to the spring and repulsion forces in F_i, we include a drag term F_i^d = −d δ̇_i in the forces of Eqn (13).
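Putting Eqn (13) and the drag term together, the displacement acceleration could be computed as in the following sketch (argument names and the default value of φ are illustrative):

```python
def displacement_accel(F_i, beta_ddot, alpha, alpha_dot, alpha_ddot,
                       y_i, y_dot_i, y_ddot_i, delta_dot_i, phi=0.2, d=0.1):
    """Right-hand side of Eqn (13) for one vertex, plus the drag term -d*delta_dot_i.
    The alpha arguments are (dim, dim) arrays; the remaining arguments are (dim,) vectors."""
    proj_accel = (beta_ddot + alpha_ddot @ y_i
                  + 2.0 * phi * (alpha_dot @ y_dot_i)
                  + phi ** 2 * (alpha @ y_ddot_i))
    return F_i - proj_accel - d * delta_dot_i
```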
Multilevel dynamics
To handle multiple levels of coarse graphs, we iterate the two-level dynamics. The dynamics simulation simultaneously integrates the following equations:
• The equations of motion for the vertices in the coarsest graph, using the single-level dynamics of Section 3.1.
• The equations for the projection α, β between each coarser and finer graph pair (Section 3.2.1).
• The equations of motion δ̈_i for the displacements of vertices in the finer graphs, using the two-level dynamics of Section 3.2.
In our implementation, the equations are integrated using an explicit, fourth-order Runge-Kutta method. (We also have a simple Euler-step method, which is fast, but not as reliably stable.)
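For reference, a generic classical fourth-order Runge-Kutta step over a packed state vector looks as follows; deriv stands for whatever routine assembles the right-hand sides described above and is not a name from the paper:

```python
def rk4_step(state, deriv, t, dt):
    """Classical 4th-order Runge-Kutta step for state' = deriv(t, state),
    where state is a flat NumPy array packing all integrated quantities."""
    k1 = deriv(t, state)
    k2 = deriv(t + 0.5 * dt, state + 0.5 * dt * k1)
    k3 = deriv(t + 0.5 * dt, state + 0.5 * dt * k2)
    k4 = deriv(t + dt, state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```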
Equilibrium positions of the multilevel dynamics
We prove here that a layout found using the multilevel dynamics is an equilibrium position of the potential energy function of Eqn (1). This establishes that the multilevel approach does not introduce spurious minima, and can be expected to converge to the same layouts as a single-level layout, only faster.
Theorem 3.1. Let (X, Ẋ) be an equilibrium position of the two-level dynamics, where X = (δ_1, δ_2, ..., α, β, y_1, y_2, ...) and Ẍ = Ẋ = 0. Then (x_1, x_2, ..., x_n) is an equilibrium position of the single-level dynamics, where x_i = δ_i + α y_i + β, and the gradient ∇V of the single-level potential (Eqn (1)) vanishes there.
Proof. Since Ẋ = 0, the drag terms vanish from all equations of motion. Substituting Ẋ = 0 and Ẍ = 0 into Eqn (13) yields F_i = 0 for each vertex. Now consider the single-level dynamics (Section 3.1) using the positions x_i obtained from x_i = δ_i + α y_i + β (Eqn (5)). From ẍ_i = F_i we have ẍ_i = 0 for each i. The Euler-Lagrange equations for the single-level layout are (Eqn (3)):
( (d/dt) ∂/∂ẋ_i − ∂/∂x_i ) L = 0. Since (d/dt) ∂L/∂ẋ_i = ẍ_i = 0, we have −∂L/∂x_i = 0.
Using L = T − V and that the kinetic energy T does not depend on x_i, we obtain
∂V/∂x_i = 0
for each i. Therefore ∇V = 0 at this point.
This result can be applied inductively over pairs of finer-coarser graphs, so that Theorem 3.1 holds also for multilevel dynamics.
Dynamic coarsening
As vertices and edges are added to and removed from the base graph G = (V, E), our system dynamically maintains the coarser graphs G_1, G_2, ..., G_m. Each vertex in a coarse graph may correspond to several vertices in the base graph, which is to say, each coarse graph defines a partition of the vertices in the base graph. It is useful to describe coarse vertices as subsets of V. For convenience we define a finest graph G_0 isomorphic to G, with vertices V_0 = {{v} : v ∈ V} and edges E_0 = {({v_1}, {v_2}) : (v_1, v_2) ∈ E}. We have devised an algorithm that efficiently maintains G_{i+1} in response to changes in G_i. By applying this algorithm at each level, the entire chain G_1, G_2, ..., G_m is maintained.
We present a fully dynamic, Las Vegas-style randomized graph algorithm for maintaining a coarsened version of a graph. For graphs of bounded degree, this algorithm requires O(1) operations on average per edge insertion or removal.
Our algorithm is based on the traditional matching approach to coarsening developed by Hendrickson and Leland [HL95]. Recall that a matching of a graph G = (V, E) is a subset of edges M ⊆ E satisfying the restriction that no two edges of M share a common vertex. A matching is maximal if it is not properly contained in another matching. A maximal matching can be found by considering edges one at a time and adding them if they do not conflict with an edge already in M . (The problem of finding a maximal matching should not be confused with that of finding a maximum cardinality matching, a more difficult problem.)
Dynamically maintaining the matching
We begin by making the matching unique. We do this by fixing a total order < on the edges, chosen uniformly at random. (In practice, we compute < using a bijective hash function.) To produce a matching we can consider each edge in ascending order by <, adding it to the matching if it does not conflict with a previously matched edge. If e 1 < e 2 , we say that e 1 has priority over e 2 for matching.
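A minimal sketch of this greedy construction, with the random total order represented by a per-edge priority value (names and representation are illustrative):

```python
def maximal_matching(edges, priority):
    """edges: list of (u, v) pairs; priority: dict edge -> comparable key,
    smaller key = considered earlier. Returns the unique maximal matching
    obtained by scanning edges in this order."""
    matched_vertices = set()
    M = []
    for e in sorted(edges, key=lambda e: priority[e]):
        u, v = e
        if u not in matched_vertices and v not in matched_vertices:
            M.append(e)
            matched_vertices.update((u, v))
    return M
```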
Our basic analysis tool is the edge graph G * = (E, S) whose vertices are the edges of G, and e 1 Se 2 when the edges share a vertex. A set of edges M is a matching on G if and only if M is an independent set of vertices in G * . From G * we can define an edge dependence graph E = (E, →) which is a directed version of G * : e 1 → e 2 ≡ (e 1 < e 2 ) and e 1 Se 2 (share a common vertex)
Since < is a total order, the edge dependence graph E is acyclic. Figure 2 shows an example.
Building a matching by considering the edges in order of < is equivalent to a simple rule: e is matched if and only if there is no edge e ∈ M such that e → e. We can express this rule as a set of match equations whose solution can be maintained by simple change propagation. where by convention ∅ = .
To evaluate the match equations we place the edges to be considered for matching in a priority queue ordered by <, so that highest priority edges are considered first. The match equations can then be evaluated using a straightforward change propagation algorithm: While the priority queue is not empty:
1. Retrieve the highest priority edge e = (v_1, v_2) from the queue and evaluate its match equation m(e).
2. If the value of m(e) has changed, apply match(e) or unmatch(e) accordingly.
Both match(e) and unmatch(e) add the dependent edges of e to the queue, so that changes ripple through the graph. Figure 3 summarizes the basic steps required to maintain the coarser graph (V', E') as edges and vertices are added to and removed from the finer graph.
(Figure 2: an example edge dependence graph on edges e_1, ..., e_6.)
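The change propagation can be sketched as follows. This is a simplified, single-level illustration only: it maintains the match equations under edge insertions and removals using Python's heapq as the priority queue, and it omits the updates to the coarser graph that match(e) and unmatch(e) perform in the actual system; the class and method names are assumptions.

```python
import heapq, random

class DynamicMatching:
    """Maintains the solution of the match equations under edge updates.
    Here a smaller random priority value means higher matching priority."""
    def __init__(self):
        self.prio = {}       # edge -> random priority
        self.adj = {}        # vertex -> set of incident edges
        self.matched = {}    # edge -> current value of its match equation m(e)
        self.queue = []      # heap of (priority, edge) awaiting (re)evaluation

    def _neighbours(self, e):
        u, v = e
        return [f for w in (u, v) for f in self.adj.get(w, ()) if f != e]

    def _dependencies(self, e):   # edges e' with e' -> e (share a vertex, higher priority)
        return [f for f in self._neighbours(e) if self.prio[f] < self.prio[e]]

    def _dependents(self, e):     # edges e' with e -> e'
        return [f for f in self._neighbours(e) if self.prio[f] > self.prio[e]]

    def _enqueue(self, e):
        heapq.heappush(self.queue, (self.prio[e], e))

    def _propagate(self):
        while self.queue:
            _, e = heapq.heappop(self.queue)
            if e not in self.prio:            # stale entry for a removed edge
                continue
            new_val = not any(self.matched.get(f, False) for f in self._dependencies(e))
            if new_val != self.matched.get(e):
                self.matched[e] = new_val      # this is where match(e)/unmatch(e) would act
                for f in self._dependents(e):  # ripple the change downstream
                    self._enqueue(f)

    def insert(self, u, v):
        e = (u, v)
        self.prio[e] = random.random()
        self.adj.setdefault(u, set()).add(e)
        self.adj.setdefault(v, set()).add(e)
        self._enqueue(e)
        self._propagate()

    def remove(self, u, v):
        e = (u, v)
        for w in (u, v):
            self.adj.get(w, set()).discard(e)
        dependents = self._dependents(e)       # compute before forgetting e's priority
        del self.prio[e]
        self.matched.pop(e, None)
        for f in dependents:                   # their match equations no longer mention e
            self._enqueue(f)
        self._propagate()
```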
Complexity of the dynamic matching
The following theorem establishes that for graphs of bounded degree, the expected cost of dynamically maintaining the coarsened graph is O(1) per edge inserted or removed in the fine graph. The cost does not depend on the number of edges in the graph. We first prove a lemma concerning the extent to which updates may need to propagate in the edge dependence graph. As usual for randomized algorithms, we analyze the complexity using "worst-case average time," i.e., the maximum (with respect to choices of edge to add or remove) of the expected time (with expectation taken over random priority assignments). For reasons that will become clear we define the priority order < by assigning to each edge a random real in [0, 1], with 1 being the highest priority. Proof. It is helpful to view the priority assignment ρ as inducing a linear arrangement of the vertices, i.e., we might draw G * by placing its vertices on the real line at their priorities. We obtain a directed graph (E, →) by making edges always point toward zero, i.e., from higher to lower priorities (cf. Figure 4). Note that vertices with low priorities will tend to have high indegree and low outdegree.
We write E[·] for expectation with respect to the random priorities ρ. following paths that move from higher to lower priority vertices. We bound the expected value of N (e) given its priority ρ(e) = η: we can always reach e from itself, and we can follow any edges to lower priorities:
E[N (e) | ρ(e) = η] ≤ 1 + e : eSe Pr(ρ(e ) < η) =η ·E[N (e ) | ρ(e ) < η] (14) Let f (η) = sup e∈E E[N (e) | ρ(e) = η]. Then, E[N (e ) | ρ(e ) < η] ≤ η 0 η −1 f (α)dα(15)
where the integration averages f over a uniform distribution on priorities [0, η). Since the degree of any vertex is ≤ k, there can be at most k terms in the summation of Eqn (14). Combining the above, we obtain
f (η) = sup e∈E E[N (e) | ρ(e) = η] (16) ≤ sup e∈E 1 + e Se ηE[N (e ) | ρ(e ) < η](17)≤ 1 + kη η 0 η −1 f (α)dα (18) ≤ 1 + k η 0 f (α)dα(19)
Therefore f (η) ≤ g(η), where g is the solution to the integral equation
g(η) = 1 + k η 0 g(α)dα(20)
Isolating the integral and differentiating yields the ODE g(η) = k −1 g (η), which has the solution g(η) = e ηk , using the boundary condition g(0) = 1 obtained from Eqn (20). Since 0 ≤ η ≤ 1, g(η) ≤ e k . Therefore, for every e ∈ E, the number of reachable vertices satisfies E[N (e)] ≤ e k .
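Spelled out, the step from Eqn (20) to the closed form is:

```latex
g(\eta) = 1 + k\int_0^{\eta} g(\alpha)\,d\alpha
\;\Longrightarrow\; g'(\eta) = k\,g(\eta),\quad g(0) = 1
\;\Longrightarrow\; g(\eta) = e^{k\eta} \le e^{k} \quad \text{for } 0 \le \eta \le 1 .
```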
Note that the upper bound of O(e^k) vertices reachable depends only on the maximum degree, and not on the size of the graph.
We now prove Theorem 4.1.
Proof. If a graph G = (V, E) has maximum degree d, its edge graph G* has maximum degree 2(d − 1). Inserting or removing an edge will cause us to reconsider the matching of at most e^(2(d−1)) edges on average, by Lemma 4.2.
If a max heap is used to implement the priority queue, O(d e^(2d)) operations are needed to insert and remove these edges. Therefore the randomized complexity is O(d e^(2d)).
In future work we hope to extend our analysis to show that the entire sequence of coarse graphs G 1 , G 2 , . . . , G m can be efficiently maintained. In practice, iterating the algorithm described here appears to work very well.
Implementation
Our system is implemented in C++, using OpenGL and pthreads. The graph animator runs in a separate thread from the user threads. The basic API is simple, with methods newVertex() and newEdge(v 1 , v 2 ) handling vertex and edge creation, and destructors handling their removal.
For static graphs, we have so far successfully used up to six levels of coarsening, with the coarsened graphs computed in advance. With more than six levels we are encountering numerical stability problems that seem to be related to the projection dynamics.
For dynamic graphs we have used three levels (the base graph plus two coarser versions), with the third-level graph being maintained from the actions of the dynamic coarsener for the first-level graph. At four levels we encounter a subtle bug in our dynamic coarsening implementation we have not yet resolved.
Parallelization
Our single-level dynamics implementation is parallelized. Each frame entails two expensive operations: rendering and force calculations. We use the Barnes-Hut tree to divide the force calculations evenly among the worker threads; this results in good locality of reference, since vertices that interact through edge forces or near-field repulsions are often handled by the same thread. Rendering is performed in a separate thread, with time step t being rendered while step t+δt is being computed. The accompanying animations were rendered on an 8-core (2x4) iMac using OpenGL, compiled with g++ at -O3.
Our multilevel dynamics engine is not yet parallelized, so the accompanying demonstrations of this are rendered on a single core. Parallelizing the multilevel dynamics engine remains for future work.
Applications
We include with this paper two demonstrations of applications:
• The emergence of the giant component in a random graph: In Erdős-Rényi G(n, p) random graphs on n vertices, where each edge is present independently with probability p, there are a number of interesting phase transitions: when p < n^(−1) the largest connected component is almost surely of size Θ(log n); when p = n^(−1) it is a.s. of size Θ(n^(2/3)); and when p > n^(−1) it is a.s. of size Θ(n), the "giant component." In this demonstration a large random graph is constructed by preassigning to all n(n−1)/2 possible edges a probability trigger in [0, 1], and then slowly raising a probability parameter p(t) from 0 to 1 as the simulation progresses, with edges 'turning on' when their trigger is exceeded (see the sketch after this list). (8 cores, Euler step).
• Visualization of insertions of random elements into a binary tree, with an increasingly rapid rate of insertions.
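The random-graph construction described in the first demonstration could be generated as in the following sketch (illustrative names; the linear schedule for p(t) is an assumption):

```python
import random
from itertools import combinations

def giant_component_demo_events(n, steps=1000):
    """Yields (p, u, v) edge-insertion events: every possible edge gets a uniform
    trigger in [0, 1] and 'turns on' once the rising parameter p(t) exceeds it."""
    trigger = {e: random.random() for e in combinations(range(n), 2)}
    inserted = set()
    for step in range(steps + 1):
        p = step / steps                      # linear schedule p(t) from 0 to 1
        for e, thr in trigger.items():
            if e not in inserted and p >= thr:
                inserted.add(e)
                yield (p, *e)
```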
In addition, we mention that the graph visualizer was of great use in debugging itself, particularly in tracking down errors in the dynamic matching implementation.
Conclusions
We have described a novel approach to real-time visualization of dynamic graphs. Our approach combines the benefits of multilevel force-directed graph layout with the ability to render rapidly changing graphs in real time. We have also contributed a novel and efficient method for dynamically maintaining coarser versions of a graph.
| 4,783 |
0712.1549
|
2145482252
|
We adapt multilevel, force-directed graph layout techniques to visualizing dynamic graphs in which vertices and edges are added and removed in an online fashion (i.e., unpredictably). We maintain multiple levels of coarseness using a dynamic, randomized coarsening algorithm. To ensure the vertices follow smooth trajectories, we employ dynamics simulation techniques, treating the vertices as point particles. We simulate fine and coarse levels of the graph simultaneously, coupling the dynamics of adjacent levels. Projection from coarser to finer levels is adaptive, with the projection determined by an affine transformation that evolves alongside the graph layouts. The result is a dynamic graph visualizer that quickly and smoothly adapts to changes in a graph.
|
Offline graph animation tools develop an animation from a sequence of key frames (layouts of static graphs). Such systems find layouts for key frame graphs, and then interpolate between the key frames in an appropriate way (e.g. @cite_0 ).
|
{
"abstract": [
"Enabling the user of a graph drawing system to preserve the mental map between two dierent layouts of a graph is a major problem. In this paper we present methods that smoothly transform one drawing of a graph into another without any restrictions to the class of graphs or type of layout algorithm."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2156521678"
]
}
|
Dynamic Multilevel Graph Visualization
|
Our work is motivated by a need to visualize dynamic graphs, that is, graphs from which vertices and edges are being added and removed. Applications include visualizing complex algorithms (our initial motivation), ad hoc wireless networks, databases, monitoring distributed systems, realtime performance profiling, and so forth. Our design concerns are: D1. The system should support online revision of the graph, that is, changes to the graph that are not known in advance. Changes made to the graph may radically alter its structure.
D2. The animation should appear smooth. It should be possible to visually track vertices as they move, avoiding abrupt changes.
D3. Changes made to the graph should appear immediately, and the layout should stabilize rapidly after a change.
D4. The system should produce aesthetically pleasing, good quality layouts.
We make two principal contributions:
1. We adapt multilevel force-directed graph layout algorithms [Wal03] to the problem of dynamic graph layout.
2. We develop and analyze an efficient algorithm for dynamically maintaining the coarser versions of a graph needed for multilevel layout.
Force-directed graph layout
Force-directed layout uses a physics metaphor to find graph layouts [Ead84, KK89, FLM94, FR91]. Each vertex is treated as a point particle in a space (usually R² or R³). There are many variations on how to translate the graph into physics. We make fairly conventional choices, modelling edges as springs which pull connected vertices together. Repulsive forces between all pairs of vertices act to keep the vertices spread out. We use a potential energy V defined by
V = Σ_{(v_i, v_j) ∈ E} (1/2) K ‖x_i − x_j‖²   [spring potential]   +   Σ_{v_i, v_j ∈ V, v_i ≠ v_j} f_0 / (R + ‖x_i − x_j‖)   [repulsion potential]    (1)
where x i is the position of vertex v i , K is a spring constant, f 0 is a repulsion force constant, and R is a small constant used to avoid singularities.
To minimize the energy of Eqn (1), one typically uses 'trust region' methods, where the layout is advanced in the general direction of the gradient ∇V , but restricting the distance by which vertices may move in each step. The maximum move distance is often governed by an adaptive 'temperature' parameter as in annealing methods, so that step sizes decrease as the iteration converges.
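To make the step concrete, here is a minimal C++ sketch of one such temperature-limited descent step on the energy of Eqn (1). The Vec3/Graph containers, the brute-force O(n^2) repulsion loop, and the cooling schedule are illustrative assumptions, not the authors' implementation.

```cpp
// Sketch only: one temperature-limited descent step on the layout energy of Eqn (1).
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { double x = 0, y = 0, z = 0; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double norm(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

struct Graph { int n = 0; std::vector<std::pair<int, int>> edges; };

void descentStep(const Graph& g, std::vector<Vec3>& pos,
                 double K, double f0, double R, double& temperature) {
    std::vector<Vec3> grad(g.n);
    // Spring terms: d/dx_i of (1/2) K ||x_i - x_j||^2 is K (x_i - x_j).
    for (auto [i, j] : g.edges) {
        Vec3 d = sub(pos[i], pos[j]);
        grad[i].x += K * d.x; grad[i].y += K * d.y; grad[i].z += K * d.z;
        grad[j].x -= K * d.x; grad[j].y -= K * d.y; grad[j].z -= K * d.z;
    }
    // Repulsion terms: d/dx_i of f0 / (R + ||x_i - x_j||), brute force over all pairs.
    for (int i = 0; i < g.n; ++i)
        for (int j = 0; j < g.n; ++j) {
            if (i == j) continue;
            Vec3 d = sub(pos[i], pos[j]);
            double r = norm(d);
            if (r < 1e-12) continue;
            double mag = -f0 / ((R + r) * (R + r));   // derivative of f0/(R+r) w.r.t. r
            grad[i].x += mag * d.x / r; grad[i].y += mag * d.y / r; grad[i].z += mag * d.z / r;
        }
    // Move against the gradient, capping the step length at the current temperature.
    for (int i = 0; i < g.n; ++i) {
        double len = norm(grad[i]);
        if (len < 1e-12) continue;
        double step = std::min(len, temperature);
        pos[i].x -= step * grad[i].x / len;
        pos[i].y -= step * grad[i].y / len;
        pos[i].z -= step * grad[i].z / len;
    }
    temperature *= 0.95;   // simple cooling schedule (one of many possibilities)
}
```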
One challenge in force-directed layout is that the repulsive forces that act to evenly space the vertices become weaker as the graph becomes larger. This results in large graph layouts converging slowly, a problem addressed by multilevel methods.
Multilevel graph layout algorithms [Wal03,KCH02] operate by repeatedly 'coarsening' a large graph to obtain a sequence of graphs G 0 , G 1 , . . . , G m , where each G i+1 has fewer vertices and edges than G i , but is structurally similar. For a pair (G i , G i+1 ), we refer to G i as the finer graph and G i+1 as the coarser graph. The coarsest graph G m is laid out using standard forcedirected layout. This layout is interpolated (projected) to produce an initial layout for the finer graph G m−1 . Once the force-directed layout of G m−1 converges, it is interpolated to provide an initial layout for G m−2 , and so forth.
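A minimal sketch of that coarsen, lay-out, and project loop follows. The coarsen(), forceDirectedLayout() and interpolate() helpers are assumed stand-ins for the matching-based coarsener and the single-level layout discussed in this paper, not an existing API.

```cpp
// Sketch of the static multilevel scheme: coarsen repeatedly, lay out the
// coarsest graph, then interpolate each layout down one level and refine it.
#include <array>
#include <utility>
#include <vector>

struct Graph  { int n = 0; std::vector<std::pair<int, int>> edges; };
struct Layout { std::vector<std::array<double, 3>> pos; };

Graph  coarsen(const Graph& g);                       // assumed: contract a maximal matching
Layout forceDirectedLayout(const Graph& g, Layout initial);
Layout interpolate(const Graph& coarse, const Layout& coarseLayout,
                   const Graph& fine);                // assumed: project positions down a level

Layout multilevelLayout(const Graph& g0, int levels) {
    std::vector<Graph> G;
    G.push_back(g0);
    for (int i = 1; i <= levels; ++i)                 // build G_0, G_1, ..., G_m
        G.push_back(coarsen(G.back()));

    Layout layout = forceDirectedLayout(G.back(), Layout{});    // coarsest graph G_m
    for (int i = levels - 1; i >= 0; --i) {
        Layout initial = interpolate(G[i + 1], layout, G[i]);   // project to level i
        layout = forceDirectedLayout(G[i], initial);            // refine at level i
    }
    return layout;
}
```

The dynamic scheme developed below keeps all of these levels alive and simulates them simultaneously instead of converging them one at a time.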
Our approach
Roughly speaking, we develop a dynamic version of Walshaw's multilevel force-directed layout algorithm [Wal03].
Because of criterion D3, that changes to the graph appear immediately, we focused on approaches in which the optimization process is visualized directly, i.e., the vertex positions rendered reflect the current state of the energy minimization process.
A disadvantage of the gradient-following algorithms described above is that the layout can repeatedly overshoot a minimum of the potential function, resulting in zig-zagging. This is unimportant for offline layouts, but can result in jerky trajectories if the layout process is being animated. We instead chose a dynamics-based approach in which vertices have momentum. Damping is used to minimize oscillations. This ensures that vertices follow smooth trajectories (criterion D2).
We use standard dynamics techniques to simultaneously simulate all levels of coarseness of a graph as one large dynamical system. We couple the dynamics of each graph V i to its coarser version V i+1 so that 'advice' about layouts can propagate from coarser to finer graphs.
Our approach entailed two major technical challenges:
1. How to maintain coarser versions of the graph as vertices and edges are added and removed.
2. How to couple the dynamics of finer and coarser graphs so that 'layout advice' can quickly propagate from coarser to finer graphs.
We have addressed the first challenge by developing a fully dynamic, Las Vegas-style randomized algorithm that requires O(1) operations per edge insertion or removal to maintain a coarser version of a bounded degree graph (Section 4).
We address the second challenge by using coarse graph vertices as inertial reference frames for vertices in the fine graph (Section 3). The projection from coarser to finer graphs is given dynamics, and evolves simultaneously with the vertex positions, converging to a least-squares fit of the coarse graph onto the finer graph (Section 3.2.1). We introduce time dilations between coarser and finer graphs, which reduces the problem of the finer graph reacting to cancel motions of the coarser graph (Section 3.2.2).
Demonstrations
Accompanying this paper are the following movies. 2 All movies are realtime screen captures on an 8-core Mac. Unless otherwise noted, the movies use one core and 4th order Runge-Kutta integration.
• ev1049_cube.mov: Layout of a 10x10x10 cube graph using single-level dynamics (Section 3.1), 8 cores, and Euler time steps.
• ev1049_coarsening.mov: Demonstration of the dynamic coarsening algorithm (Section 4).
• ev1049_twolevel.mov: Two-level dynamics, showing the projection dynamics and dynamic coarsening.
• ev1049_threelevel.mov: Three-level dynamics showing a graph moving quickly through assorted configurations. The coarsest graph is maintained automatically from modifications the first-level coarsener makes to the second graph.
• ev1049_compare.mov: Side-by-side comparison of single-level vs. three-level dynamics, illustrating the quicker convergence achieved by multilevel dynamics.
• ev1049_multilevel.mov: Showing quick convergence times using multilevel dynamics (4-6 levels) on static graphs being reset to random vertex positions.
• ev1049_randomgraph.mov: Visualization of the emergence of the giant component in a random graph (Section 6). (8 cores, Euler step).
• ev1049_tree.mov: Visualization of rapid insertions into a binary tree (8 cores, Euler step).
Figure 1: Still frames from the demonstration movies accompanying this paper: (a) ev1049_cube, (b) ev1049_coarsening, (c) ev1049_twolevel, (d) ev1049_threelevel, (e) ev1049_compare, (f) ev1049_multilevel, (g) ev1049_randomgraph, (h) ev1049_tree.
Layout Dynamics
We use Lagrangian dynamics to derive the equations of motion for the simulation. Lagrangian dynamics is a bit excessive for a simple springs-and-repulsion graph layout. However, we have found that convergence times are greatly improved by dynamically adapting the interpolation between coarser and finer graphs. For this we use generalized forces, which are easily managed with a Lagrangian approach.
As is conventional, we writeẋ i for the velocity of vertex i, andẍ i for its acceleration. We take all masses to be 1, so that velocity and momentum are interchangeable.
In addition to the potential energy V (Eqn (1)), we define a kinetic energy T . For a single graph, this is simply:
$$T = \sum_{v_i \in V} \tfrac{1}{2}\,\|\dot{x}_i\|^2 \tag{2}$$
Roughly speaking, T describes channels through which potential energy (layout badness) can be converted to kinetic energy (vertex motion). Kinetic energy is then dissipated through friction, which results in the system settling into a local minimum of the potential V . We incorporate friction by adding extra terms to the basic equations of motion.
The equations of motion are obtained from the Euler-Lagrange equation:
$$\left( \frac{d}{dt}\frac{\partial}{\partial \dot{x}_i} - \frac{\partial}{\partial x_i} \right) L = 0 \tag{3}$$
where the quantity L = T − V is the Lagrangian.
Single level dynamics
The coarsest graph has straightforward dynamics. Substituting the definitions of Eqns (1) and (2) into the Euler-Lagrange equation yields the basic equation of motion $\ddot{x}_i = F_i$ for a vertex, where $F_i$ is the net force:
$$\ddot{x}_i = \sum_{(v_i, v_j) \in E} \underbrace{-K (x_i - x_j)}_{\text{spring forces}} \;+\; \sum_{v_j \neq v_i} \underbrace{\frac{f_0}{(R + \|x_i - x_j\|)^2} \cdot \frac{x_i - x_j}{\|x_i - x_j\|}}_{\text{repulsion forces}} \tag{4}$$
We calculate the pairwise repulsion forces in O(n log n) time (with n = |V |, the number of vertices) using the Barnes-Hut algorithm [BH86].
The spring and repulsion forces are supplemented by a damping force defined by F d i = −dẋ i where d is a constant. Our system optionally adds a 'gravity' force that encourages directed edges to point in a specified direction (e.g., down).
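The resulting time stepping can be sketched as follows. netForce is an assumed callback returning the spring and repulsion forces of Eqn (4) (for example via Barnes-Hut), d is the damping constant, and the explicit Euler step is the simpler of the two integrators mentioned later in the paper.

```cpp
// Sketch: advancing the single-level dynamics with an explicit Euler step.
#include <cstddef>
#include <functional>
#include <vector>

struct Vec3 { double x = 0, y = 0, z = 0; };

struct State {
    std::vector<Vec3> x;   // positions
    std::vector<Vec3> v;   // velocities (mass 1, so momentum == velocity)
};

void eulerStep(State& s, double d, double dt,
               const std::function<Vec3(const State&, std::size_t)>& netForce) {
    for (std::size_t i = 0; i < s.x.size(); ++i) {
        Vec3 F = netForce(s, i);
        // total acceleration = F - d * v : damping dissipates kinetic energy
        Vec3 a{F.x - d * s.v[i].x, F.y - d * s.v[i].y, F.z - d * s.v[i].z};
        s.v[i].x += dt * a.x;        s.v[i].y += dt * a.y;        s.v[i].z += dt * a.z;
        s.x[i].x += dt * s.v[i].x;   s.x[i].y += dt * s.v[i].y;   s.x[i].z += dt * s.v[i].z;
    }
}
```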
Two-level dynamics
We now describe how the dynamics of a graph interacts with its coarser version. For notational clarity we write y i for the position of the coarse vertex corresponding to vertex i, understanding that each vertex in the coarse graph may correspond to multiple vertices in the finer graph. 3 In Walshaw's static multilevel layout algorithm [Wal03], each vertex x i simply uses as its starting position y i , the position of its coarser version. To adapt this idea to a dynamic setting, we begin by defining the position of x i to be y i plus some displacement δ i , i.e.,:
x i = δ i + y i
However, in practice this does not work as well as one might hope, and convergence is faster if one performs some scaling from the coarse to fine graph, for example
x i = δ i + ay i
A challenge in doing this is that the appropriate scaling depends on the characteristics of the particular graph. Suppose the coarse graph roughly halves the number of vertices in the fine graph. If the fine graph is, for example, a three-dimensional cube arrangement of vertices with 6-neighbours, then the expansion ratio needed will be ≈ 2 1/3 or about 1.26; a two-dimensional square arrangement of vertices needs an expansion ratio of ≈ √ 2 or about 1.41. Since the graph is dynamic, the best expansion ratio can also change over time. Moreover, the optimal amount of scaling might be different for each axis, and there might be differences in how the fine and coarse graph are oriented in space.
Such considerations led us to consider affine transformations from the coarse to fine graph. We use projections of the form
$$x_i = \delta_i + \alpha y_i + \beta \tag{5}$$
where α is a linear transformation (a 3x3 matrix) and β is a translation. The variables (α, β) are themselves given dynamics, so that the projection converges to a least-squares fit of the coarse graph to the fine graph.
Frame dynamics
We summarize here the derivation of the time evolution equations for the affine transformation (α, β). Conceptually, we think of the displacements δ i as "pulling" on the transformation: if all the displacements are to the right, the transformation will evolve to shift the coarse graph to the right; if they are all outward, the transformation will expand the projection of the coarse graph, and so forth. In this way the finer graph 'pulls' the projection of the coarse graph around it as tightly as possible.
We derive the equations for $\ddot{\alpha}$ and $\ddot{\beta}$ using Lagrangian dynamics. To simplify the derivation we pretend that both graph layouts are stationary, and that the displacements $\delta_i$ behave like springs between the fine graph and the projected coarse graph, acting on $\alpha$ and $\beta$ via 'generalized forces.' By setting up appropriate potential and kinetic energy terms, the Euler-Lagrange equations yield:
$$\ddot{\alpha} = \frac{1}{n} \sum_i \left( \delta_i y_i^T + y_i \delta_i^T \right) \tag{6}$$
$$\ddot{\beta} = \frac{1}{n} \sum_i \delta_i \tag{7}$$
To damp oscillations and dissipate energy we introduce damping terms of $-d_\alpha \dot{\alpha}$ and $-d_\beta \dot{\beta}$.
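A sketch of this frame update, computing the accelerations of Eqns (6)-(7) plus the damping terms, is given below. The Mat3/Vec3 containers and names are illustrative assumptions.

```cpp
// Sketch of the frame-dynamics update of Eqns (6)-(7): the displacements
// delta_i "pull" on the affine map (alpha, beta); alpha is a row-major 3x3 matrix.
#include <array>
#include <cstddef>
#include <vector>

struct Vec3 { double v[3] = {0, 0, 0}; };
using Mat3 = std::array<double, 9>;

struct Frame {
    Mat3 alpha{}, alphaDot{};
    Vec3 beta, betaDot;
};

void frameAccelerations(const std::vector<Vec3>& delta,   // displacements delta_i
                        const std::vector<Vec3>& y,       // coarse positions y_i
                        const Frame& f, double d_alpha, double d_beta,
                        Mat3& alphaDdot, Vec3& betaDdot) {
    const double n = static_cast<double>(delta.size());
    alphaDdot.fill(0.0);
    betaDdot = Vec3{};
    for (std::size_t i = 0; i < delta.size(); ++i) {
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c)
                // (1/n) * sum_i (delta_i y_i^T + y_i delta_i^T), Eqn (6)
                alphaDdot[3 * r + c] +=
                    (delta[i].v[r] * y[i].v[c] + y[i].v[r] * delta[i].v[c]) / n;
            betaDdot.v[r] += delta[i].v[r] / n;   // (1/n) * sum_i delta_i, Eqn (7)
        }
    }
    // Damping terms -d_alpha * alphaDot and -d_beta * betaDot
    for (int k = 0; k < 9; ++k) alphaDdot[k] -= d_alpha * f.alphaDot[k];
    for (int r = 0; r < 3; ++r) betaDdot.v[r] -= d_beta * f.betaDot.v[r];
}
```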
Time dilation
We now turn to the equations of motion for vertices in the two-level dynamics. The equations for $\dot{\delta}_i$ and $\ddot{\delta}_i$ are obtained by differentiating Eqn (5):
$$x_i = \delta_i + \beta + \alpha y_i \qquad \text{(projected position)} \tag{8}$$
$$\dot{x}_i = \dot{\delta}_i + \dot{\beta} + \dot{\alpha} y_i + \alpha \dot{y}_i \qquad \text{(projected velocity)} \tag{9}$$
$$\ddot{x}_i = \ddot{\delta}_i + \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\dot{y}_i + \alpha\ddot{y}_i \qquad \text{(projected acceleration)} \tag{10}$$
Let $F_i$ be the forces acting on the vertex $x_i$. Substituting Eqn (10) into $\ddot{x}_i = F_i$ and rearranging, one obtains an equation of motion for the displacement:
$$\ddot{\delta}_i = F_i - \big( \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\dot{y}_i + \alpha\ddot{y}_i \big) \tag{11}$$
The projected acceleration of the coarse vertex can be interpreted as a 'pseudoforce' causing the vertex to react against motions of its coarser version. If Eqn (11) were used as the equation of motion, the coarse and fine graph would evolve independently, with no interaction. (We have used this useful property to check correctness of some aspects of our system.)
The challenge, then, is how to adjust Eqn (11) in some meaningful way to couple the finer and coarse graph dynamics. Our solution is based on the idea that the coarser graph layout evolves similarly to the finer graph, but on a different time scale: the coarse graph generally converges much more quickly. To achieve a good fit between the coarse and fine graph we might slow down the evolution of the coarse graph. Conceptually, we try to do the opposite, speeding up evolution of the fine graph to achieve a better fit. Rewriting Eqn (8) to make each variable an explicit function of time, and incorporating a time dilation, we obtain
$$x_i(t) = \delta_i(t) + \beta(t) + \alpha(t)\, y_i(\phi t) \qquad \text{(projected position)} \tag{12}$$
where φ is a time dilation factor to account for the differing time scales of the coarse and fine graph. Carrying this through to the acceleration equation yields the equation of motion
$$\ddot{\delta}_i = F_i - \big( \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\phi\,\dot{y}_i + \alpha\phi^2\,\ddot{y}_i \big) \tag{13}$$
If, for example, the coarser graph layout converged at a rate twice that of the finer graph, we might take $\phi = \tfrac{1}{2}$, with the effect that we would discount the projected acceleration $\ddot{y}_i$ by a factor of $\phi^2 = \tfrac{1}{4}$. In practice we have used values of $0.1 \le \phi \le 0.25$. Applied across multiple levels of coarse graphs, we call this approach multilevel time dilation.
In addition to the spring and repulsion forces in $F_i$, we include a drag term $F^d_i = -d\,\dot{\delta}_i$ in the forces of Eqn (13).
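The displacement update of Eqn (13), including the drag term, might look as follows in code; all types and names are illustrative assumptions.

```cpp
// Sketch of Eqn (13): the displacement acceleration for a fine vertex, with the
// coarse parent's projected acceleration discounted by the time dilation phi.
#include <array>

struct Vec3 { double v[3] = {0, 0, 0}; };
using Mat3 = std::array<double, 9>;   // row-major 3x3

static Vec3 matVec(const Mat3& m, const Vec3& x) {
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) r.v[i] += m[3 * i + j] * x.v[j];
    return r;
}

Vec3 deltaAcceleration(const Vec3& F,            // spring + repulsion forces on x_i
                       const Vec3& deltaDot,     // current displacement velocity
                       const Mat3& alpha, const Mat3& alphaDot, const Mat3& alphaDdot,
                       const Vec3& betaDdot,
                       const Vec3& y, const Vec3& yDot, const Vec3& yDdot,
                       double phi, double d) {
    Vec3 a1 = matVec(alphaDdot, y);              // alpha'' y_i
    Vec3 a2 = matVec(alphaDot, yDot);            // alpha' y_i'
    Vec3 a3 = matVec(alpha, yDdot);              // alpha y_i''
    Vec3 out;
    for (int k = 0; k < 3; ++k)
        out.v[k] = F.v[k]
                 - (betaDdot.v[k] + a1.v[k] + 2.0 * phi * a2.v[k] + phi * phi * a3.v[k])
                 - d * deltaDot.v[k];            // drag on the displacement
    return out;
}
```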
Multilevel dynamics
To handle multiple levels of coarse graphs, we iterate the two-level dynamics. The dynamics simulation simultaneously integrates the following equations:
• The equations of motion for the vertices in the coarsest graph, using the single-level dynamics of Section 3.1.
• The equations for the projection α, β between each coarser and finer graph pair (Section 3.2.1).
• The equations of motionδ i for the displacements of vertices in the finer graphs, using the two-level dynamics of Section 3.2.
In our implementation, the equations are integrated using an explicit, fourth-order Runge-Kutta method. (We also have a simple Euler-step method, which is fast, but not as reliably stable.)
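For reference, a generic fourth-order Runge-Kutta step over a flattened state vector looks like the sketch below; deriv() is an assumed callback that evaluates all of the equations of motion listed above for the combined state (vertex positions and velocities, displacements, and the frames).

```cpp
// Sketch of an explicit RK4 step over a flattened state vector.
#include <cstddef>
#include <functional>
#include <vector>

using StateVec = std::vector<double>;
using Deriv = std::function<StateVec(const StateVec&)>;

static StateVec axpy(const StateVec& x, const StateVec& y, double a) {
    StateVec r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = x[i] + a * y[i];
    return r;
}

StateVec rk4Step(const StateVec& s, double dt, const Deriv& deriv) {
    StateVec k1 = deriv(s);
    StateVec k2 = deriv(axpy(s, k1, dt / 2));
    StateVec k3 = deriv(axpy(s, k2, dt / 2));
    StateVec k4 = deriv(axpy(s, k3, dt));
    StateVec out(s.size());
    for (std::size_t i = 0; i < s.size(); ++i)
        out[i] = s[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
    return out;
}
```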
Equilibrium positions of the multilevel dynamics
We prove here that a layout found using the multilevel dynamics is an equilibrium position of the potential energy function of Eqn (1). This establishes that the multilevel approach does not introduce spurious minima, and can be expected to converge to the same layouts as a single-level layout, only faster.
Theorem 3.1. Let $(X, \dot{X})$ be an equilibrium position of the two-level dynamics, where $X = (\delta_1, \delta_2, \ldots, \alpha, \beta, y_1, y_2, \ldots)$ and $\dot{X} = \ddot{X} = 0$. Then $(x_1, x_2, \ldots, x_n)$, where $x_i = \delta_i + \alpha y_i + \beta$, is an equilibrium position of the single-level dynamics, and the gradient $\nabla V$ of the single-level potential (Eqn (1)) vanishes there.
Proof. Since $\dot{X} = 0$, the drag terms vanish from all equations of motion. Substituting $\dot{X} = 0$ and $\ddot{X} = 0$ into Eqn (13) yields $F_i = 0$ for each vertex. Now consider the single-level dynamics (Section 3.1) using the positions $x_i$ obtained from $x_i = \delta_i + \alpha y_i + \beta$ (Eqn (5)). From $\ddot{x}_i = F_i$ we have $\ddot{x}_i = 0$ for each $i$. The Euler-Lagrange equations for the single-level layout are (Eqn (3)):
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i} - \frac{\partial L}{\partial x_i} = 0.$$
Since $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i} = \ddot{x}_i = 0$, we have $\frac{\partial L}{\partial x_i} = 0$.
Using L = T − V and that the kinetic energy T does not depend on x i , we obtain
$$\frac{\partial V}{\partial x_i} = 0$$
for each i. Therefore ∇V = 0 at this point.
This result can be applied inductively over pairs of finer-coarser graphs, so that Theorem 3.1 holds also for multilevel dynamics.
Dynamic coarsening
As vertices and edges are added to and removed from the base graph G = (V, E), our system dynamically maintains the coarser graphs G_1, G_2, . . . , G_m. Each vertex in a coarse graph may correspond to several vertices in the base graph, which is to say, each coarse graph defines a partition of the vertices in the base graph. It is useful to describe coarse vertices as subsets of V. For convenience we define a finest graph G_0 isomorphic to G, with vertices V_0 = {{v} : v ∈ V} and edges E_0 = {({v_1}, {v_2}) : (v_1, v_2) ∈ E}. We have devised an algorithm that efficiently maintains G_{i+1} in response to changes in G_i. By applying this algorithm at each level, the entire chain G_1, G_2, . . . , G_m is maintained.
We present a fully dynamic, Las Vegas-style randomized graph algorithm for maintaining a coarsened version of a graph. For graphs of bounded degree, this algorithm requires O(1) operations on average per edge insertion or removal.
Our algorithm is based on the traditional matching approach to coarsening developed by Hendrickson and Leland [HL95]. Recall that a matching of a graph G = (V, E) is a subset of edges M ⊆ E satisfying the restriction that no two edges of M share a common vertex. A matching is maximal if it is not properly contained in another matching. A maximal matching can be found by considering edges one at a time and adding them if they do not conflict with an edge already in M . (The problem of finding a maximal matching should not be confused with that of finding a maximum cardinality matching, a more difficult problem.)
Dynamically maintaining the matching
We begin by making the matching unique. We do this by fixing a total order < on the edges, chosen uniformly at random. (In practice, we compute < using a bijective hash function.) To produce a matching we can consider each edge in ascending order by <, adding it to the matching if it does not conflict with a previously matched edge. If e 1 < e 2 , we say that e 1 has priority over e 2 for matching.
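A sketch of this static construction is given below, with a simple mixing hash standing in for the random total order (the authors' bijective hash function is not specified here, so the priority function is an assumption).

```cpp
// Sketch: a fixed pseudo-random priority per edge, and a greedy pass in
// ascending priority order that yields the unique maximal matching.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Edge { int u, v; };

static std::uint64_t edgePriority(const Edge& e) {
    std::uint64_t k = (static_cast<std::uint64_t>(e.u) << 32) ^ static_cast<std::uint32_t>(e.v);
    k ^= k >> 33; k *= 0xff51afd7ed558ccdULL; k ^= k >> 33;   // bit mixing (illustrative)
    return k;
}

std::vector<int> greedyMatching(const std::vector<Edge>& edges, int numVertices) {
    std::vector<int> order(edges.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return edgePriority(edges[a]) < edgePriority(edges[b]);   // e1 < e2: e1 considered first
    });
    std::vector<bool> vertexMatched(numVertices, false);
    std::vector<int> matched;                                     // indices of matched edges
    for (int idx : order) {
        const Edge& e = edges[idx];
        if (!vertexMatched[e.u] && !vertexMatched[e.v]) {          // no conflict with earlier match
            vertexMatched[e.u] = vertexMatched[e.v] = true;
            matched.push_back(idx);
        }
    }
    return matched;
}
```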
Our basic analysis tool is the edge graph G * = (E, S) whose vertices are the edges of G, and e 1 Se 2 when the edges share a vertex. A set of edges M is a matching on G if and only if M is an independent set of vertices in G * . From G * we can define an edge dependence graph E = (E, →) which is a directed version of G * : e 1 → e 2 ≡ (e 1 < e 2 ) and e 1 Se 2 (share a common vertex)
Since < is a total order, the edge dependence graph E is acyclic. Figure 2 shows an example.
Building a matching by considering the edges in order of < is equivalent to a simple rule: e is matched if and only if there is no matched edge e' with e' → e. We can express this rule as a set of match equations whose solution can be maintained by simple change propagation:
$$m(e) = \neg \bigvee_{e' \to e} m(e')$$
where by convention the empty disjunction $\bigvee \emptyset$ is false.
To evaluate the match equations we place the edges to be considered for matching in a priority queue ordered by <, so that highest priority edges are considered first. The match equations can then be evaluated using a straightforward change propagation algorithm: While the priority queue is not empty:
1. Retrieve the highest priority edge e = (v_1, v_2) from the queue and evaluate its match equation m(e).
2. If the value of m(e) has changed, update the matching by calling match(e) or unmatch(e) as appropriate.
Both match(e) and unmatch(e) add the dependent edges of e to the queue, so that changes ripple through the graph. Figure 3 summarizes the basic steps required to maintain the coarser graph (V', E') as edges and vertices are added and removed to the finer graph.
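A minimal sketch of that propagation loop follows; the adjacency table, the hash-based priority, and the applyMatch()/applyUnmatch() hooks that would update the coarser graph are all illustrative assumptions, only the shape of the propagation comes from the description above.

```cpp
// Sketch of change propagation over the match equations.
#include <cstdint>
#include <queue>
#include <unordered_map>
#include <vector>

struct Matcher {
    std::unordered_map<int, std::vector<int>> neighbours;  // edges sharing a vertex with e
    std::unordered_map<int, bool> matched;                 // current m(e)

    static std::uint64_t priority(int e) {                 // fixed pseudo-random order <
        std::uint64_t k = static_cast<std::uint64_t>(e) + 0x9e3779b97f4a7c15ULL;
        k ^= k >> 30; k *= 0xbf58476d1ce4e5b9ULL; k ^= k >> 27;
        return k;                                          // smaller value = higher priority
    }
    bool isMatched(int e) const {
        auto it = matched.find(e);
        return it != matched.end() && it->second;
    }
    // m(e): e is matched iff no higher-priority neighbour e' (e' -> e) is matched.
    bool evaluate(int e) const {
        auto it = neighbours.find(e);
        if (it == neighbours.end()) return true;
        for (int d : it->second)
            if (priority(d) < priority(e) && isMatched(d)) return false;
        return true;
    }
    void applyMatch(int /*e*/)   {}   // hook: contract e in the coarser graph (assumed)
    void applyUnmatch(int /*e*/) {}   // hook: undo the contraction (assumed)

    void propagate(std::vector<int> dirty) {
        auto cmp = [](int a, int b) { return priority(a) > priority(b); };  // pop highest priority
        std::priority_queue<int, std::vector<int>, decltype(cmp)> q(cmp, std::move(dirty));
        while (!q.empty()) {
            int e = q.top(); q.pop();
            bool value = evaluate(e);
            if (value == isMatched(e)) continue;           // unchanged: propagation stops here
            matched[e] = value;
            if (value) applyMatch(e); else applyUnmatch(e);
            for (int d : neighbours[e]) q.push(d);         // re-examine dependent edges
        }
    }
};
```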
Complexity of the dynamic matching
The following theorem establishes that for graphs of bounded degree, the expected cost of dynamically maintaining the coarsened graph is O(1) per edge inserted or removed in the fine graph. The cost does not depend on the number of edges in the graph. As usual for randomized algorithms, we analyze the complexity using "worst-case average time," i.e., the maximum (with respect to choices of edge to add or remove) of the expected time (with expectation taken over random priority assignments). For reasons that will become clear, we define the priority order < by assigning to each edge a random real priority ρ(e) in [0, 1], with 1 being the highest priority.
Theorem 4.1. If the graph G = (V, E) has maximum degree d, then the matching (and hence the coarser graph) can be maintained at an expected cost of $O(d\,e^{2d})$ operations per edge insertion or removal; for bounded d this is O(1).
We first prove a lemma concerning the extent to which updates may need to propagate in the edge dependence graph.
Lemma 4.2. Suppose the edge graph G* has maximum degree k, and let N(e) denote the number of vertices of the edge dependence graph reachable from e along paths that always move to lower-priority vertices. Then $E[N(e)] \le e^k$ for every edge e.
Proof. It is helpful to view the priority assignment ρ as inducing a linear arrangement of the vertices, i.e., we might draw G* by placing its vertices on the real line at their priorities. We obtain a directed graph (E, →) by making edges always point toward zero, i.e., from higher to lower priorities (cf. Figure 4). Note that vertices with low priorities will tend to have high indegree and low outdegree.
We write $E[\cdot]$ for expectation with respect to the random priorities $\rho$; recall that $N(e)$ counts the vertices reachable from $e$ by following paths that move from higher to lower priority vertices. We bound the expected value of $N(e)$ given its priority $\rho(e) = \eta$: we can always reach $e$ from itself, and we can follow any edges to lower priorities:
$$E[N(e) \mid \rho(e) = \eta] \;\le\; 1 + \sum_{e' : e\,S\,e'} \underbrace{\Pr(\rho(e') < \eta)}_{=\,\eta} \cdot E[N(e') \mid \rho(e') < \eta] \tag{14}$$
Let $f(\eta) = \sup_{e \in E} E[N(e) \mid \rho(e) = \eta]$. Then
$$E[N(e') \mid \rho(e') < \eta] \;\le\; \int_0^\eta \eta^{-1} f(\alpha)\, d\alpha \tag{15}$$
where the integration averages f over a uniform distribution on priorities [0, η). Since the degree of any vertex is ≤ k, there can be at most k terms in the summation of Eqn (14). Combining the above, we obtain
$$f(\eta) = \sup_{e \in E} E[N(e) \mid \rho(e) = \eta] \tag{16}$$
$$\le \sup_{e \in E} \Big( 1 + \sum_{e' S e} \eta\, E[N(e') \mid \rho(e') < \eta] \Big) \tag{17}$$
$$\le 1 + k\eta \int_0^\eta \eta^{-1} f(\alpha)\, d\alpha \tag{18}$$
$$\le 1 + k \int_0^\eta f(\alpha)\, d\alpha \tag{19}$$
Therefore f (η) ≤ g(η), where g is the solution to the integral equation
$$g(\eta) = 1 + k \int_0^\eta g(\alpha)\, d\alpha \tag{20}$$
Isolating the integral and differentiating yields the ODE $g(\eta) = k^{-1} g'(\eta)$, which has the solution $g(\eta) = e^{\eta k}$, using the boundary condition $g(0) = 1$ obtained from Eqn (20). Since $0 \le \eta \le 1$, $g(\eta) \le e^k$. Therefore, for every $e \in E$, the number of reachable vertices satisfies $E[N(e)] \le e^k$.
Note that the upper bound of $O(e^k)$ vertices reachable depends only on the maximum degree, and not on the size of the graph.
We now prove Theorem 4.1.
Proof. If a graph G = (V, E) has maximum degree d, its edge graph G* has maximum degree 2(d − 1). Inserting or removing an edge will cause us to reconsider the matching of at most $e^{2(d-1)}$ edges on average, by Lemma 4.2.
If a max heap is used to implement the priority queue, $O(d\,e^{2d})$ operations are needed to insert and remove these edges. Therefore the randomized complexity is $O(d\,e^{2d})$.
In future work we hope to extend our analysis to show that the entire sequence of coarse graphs G 1 , G 2 , . . . , G m can be efficiently maintained. In practice, iterating the algorithm described here appears to work very well.
Implementation
Our system is implemented in C++, using OpenGL and pthreads. The graph animator runs in a separate thread from the user threads. The basic API is simple, with methods newVertex() and newEdge(v 1 , v 2 ) handling vertex and edge creation, and destructors handling their removal.
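A hypothetical usage of this API is sketched below; only newVertex(), newEdge() and removal-by-destruction come from the text, while the class names and ownership model are assumptions.

```cpp
// Illustrative client code for the graph animator API described above.
#include <memory>

struct Vertex {};                                 // opaque handles (assumed)
struct Edge {};

struct GraphAnimator {
    std::unique_ptr<Vertex> newVertex();
    std::unique_ptr<Edge> newEdge(Vertex& a, Vertex& b);
};

void demo(GraphAnimator& g) {
    auto a = g.newVertex();
    auto b = g.newVertex();
    auto e = g.newEdge(*a, *b);   // the new edge appears in the animation immediately
    e.reset();                    // destroying the handle removes the edge
    b.reset();                    // ...and likewise the vertex
}
```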
For static graphs, we have so far successfully used up to six levels of coarsening, with the coarsened graphs computed in advance. With more than six levels we are encountering numerical stability problems that seem to be related to the projection dynamics.
For dynamic graphs we have used three levels (the base graph plus two coarser versions), with the third-level graph being maintained from the actions of the dynamic coarsener for the first-level graph. At four levels we encounter a subtle bug in our dynamic coarsening implementation we have not yet resolved.
Parallelization
Our single-level dynamics implementation is parallelized. Each frame entails two expensive operations: rendering and force calculations. We use the Barnes-Hut tree to divide the force calculations evenly among the worker threads; this results in good locality of reference, since vertices that interact through edge forces or near-field repulsions are often handled by the same thread. Rendering is performed in a separate thread, with time step t being rendered while step t+δt is being computed. The accompanying animations were rendered on an 8-core (2x4) iMac using OpenGL, compiled with g++ at -O3.
Our multilevel dynamics engine is not yet parallelized, so the accompanying demonstrations of this are rendered on a single core. Parallelizing the multilevel dynamics engine remains for future work.
Applications
We include with this paper two demonstrations of applications:
• The emergence of the giant component in a random graph: In Erdős-Rényi G(n, p) random graphs on n vertices, where each edge is present independently with probability p, there are a number of interesting phase transitions: when $p < n^{-1}$ the largest connected component is almost surely of size $\Theta(\log n)$; when $p = n^{-1}$ it is a.s. of size $\Theta(n^{2/3})$; and when $p > n^{-1}$ it is a.s. of size $\Theta(n)$, the "giant component." In this demonstration a large random graph is constructed by preassigning to all $\binom{n}{2}$ potential edges a probability trigger in [0, 1], and then slowly raising a probability parameter p(t) from 0 to 1 as the simulation progresses, with edges 'turning on' when their trigger is exceeded (a sketch of this construction follows the list below).
• Visualization of insertions of random elements into a binary tree, with an increasingly rapid rate of insertions.
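As referenced above, here is a sketch of the random-graph construction; addEdge() stands in for the visualizer's edge-creation call and the fixed seed is an arbitrary choice.

```cpp
// Sketch of the giant-component demonstration: each potential edge gets a fixed
// trigger in [0,1]; as p(t) rises, edges whose trigger falls below p(t) appear.
#include <functional>
#include <random>
#include <vector>

void giantComponentDemo(int n, int steps,
                        const std::function<void(int, int)>& addEdge) {
    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    struct Pending { int u, v; double trigger; };
    std::vector<Pending> pending;
    for (int u = 0; u < n; ++u)
        for (int v = u + 1; v < n; ++v)
            pending.push_back({u, v, unif(rng)});      // pre-assign all C(n,2) triggers

    for (int s = 1; s <= steps; ++s) {
        double p = static_cast<double>(s) / steps;     // slowly raise p(t) from 0 to 1
        for (auto& e : pending)
            if (e.trigger >= 0.0 && e.trigger <= p) {  // edge 'turns on'
                addEdge(e.u, e.v);
                e.trigger = -1.0;                      // mark as already inserted
            }
    }
}
```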
In addition, we mention that the graph visualizer was of great use in debugging itself, particularly in tracking down errors in the dynamic matching implementation.
Conclusions
We have described a novel approach to real-time visualization of dynamic graphs. Our approach combines the benefits of multilevel force-directed graph layout with the ability to render rapidly changing graphs in real time. We have also contributed a novel and efficient method for dynamically maintaining coarser versions of a graph.
| 4,783 |
0712.1549
|
2145482252
|
We adapt multilevel, force-directed graph layout techniques to visualizing dynamic graphs in which vertices and edges are added and removed in an online fashion (i.e., unpredictably). We maintain multiple levels of coarseness using a dynamic, randomized coarsening algorithm. To ensure the vertices follow smooth trajectories, we employ dynamics simulation techniques, treating the vertices as point particles. We simulate fine and coarse levels of the graph simultaneously, coupling the dynamics of adjacent levels. Projection from coarser to finer levels is adaptive, with the projection determined by an affine transformation that evolves alongside the graph layouts. The result is a dynamic graph visualizer that quickly and smoothly adapts to changes in a graph.
|
The key frame approach can be adapted to address the online problem by computing a new key frame each time a request arrives, and then interpolating to the new key frame. For example, @cite_11 developed a system for browsing large, partially known graphs, where navigation actions add and remove subgraphs. They use force-directed layout for key frame graphs, and interpolate between them.
|
{
"abstract": [
"On-line graph drawing deals with huge graphs which are partially unknown. At any time, a tiny part of the graph is displayed on the screen. Examples include web graphs and graphs of links in distributed file systems. This paper discusses issues arising in the presentation of such graphs. The paper describes a system for dealing with web graphs using on-line graph drawing."
],
"cite_N": [
"@cite_11"
],
"mid": [
"2024389296"
]
}
|
Dynamic Multilevel Graph Visualization
|
Our work is motivated by a need to visualize dynamic graphs, that is, graphs from which vertices and edges are being added and removed. Applications include visualizing complex algorithms (our initial motivation), ad hoc wireless networks, databases, monitoring distributed systems, realtime performance profiling, and so forth. Our design concerns are: D1. The system should support online revision of the graph, that is, changes to the graph that are not known in advance. Changes made to the graph may radically alter its structure.
D2. The animation should appear smooth. It should be possible to visually track vertices as they move, avoiding abrupt changes.
D3. Changes made to the graph should appear immediately, and the layout should stabilize rapidly after a change.
D4. The system should produce aesthetically pleasing, good quality layouts.
We make two principal contributions:
1. We adapt multilevel force-directed graph layout algorithms [Wal03] to the problem of dynamic graph layout.
2. We develop and analyze an efficient algorithm for dynamically maintaining the coarser versions of a graph needed for multilevel layout.
Force-directed graph layout
Force-directed layout uses a physics metaphor to find graph layouts [Ead84,KK89,FLM94,FR91]. Each vertex is treated as a point particle in a space (usually R^2 or R^3). There are many variations on how to translate the graph into physics. We make fairly conventional choices, modelling edges as springs which pull connected vertices together. Repulsive forces between all pairs of vertices act to keep the vertices spread out. We use a potential energy V defined by
$$V = \sum_{(v_i, v_j) \in E} \underbrace{\tfrac{1}{2} K \,\|x_i - x_j\|^2}_{\text{spring potential}} \;+\; \sum_{\substack{v_i, v_j \in V \\ v_i \neq v_j}} \underbrace{\frac{f_0}{R + \|x_i - x_j\|}}_{\text{repulsion potential}} \tag{1}$$
where $x_i$ is the position of vertex $v_i$, $K$ is a spring constant, $f_0$ is a repulsion force constant, and $R$ is a small constant used to avoid singularities.
To minimize the energy of Eqn (1), one typically uses 'trust region' methods, where the layout is advanced in the general direction of the gradient ∇V , but restricting the distance by which vertices may move in each step. The maximum move distance is often governed by an adaptive 'temperature' parameter as in annealing methods, so that step sizes decrease as the iteration converges.
One challenge in force-directed layout is that the repulsive forces that act to evenly space the vertices become weaker as the graph becomes larger. This results in large graph layouts converging slowly, a problem addressed by multilevel methods.
Multilevel graph layout algorithms [Wal03,KCH02] operate by repeatedly 'coarsening' a large graph to obtain a sequence of graphs G 0 , G 1 , . . . , G m , where each G i+1 has fewer vertices and edges than G i , but is structurally similar. For a pair (G i , G i+1 ), we refer to G i as the finer graph and G i+1 as the coarser graph. The coarsest graph G m is laid out using standard forcedirected layout. This layout is interpolated (projected) to produce an initial layout for the finer graph G m−1 . Once the force-directed layout of G m−1 converges, it is interpolated to provide an initial layout for G m−2 , and so forth.
Our approach
Roughly speaking, we develop a dynamic version of Walshaw's multilevel force-directed layout algorithm [Wal03].
Because of criterion D3, that changes to the graph appear immediately, we focused on approaches in which the optimization process is visualized directly, i.e., the vertex positions rendered reflect the current state of the energy minimization process.
A disadvantage of the gradient-following algorithms described above is that the layout can repeatedly overshoot a minimum of the potential function, resulting in zig-zagging. This is unimportant for offline layouts, but can result in jerky trajectories if the layout process is being animated. We instead chose a dynamics-based approach in which vertices have momentum. Damping is used to minimize oscillations. This ensures that vertices follow smooth trajectories (criterion D2).
We use standard dynamics techniques to simultaneously simulate all levels of coarseness of a graph as one large dynamical system. We couple the dynamics of each graph V i to its coarser version V i+1 so that 'advice' about layouts can propagate from coarser to finer graphs.
Our approach entailed two major technical challenges:
1. How to maintain coarser versions of the graph as vertices and edges are added and removed.
2. How to couple the dynamics of finer and coarser graphs so that 'layout advice' can quickly propagate from coarser to finer graphs.
We have addressed the first challenge by developing a fully dynamic, Las Vegas-style randomized algorithm that requires O(1) operations per edge insertion or removal to maintain a coarser version of a bounded degree graph (Section 4).
We address the second challenge by using coarse graph vertices as inertial reference frames for vertices in the fine graph (Section 3). The projection from coarser to finer graphs is given dynamics, and evolves simultaneously with the vertex positions, converging to a least-squares fit of the coarse graph onto the finer graph (Section 3.2.1). We introduce time dilations between coarser and finer graphs, which reduces the problem of the finer graph reacting to cancel motions of the coarser graph (Section 3.2.2).
Demonstrations
Accompanying this paper are the following movies. 2 All movies are realtime screen captures on an 8-core Mac. Unless otherwise noted, the movies use one core and 4th order Runge-Kutta integration.
• ev1049_cube.mov: Layout of a 10x10x10 cube graph using single-level dynamics (Section 3.1), 8 cores, and Euler time steps.
• ev1049_coarsening.mov: Demonstration of the dynamic coarsening algorithm (Section 4).
• ev1049_twolevel.mov: Two-level dynamics, showing the projection dynamics and dynamic coarsening.
• ev1049_threelevel.mov: Three-level dynamics showing a graph moving quickly through assorted configurations. The coarsest graph is maintained automatically from modifications the first-level coarsener makes to the second graph.
• ev1049_compare.mov: Side-by-side comparison of single-level vs. three-level dynamics, illustrating the quicker convergence achieved by multilevel dynamics.
• ev1049_multilevel.mov: Showing quick convergence times using multilevel dynamics (4-6 levels) on static graphs being reset to random vertex positions.
• ev1049_randomgraph.mov: Visualization of the emergence of the giant component in a random graph (Section 6). (8 cores, Euler step).
• ev1049_tree.mov: Visualization of rapid insertions into a binary tree (8 cores, Euler step).
Figure 1: Still frames from the demonstration movies accompanying this paper: (a) ev1049_cube, (b) ev1049_coarsening, (c) ev1049_twolevel, (d) ev1049_threelevel, (e) ev1049_compare, (f) ev1049_multilevel, (g) ev1049_randomgraph, (h) ev1049_tree.
Layout Dynamics
We use Lagrangian dynamics to derive the equations of motion for the simulation. Lagrangian dynamics is a bit excessive for a simple springs-and-repulsion graph layout. However, we have found that convergence times are greatly improved by dynamically adapting the interpolation between coarser and finer graphs. For this we use generalized forces, which are easily managed with a Lagrangian approach.
As is conventional, we writeẋ i for the velocity of vertex i, andẍ i for its acceleration. We take all masses to be 1, so that velocity and momentum are interchangeable.
In addition to the potential energy V (Eqn (1)), we define a kinetic energy T . For a single graph, this is simply:
$$T = \sum_{v_i \in V} \tfrac{1}{2}\,\|\dot{x}_i\|^2 \tag{2}$$
Roughly speaking, T describes channels through which potential energy (layout badness) can be converted to kinetic energy (vertex motion). Kinetic energy is then dissipated through friction, which results in the system settling into a local minimum of the potential V . We incorporate friction by adding extra terms to the basic equations of motion.
The equations of motion are obtained from the Euler-Lagrange equation:
$$\left( \frac{d}{dt}\frac{\partial}{\partial \dot{x}_i} - \frac{\partial}{\partial x_i} \right) L = 0 \tag{3}$$
where the quantity L = T − V is the Lagrangian.
Single level dynamics
The coarsest graph has straightforward dynamics. Substituting the definitions of Eqns (1) and (2) into the Euler-Lagrange equation yields the basic equation of motion $\ddot{x}_i = F_i$ for a vertex, where $F_i$ is the net force:
$$\ddot{x}_i = \sum_{(v_i, v_j) \in E} \underbrace{-K (x_i - x_j)}_{\text{spring forces}} \;+\; \sum_{v_j \neq v_i} \underbrace{\frac{f_0}{(R + \|x_i - x_j\|)^2} \cdot \frac{x_i - x_j}{\|x_i - x_j\|}}_{\text{repulsion forces}} \tag{4}$$
We calculate the pairwise repulsion forces in O(n log n) time (with n = |V |, the number of vertices) using the Barnes-Hut algorithm [BH86].
The spring and repulsion forces are supplemented by a damping force defined by F d i = −dẋ i where d is a constant. Our system optionally adds a 'gravity' force that encourages directed edges to point in a specified direction (e.g., down).
Two-level dynamics
We now describe how the dynamics of a graph interacts with its coarser version. For notational clarity we write y i for the position of the coarse vertex corresponding to vertex i, understanding that each vertex in the coarse graph may correspond to multiple vertices in the finer graph. 3 In Walshaw's static multilevel layout algorithm [Wal03], each vertex x i simply uses as its starting position y i , the position of its coarser version. To adapt this idea to a dynamic setting, we begin by defining the position of x i to be y i plus some displacement δ i , i.e.,:
x i = δ i + y i
However, in practice this does not work as well as one might hope, and convergence is faster if one performs some scaling from the coarse to fine graph, for example
x i = δ i + ay i
A challenge in doing this is that the appropriate scaling depends on the characteristics of the particular graph. Suppose the coarse graph roughly halves the number of vertices in the fine graph. If the fine graph is, for example, a three-dimensional cube arrangement of vertices with 6-neighbours, then the expansion ratio needed will be ≈ 2 1/3 or about 1.26; a two-dimensional square arrangement of vertices needs an expansion ratio of ≈ √ 2 or about 1.41. Since the graph is dynamic, the best expansion ratio can also change over time. Moreover, the optimal amount of scaling might be different for each axis, and there might be differences in how the fine and coarse graph are oriented in space.
Such considerations led us to consider affine transformations from the coarse to fine graph. We use projections of the form
$$x_i = \delta_i + \alpha y_i + \beta \tag{5}$$
where α is a linear transformation (a 3x3 matrix) and β is a translation. The variables (α, β) are themselves given dynamics, so that the projection converges to a least-squares fit of the coarse graph to the fine graph.
Frame dynamics
We summarize here the derivation of the time evolution equations for the affine transformation (α, β). Conceptually, we think of the displacements δ i as "pulling" on the transformation: if all the displacements are to the right, the transformation will evolve to shift the coarse graph to the right; if they are all outward, the transformation will expand the projection of the coarse graph, and so forth. In this way the finer graph 'pulls' the projection of the coarse graph around it as tightly as possible.
We derive the equations for $\ddot{\alpha}$ and $\ddot{\beta}$ using Lagrangian dynamics. To simplify the derivation we pretend that both graph layouts are stationary, and that the displacements $\delta_i$ behave like springs between the fine graph and the projected coarse graph, acting on $\alpha$ and $\beta$ via 'generalized forces.' By setting up appropriate potential and kinetic energy terms, the Euler-Lagrange equations yield:
$$\ddot{\alpha} = \frac{1}{n} \sum_i \left( \delta_i y_i^T + y_i \delta_i^T \right) \tag{6}$$
$$\ddot{\beta} = \frac{1}{n} \sum_i \delta_i \tag{7}$$
To damp oscillations and dissipate energy we introduce damping terms of $-d_\alpha \dot{\alpha}$ and $-d_\beta \dot{\beta}$.
Time dilation
We now turn to the equations of motion for vertices in the two-level dynamics. The equations for $\dot{\delta}_i$ and $\ddot{\delta}_i$ are obtained by differentiating Eqn (5):
$$x_i = \delta_i + \beta + \alpha y_i \qquad \text{(projected position)} \tag{8}$$
$$\dot{x}_i = \dot{\delta}_i + \dot{\beta} + \dot{\alpha} y_i + \alpha \dot{y}_i \qquad \text{(projected velocity)} \tag{9}$$
$$\ddot{x}_i = \ddot{\delta}_i + \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\dot{y}_i + \alpha\ddot{y}_i \qquad \text{(projected acceleration)} \tag{10}$$
Let $F_i$ be the forces acting on the vertex $x_i$. Substituting Eqn (10) into $\ddot{x}_i = F_i$ and rearranging, one obtains an equation of motion for the displacement:
$$\ddot{\delta}_i = F_i - \big( \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\dot{y}_i + \alpha\ddot{y}_i \big) \tag{11}$$
The projected acceleration of the coarse vertex can be interpreted as a 'pseudoforce' causing the vertex to react against motions of its coarser version. If Eqn (11) were used as the equation of motion, the coarse and fine graph would evolve independently, with no interaction. (We have used this useful property to check correctness of some aspects of our system.)
The challenge, then, is how to adjust Eqn (11) in some meaningful way to couple the finer and coarse graph dynamics. Our solution is based on the idea that the coarser graph layout evolves similarly to the finer graph, but on a different time scale: the coarse graph generally converges much more quickly. To achieve a good fit between the coarse and fine graph we might slow down the evolution of the coarse graph. Conceptually, we try to do the opposite, speeding up evolution of the fine graph to achieve a better fit. Rewriting Eqn (8) to make each variable an explicit function of time, and incorporating a time dilation, we obtain
$$x_i(t) = \delta_i(t) + \beta(t) + \alpha(t)\, y_i(\phi t) \qquad \text{(projected position)} \tag{12}$$
where φ is a time dilation factor to account for the differing time scales of the coarse and fine graph. Carrying this through to the acceleration equation yields the equation of motion
$$\ddot{\delta}_i = F_i - \big( \ddot{\beta} + \ddot{\alpha} y_i + 2\dot{\alpha}\phi\,\dot{y}_i + \alpha\phi^2\,\ddot{y}_i \big) \tag{13}$$
If, for example, the coarser graph layout converged at a rate twice that of the finer graph, we might take $\phi = \tfrac{1}{2}$, with the effect that we would discount the projected acceleration $\ddot{y}_i$ by a factor of $\phi^2 = \tfrac{1}{4}$. In practice we have used values of $0.1 \le \phi \le 0.25$. Applied across multiple levels of coarse graphs, we call this approach multilevel time dilation.
In addition to the spring and repulsion forces in $F_i$, we include a drag term $F^d_i = -d\,\dot{\delta}_i$ in the forces of Eqn (13).
Multilevel dynamics
To handle multiple levels of coarse graphs, we iterate the two-level dynamics. The dynamics simulation simultaneously integrates the following equations:
• The equations of motion for the vertices in the coarsest graph, using the single-level dynamics of Section 3.1.
• The equations for the projection α, β between each coarser and finer graph pair (Section 3.2.1).
• The equations of motionδ i for the displacements of vertices in the finer graphs, using the two-level dynamics of Section 3.2.
In our implementation, the equations are integrated using an explicit, fourth-order Runge-Kutta method. (We also have a simple Euler-step method, which is fast, but not as reliably stable.)
Equilibrium positions of the multilevel dynamics
We prove here that a layout found using the multilevel dynamics is an equilibrium position of the potential energy function of Eqn (1). This establishes that the multilevel approach does not introduce spurious minima, and can be expected to converge to the same layouts as a single-level layout, only faster.
Theorem 3.1. Let $(X, \dot{X})$ be an equilibrium position of the two-level dynamics, where $X = (\delta_1, \delta_2, \ldots, \alpha, \beta, y_1, y_2, \ldots)$ and $\dot{X} = \ddot{X} = 0$. Then $(x_1, x_2, \ldots, x_n)$, where $x_i = \delta_i + \alpha y_i + \beta$, is an equilibrium position of the single-level dynamics, and the gradient $\nabla V$ of the single-level potential (Eqn (1)) vanishes there.
Proof. Since $\dot{X} = 0$, the drag terms vanish from all equations of motion. Substituting $\dot{X} = 0$ and $\ddot{X} = 0$ into Eqn (13) yields $F_i = 0$ for each vertex. Now consider the single-level dynamics (Section 3.1) using the positions $x_i$ obtained from $x_i = \delta_i + \alpha y_i + \beta$ (Eqn (5)). From $\ddot{x}_i = F_i$ we have $\ddot{x}_i = 0$ for each $i$. The Euler-Lagrange equations for the single-level layout are (Eqn (3)):
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i} - \frac{\partial L}{\partial x_i} = 0.$$
Since $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i} = \ddot{x}_i = 0$, we have $\frac{\partial L}{\partial x_i} = 0$.
Using L = T − V and that the kinetic energy T does not depend on x i , we obtain
$$\frac{\partial V}{\partial x_i} = 0$$
for each i. Therefore ∇V = 0 at this point.
This result can be applied inductively over pairs of finer-coarser graphs, so that Theorem 3.1 holds also for multilevel dynamics.
Dynamic coarsening
As vertices and edges are added to and removed from the base graph G = (V, E), our system dynamically maintains the coarser graphs G_1, G_2, . . . , G_m. Each vertex in a coarse graph may correspond to several vertices in the base graph, which is to say, each coarse graph defines a partition of the vertices in the base graph. It is useful to describe coarse vertices as subsets of V. For convenience we define a finest graph G_0 isomorphic to G, with vertices V_0 = {{v} : v ∈ V} and edges E_0 = {({v_1}, {v_2}) : (v_1, v_2) ∈ E}. We have devised an algorithm that efficiently maintains G_{i+1} in response to changes in G_i. By applying this algorithm at each level, the entire chain G_1, G_2, . . . , G_m is maintained.
We present a fully dynamic, Las Vegas-style randomized graph algorithm for maintaining a coarsened version of a graph. For graphs of bounded degree, this algorithm requires O(1) operations on average per edge insertion or removal.
Our algorithm is based on the traditional matching approach to coarsening developed by Hendrickson and Leland [HL95]. Recall that a matching of a graph G = (V, E) is a subset of edges M ⊆ E satisfying the restriction that no two edges of M share a common vertex. A matching is maximal if it is not properly contained in another matching. A maximal matching can be found by considering edges one at a time and adding them if they do not conflict with an edge already in M . (The problem of finding a maximal matching should not be confused with that of finding a maximum cardinality matching, a more difficult problem.)
Dynamically maintaining the matching
We begin by making the matching unique. We do this by fixing a total order < on the edges, chosen uniformly at random. (In practice, we compute < using a bijective hash function.) To produce a matching we can consider each edge in ascending order by <, adding it to the matching if it does not conflict with a previously matched edge. If e 1 < e 2 , we say that e 1 has priority over e 2 for matching.
Our basic analysis tool is the edge graph G * = (E, S) whose vertices are the edges of G, and e 1 Se 2 when the edges share a vertex. A set of edges M is a matching on G if and only if M is an independent set of vertices in G * . From G * we can define an edge dependence graph E = (E, →) which is a directed version of G * : e 1 → e 2 ≡ (e 1 < e 2 ) and e 1 Se 2 (share a common vertex)
Since < is a total order, the edge dependence graph E is acyclic. Figure 2 shows an example.
Building a matching by considering the edges in order of < is equivalent to a simple rule: e is matched if and only if there is no matched edge e' with e' → e. We can express this rule as a set of match equations whose solution can be maintained by simple change propagation:
$$m(e) = \neg \bigvee_{e' \to e} m(e')$$
where by convention the empty disjunction $\bigvee \emptyset$ is false.
To evaluate the match equations we place the edges to be considered for matching in a priority queue ordered by <, so that highest priority edges are considered first. The match equations can then be evaluated using a straightforward change propagation algorithm: While the priority queue is not empty:
1. Retrieve the highest priority edge e = (v_1, v_2) from the queue and evaluate its match equation m(e).
2. If the value of m(e) has changed, update the matching by calling match(e) or unmatch(e) as appropriate.
Both match(e) and unmatch(e) add the dependent edges of e to the queue, so that changes ripple through the graph. Figure 3 summarizes the basic steps required to maintain the coarser graph (V', E') as edges and vertices are added and removed to the finer graph.
Complexity of the dynamic matching
The following theorem establishes that for graphs of bounded degree, the expected cost of dynamically maintaining the coarsened graph is O(1) per edge inserted or removed in the fine graph. The cost does not depend on the number of edges in the graph. As usual for randomized algorithms, we analyze the complexity using "worst-case average time," i.e., the maximum (with respect to choices of edge to add or remove) of the expected time (with expectation taken over random priority assignments). For reasons that will become clear, we define the priority order < by assigning to each edge a random real priority ρ(e) in [0, 1], with 1 being the highest priority.
Theorem 4.1. If the graph G = (V, E) has maximum degree d, then the matching (and hence the coarser graph) can be maintained at an expected cost of $O(d\,e^{2d})$ operations per edge insertion or removal; for bounded d this is O(1).
We first prove a lemma concerning the extent to which updates may need to propagate in the edge dependence graph.
Lemma 4.2. Suppose the edge graph G* has maximum degree k, and let N(e) denote the number of vertices of the edge dependence graph reachable from e along paths that always move to lower-priority vertices. Then $E[N(e)] \le e^k$ for every edge e.
Proof. It is helpful to view the priority assignment ρ as inducing a linear arrangement of the vertices, i.e., we might draw G* by placing its vertices on the real line at their priorities. We obtain a directed graph (E, →) by making edges always point toward zero, i.e., from higher to lower priorities (cf. Figure 4). Note that vertices with low priorities will tend to have high indegree and low outdegree.
We write $E[\cdot]$ for expectation with respect to the random priorities $\rho$; recall that $N(e)$ counts the vertices reachable from $e$ by following paths that move from higher to lower priority vertices. We bound the expected value of $N(e)$ given its priority $\rho(e) = \eta$: we can always reach $e$ from itself, and we can follow any edges to lower priorities:
$$E[N(e) \mid \rho(e) = \eta] \;\le\; 1 + \sum_{e' : e\,S\,e'} \underbrace{\Pr(\rho(e') < \eta)}_{=\,\eta} \cdot E[N(e') \mid \rho(e') < \eta] \tag{14}$$
Let $f(\eta) = \sup_{e \in E} E[N(e) \mid \rho(e) = \eta]$. Then
$$E[N(e') \mid \rho(e') < \eta] \;\le\; \int_0^\eta \eta^{-1} f(\alpha)\, d\alpha \tag{15}$$
where the integration averages f over a uniform distribution on priorities [0, η). Since the degree of any vertex is ≤ k, there can be at most k terms in the summation of Eqn (14). Combining the above, we obtain
$$f(\eta) = \sup_{e \in E} E[N(e) \mid \rho(e) = \eta] \tag{16}$$
$$\le \sup_{e \in E} \Big( 1 + \sum_{e' S e} \eta\, E[N(e') \mid \rho(e') < \eta] \Big) \tag{17}$$
$$\le 1 + k\eta \int_0^\eta \eta^{-1} f(\alpha)\, d\alpha \tag{18}$$
$$\le 1 + k \int_0^\eta f(\alpha)\, d\alpha \tag{19}$$
Therefore f (η) ≤ g(η), where g is the solution to the integral equation
$$g(\eta) = 1 + k \int_0^\eta g(\alpha)\, d\alpha \tag{20}$$
Isolating the integral and differentiating yields the ODE $g(\eta) = k^{-1} g'(\eta)$, which has the solution $g(\eta) = e^{\eta k}$, using the boundary condition $g(0) = 1$ obtained from Eqn (20). Since $0 \le \eta \le 1$, $g(\eta) \le e^k$. Therefore, for every $e \in E$, the number of reachable vertices satisfies $E[N(e)] \le e^k$.
Note that the upper bound of $O(e^k)$ vertices reachable depends only on the maximum degree, and not on the size of the graph.
We now prove Theorem 4.1.
Proof. If a graph G = (V, E) has maximum degree d, its edge graph G* has maximum degree 2(d − 1). Inserting or removing an edge will cause us to reconsider the matching of at most $e^{2(d-1)}$ edges on average, by Lemma 4.2.
If a max heap is used to implement the priority queue, $O(d\,e^{2d})$ operations are needed to insert and remove these edges. Therefore the randomized complexity is $O(d\,e^{2d})$.
In future work we hope to extend our analysis to show that the entire sequence of coarse graphs G 1 , G 2 , . . . , G m can be efficiently maintained. In practice, iterating the algorithm described here appears to work very well.
Implementation
Our system is implemented in C++, using OpenGL and pthreads. The graph animator runs in a separate thread from the user threads. The basic API is simple, with methods newVertex() and newEdge(v 1 , v 2 ) handling vertex and edge creation, and destructors handling their removal.
For static graphs, we have so far successfully used up to six levels of coarsening, with the coarsened graphs computed in advance. With more than six levels we are encountering numerical stability problems that seem to be related to the projection dynamics.
For dynamic graphs we have used three levels (the base graph plus two coarser versions), with the third-level graph being maintained from the actions of the dynamic coarsener for the first-level graph. At four levels we encounter a subtle bug in our dynamic coarsening implementation we have not yet resolved.
Parallelization
Our single-level dynamics implementation is parallelized. Each frame entails two expensive operations: rendering and force calculations. We use the Barnes-Hut tree to divide the force calculations evenly among the worker threads; this results in good locality of reference, since vertices that interact through edge forces or near-field repulsions are often handled by the same thread. Rendering is performed in a separate thread, with time step t being rendered while step t+δt is being computed. The accompanying animations were rendered on an 8-core (2x4) iMac using OpenGL, compiled with g++ at -O3.
Our multilevel dynamics engine is not yet parallelized, so the accompanying demonstrations of this are rendered on a single core. Parallelizing the multilevel dynamics engine remains for future work.
Applications
We include with this paper two demonstrations of applications:
• The emergence of the giant component in a random graph: In Erdős-Rényi G(n, p) random graphs on n vertices, where each edge is present independently with probability p, there are a number of interesting phase transitions: when $p < n^{-1}$ the largest connected component is almost surely of size $\Theta(\log n)$; when $p = n^{-1}$ it is a.s. of size $\Theta(n^{2/3})$; and when $p > n^{-1}$ it is a.s. of size $\Theta(n)$, the "giant component." In this demonstration a large random graph is constructed by preassigning to all $\binom{n}{2}$ potential edges a probability trigger in [0, 1], and then slowly raising a probability parameter p(t) from 0 to 1 as the simulation progresses, with edges 'turning on' when their trigger is exceeded.
• Visualization of insertions of random elements into a binary tree, with an increasingly rapid rate of insertions.
In addition, we mention that the graph visualizer was of great use in debugging itself, particularly in tracking down errors in the dynamic matching implementation.
Conclusions
We have described a novel approach to real-time visualization of dynamic graphs. Our approach combines the benefits of multilevel force-directed graph layout with the ability to render rapidly changing graphs in real time. We have also contributed a novel and efficient method for dynamically maintaining coarser versions of a graph.
| 4,783 |
0712.1549
|
2145482252
|
We adapt multilevel, force-directed graph layout techniques to visualizing dynamic graphs in which vertices and edges are added and removed in an online fashion (i.e., unpredictably). We maintain multiple levels of coarseness using a dynamic, randomized coarsening algorithm. To ensure the vertices follow smooth trajectories, we employ dynamics simulation techniques, treating the vertices as point particles. We simulate fine and coarse levels of the graph simultaneously, coupling the dynamics of adjacent levels. Projection from coarser to finer levels is adaptive, with the projection determined by an affine transformation that evolves alongside the graph layouts. The result is a dynamic graph visualizer that quickly and smoothly adapts to changes in a graph.
|
Another approach is to take an existing graph layout algorithm and incrementalize (or dynamize) it. For example, the Dynagraph system @cite_12 @cite_2 uses an incrementalized version of the batch Sugiyama-Tagawa-Toda algorithm @cite_1 .
|
{
"abstract": [
"",
"We propose a heuristic for dynamic hierarchical graph drawing. Applications include incremental graph browsing and editing, display of dynamic data structures and networks, and browsing large graphs. The heuristic is an on-line interpretation of the static layout algorithm of Sugiyama, Togawa and Toda. It incorporates topological and geometric information with the objective of making layout animations that are incrementally stable and readable through long editing sequences. We measured the performance of a prototype implementation.",
"Graphviz is a collection of software for viewing and manipulating abstract graphs. It provides graph visualization for tools and web sites in domains such as software engineering, networking, databases, knowledge representation, and bioinformatics. Hundreds of thousands of copies have been distributed under an open source license."
],
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_2"
],
"mid": [
"1555032851",
"1502972208",
"241117969"
]
}
|
Dynamic Multilevel Graph Visualization
|
Our work is motivated by a need to visualize dynamic graphs, that is, graphs from which vertices and edges are being added and removed. Applications include visualizing complex algorithms (our initial motivation), ad hoc wireless networks, databases, monitoring distributed systems, realtime performance profiling, and so forth. Our design concerns are: D1. The system should support online revision of the graph, that is, changes to the graph that are not known in advance. Changes made to the graph may radically alter its structure.
D2. The animation should appear smooth. It should be possible to visually track vertices as they move, avoiding abrupt changes.
D3. Changes made to the graph should appear immediately, and the layout should stabilize rapidly after a change.
D4. The system should produce aesthetically pleasing, good quality layouts.
We make two principal contributions:
1. We adapt multilevel force-directed graph layout algorithms [Wal03] to the problem of dynamic graph layout.
2. We develop and analyze an efficient algorithm for dynamically maintaining the coarser versions of a graph needed for multilevel layout.
Force-directed graph layout
Force-directed layout uses a physics metaphor to find graph layouts [Ead84,KK89,FLM94,FR91]. Each vertex is treated as a point particle in a space (usually R^2 or R^3). There are many variations on how to translate the graph into physics. We make fairly conventional choices, modelling edges as springs which pull connected vertices together. Repulsive forces between all pairs of vertices act to keep the vertices spread out. We use a potential energy V defined by
$$V = \sum_{(v_i, v_j) \in E} \underbrace{\tfrac{1}{2} K \,\|x_i - x_j\|^2}_{\text{spring potential}} \;+\; \sum_{\substack{v_i, v_j \in V \\ v_i \neq v_j}} \underbrace{\frac{f_0}{R + \|x_i - x_j\|}}_{\text{repulsion potential}} \tag{1}$$
where $x_i$ is the position of vertex $v_i$, $K$ is a spring constant, $f_0$ is a repulsion force constant, and $R$ is a small constant used to avoid singularities.
To minimize the energy of Eqn (1), one typically uses 'trust region' methods, where the layout is advanced in the general direction of the gradient ∇V , but restricting the distance by which vertices may move in each step. The maximum move distance is often governed by an adaptive 'temperature' parameter as in annealing methods, so that step sizes decrease as the iteration converges.
One challenge in force-directed layout is that the repulsive forces that act to evenly space the vertices become weaker as the graph becomes larger. This results in large graph layouts converging slowly, a problem addressed by multilevel methods.
Multilevel graph layout algorithms [Wal03,KCH02] operate by repeatedly 'coarsening' a large graph to obtain a sequence of graphs G 0 , G 1 , . . . , G m , where each G i+1 has fewer vertices and edges than G i , but is structurally similar. For a pair (G i , G i+1 ), we refer to G i as the finer graph and G i+1 as the coarser graph. The coarsest graph G m is laid out using standard forcedirected layout. This layout is interpolated (projected) to produce an initial layout for the finer graph G m−1 . Once the force-directed layout of G m−1 converges, it is interpolated to provide an initial layout for G m−2 , and so forth.
Our approach
Roughly speaking, we develop a dynamic version of Walshaw's multilevel force-directed layout algorithm [Wal03].
Because of criterion D3, that changes to the graph appear immediately, we focused on approaches in which the optimization process is visualized directly, i.e., the vertex positions rendered reflect the current state of the energy minimization process.
A disadvantage of the gradient-following algorithms described above is that the layout can repeatedly overshoot a minimum of the potential function, resulting in zig-zagging. This is unimportant for offline layouts, but can result in jerky trajectories if the layout process is being animated. We instead chose a dynamics-based approach in which vertices have momentum. Damping is used to minimize oscillations. This ensures that vertices follow smooth trajectories (criterion D2).
We use standard dynamics techniques to simultaneously simulate all levels of coarseness of a graph as one large dynamical system. We couple the dynamics of each graph V i to its coarser version V i+1 so that 'advice' about layouts can propagate from coarser to finer graphs.
Our approach entailed two major technical challenges:
1. How to maintain coarser versions of the graph as vertices and edges are added and removed.
2. How to couple the dynamics of finer and coarser graphs so that 'layout advice' can quickly propagate from coarser to finer graphs.
We have addressed the first challenge by developing a fully dynamic, Las Vegas-style randomized algorithm that requires O(1) operations per edge insertion or removal to maintain a coarser version of a bounded degree graph (Section 4).
We address the second challenge by using coarse graph vertices as inertial reference frames for vertices in the fine graph (Section 3). The projection from coarser to finer graphs is given dynamics, and evolves simultaneously with the vertex positions, converging to a least-squares fit of the coarse graph onto the finer graph (Section 3.2.1). We introduce time dilations between coarser and finer graphs, which reduces the problem of the finer graph reacting to cancel motions of the coarser graph (Section 3.2.2).
Demonstrations
Accompanying this paper are the following movies. 2 All movies are realtime screen captures on an 8-core Mac. Unless otherwise noted, the movies use one core and 4th order Runge-Kutta integration.
• ev1049_cube.mov: Layout of a 10x10x10 cube graph using singlelevel dynamics (Section 3.1), 8 cores, and Euler time steps.
• ev1049_coarsening.mov: Demonstration of the dynamic coarsening algorithm (Section 4).
• ev1049_twolevel.mov: Two-level dynamics, showing the projection dynamics and dynamic coarsening.
• ev1049_threelevel.mov: Three-level dynamics showing a graph moving quickly through assorted configurations. The coarsest graph is maintained automatically from modifications the first-level coarsener makes to the second graph.
• ev1049_compare.mov: Side-by-side comparison of single-level vs. three-level dynamics, illustrating the quicker convergence achieved by multilevel dynamics.
• ev1049_multilevel.mov: Showing quick convergence times using multilevel dynamics (4-6 levels) on static graphs being reset to random vertex positions.
• ev1049_randomgraph.mov: Visualization of the emergence of the giant component in a random graph (Section 6). (8 cores, Euler step).
• ev1049_tree.mov: Visualization of rapid insertions into a binary tree (8 cores, Euler step).
Figure 1: Still frames from the demonstration movies accompanying this paper: (a) ev1049_cube, (b) ev1049_coarsening, (c) ev1049_twolevel, (d) ev1049_threelevel, (e) ev1049_compare, (f) ev1049_multilevel, (g) ev1049_randomgraph, (h) ev1049_tree.
Layout Dynamics
We use Lagrangian dynamics to derive the equations of motion for the simulation. Lagrangian dynamics is a bit excessive for a simple springs-and-repulsion graph layout. However, we have found that convergence times are greatly improved by dynamically adapting the interpolation between coarser and finer graphs. For this we use generalized forces, which are easily managed with a Lagrangian approach.
As is conventional, we write ẋ_i for the velocity of vertex i, and ẍ_i for its acceleration. We take all masses to be 1, so that velocity and momentum are interchangeable.
In addition to the potential energy V (Eqn (1)), we define a kinetic energy T . For a single graph, this is simply:
T = Σ_{v_i∈V} (1/2) ‖ẋ_i‖²   (2)
Roughly speaking, T describes channels through which potential energy (layout badness) can be converted to kinetic energy (vertex motion). Kinetic energy is then dissipated through friction, which results in the system settling into a local minimum of the potential V . We incorporate friction by adding extra terms to the basic equations of motion.
The equations of motion are obtained from the Euler-Lagrange equation:
( d/dt ∂/∂ẋ_i − ∂/∂x_i ) L = 0   (3)
where the quantity L = T − V is the Lagrangian.
Single level dynamics
The coarsest graph has straightforward dynamics. Substituting the definitions of (1,2) into the Euler-Lagrange equation yields the basic equation of motion ẍ_i = F_i for a vertex, where F_i is the net force:
ẍ_i = Σ_{(v_i,v_j)∈E} −K(x_i − x_j)  (spring forces)  +  Σ_{v_j≠v_i} f_0 / (R + ‖x_i − x_j‖)² · (x_i − x_j)/‖x_i − x_j‖  (repulsion forces)   (4)
We calculate the pairwise repulsion forces in O(n log n) time (with n = |V |, the number of vertices) using the Barnes-Hut algorithm [BH86].
The spring and repulsion forces are supplemented by a damping force defined by F_i^d = −d ẋ_i, where d is a constant. Our system optionally adds a 'gravity' force that encourages directed edges to point in a specified direction (e.g., down).
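As an illustration of the single-level dynamics, the following Python sketch evaluates the net force of Eqn (4) plus the damping term above with a naive O(n²) pairwise loop; a real implementation would use the Barnes-Hut tree described below, and the constants K, f0, R, and d are assumed example values rather than those used by the authors.

import numpy as np

def net_forces(pos, vel, edges, K=1.0, f0=1.0, R=1e-3, d=0.5):
    # pos, vel: (n, dim) arrays of vertex positions and velocities.
    # edges: list of (i, j) index pairs.
    n = pos.shape[0]
    F = np.zeros_like(pos)
    # Spring forces pull connected vertices together (Eqn (4), first term).
    for i, j in edges:
        F[i] += -K * (pos[i] - pos[j])
        F[j] += -K * (pos[j] - pos[i])
    # Pairwise repulsion keeps vertices spread out (Eqn (4), second term).
    # A naive O(n^2) loop; Barnes-Hut reduces this to O(n log n).
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff)
            if dist < 1e-9:
                continue  # skip coincident vertices to avoid division by zero
            F[i] += f0 / (R + dist) ** 2 * (diff / dist)
    # Velocity damping dissipates kinetic energy.
    F += -d * vel
    return F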
Two-level dynamics
We now describe how the dynamics of a graph interacts with its coarser version. For notational clarity we write y_i for the position of the coarse vertex corresponding to vertex i, understanding that each vertex in the coarse graph may correspond to multiple vertices in the finer graph. In Walshaw's static multilevel layout algorithm [Wal03], each vertex x_i simply uses as its starting position y_i, the position of its coarser version. To adapt this idea to a dynamic setting, we begin by defining the position of x_i to be y_i plus some displacement δ_i, i.e.:
x_i = δ_i + y_i
However, in practice this does not work as well as one might hope, and convergence is faster if one performs some scaling from the coarse to fine graph, for example
x_i = δ_i + a y_i
A challenge in doing this is that the appropriate scaling depends on the characteristics of the particular graph. Suppose the coarse graph roughly halves the number of vertices in the fine graph. If the fine graph is, for example, a three-dimensional cube arrangement of vertices with 6-neighbours, then the expansion ratio needed will be ≈ 2^{1/3} or about 1.26; a two-dimensional square arrangement of vertices needs an expansion ratio of ≈ √2 or about 1.41. Since the graph is dynamic, the best expansion ratio can also change over time. Moreover, the optimal amount of scaling might be different for each axis, and there might be differences in how the fine and coarse graph are oriented in space.
Such considerations led us to consider affine transformations from the coarse to fine graph. We use projections of the form
x_i = δ_i + α y_i + β   (5)
where α is a linear transformation (a 3x3 matrix) and β is a translation. The variables (α, β) are themselves given dynamics, so that the projection converges to a least-squares fit of the coarse graph to the fine graph.
Frame dynamics
We summarize here the derivation of the time evolution equations for the affine transformation (α, β). Conceptually, we think of the displacements δ i as "pulling" on the transformation: if all the displacements are to the right, the transformation will evolve to shift the coarse graph to the right; if they are all outward, the transformation will expand the projection of the coarse graph, and so forth. In this way the finer graph 'pulls' the projection of the coarse graph around it as tightly as possible.
We derive the equations for α̈ and β̈ using Lagrangian dynamics. To simplify the derivation we pretend that both graph layouts are stationary, and that the displacements δ_i behave like springs between the fine graph and the projected coarse graph, acting on α and β via 'generalized forces.' By setting up appropriate potential and kinetic energy terms, the Euler-Lagrange equations yield:
α̈ = (1/n) Σ_i ( δ_i y_i^T + y_i δ_i^T )   (6)
β̈ = (1/n) Σ_i δ_i   (7)
To damp oscillations and dissipate energy we introduce damping terms of −d_α α̇ and −d_β β̇.
Time dilation
We now turn to the equations of motion for vertices in the two-level dynamics. The equations for δ̇_i and δ̈_i are obtained by differentiating Eqn (5):
x_i = δ_i + β + α y_i    (proj. position)   (8)
ẋ_i = δ̇_i + β̇ + α̇ y_i + α ẏ_i    (proj. velocity)   (9)
ẍ_i = δ̈_i + β̈ + α̈ y_i + 2 α̇ ẏ_i + α ÿ_i    (proj. acceleration)   (10)
Let F_i be the forces acting on the vertex x_i. Substituting Eqn (10) into ẍ_i = F_i and rearranging, one obtains an equation of motion for the displacement:
δ̈_i = F_i − ( β̈ + α̈ y_i + 2 α̇ ẏ_i + α ÿ_i )    (proj. acceleration)   (11)
The projected acceleration of the coarse vertex can be interpreted as a 'pseudoforce' causing the vertex to react against motions of its coarser version. If Eqn (11) were used as the equation of motion, the coarse and fine graph would evolve independently, with no interaction. (We have used this useful property to check correctness of some aspects of our system.)
The challenge, then, is how to adjust Eqn (11) in some meaningful way to couple the finer and coarse graph dynamics. Our solution is based on the idea that the coarser graph layout evolves similarly to the finer graph, but on a different time scale: the coarse graph generally converges much more quickly. To achieve a good fit between the coarse and fine graph we might slow down the evolution of the coarse graph. Conceptually, we try to do the opposite, speeding up evolution of the fine graph to achieve a better fit. Rewriting Eqn (8) to make each variable an explicit function of time, and incorporating a time dilation, we obtain
x_i(t) = δ_i(t) + β(t) + α(t) y_i(φt)    (proj. position)   (12)
where φ is a time dilation factor to account for the differing time scales of the coarse and fine graph. Carrying this through to the acceleration equation yields the equation of motion
δ̈_i = F_i − ( β̈ + α̈ y_i + 2 α̇ φ ẏ_i + α φ² ÿ_i )    (proj. acceleration)   (13)
If for example the coarser graph layout converged at a rate twice that of the finer graph, we might take φ = 1/2, with the effect that we would discount the projected acceleration ÿ_i by a factor of φ² = 1/4. In practice we have used values of 0.1 ≤ φ ≤ 0.25. Applied across multiple levels of coarse graphs, we call this approach multilevel time dilation.
In addition to the spring and repulsion forces in F_i, we include a drag term F_i^d = −d δ̇_i in the forces of Eqn (13).
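A minimal sketch of how the displacement acceleration of Eqn (13) might be evaluated for one vertex follows; the variable names (alpha_dd for α̈, y_d for ẏ, and so on) and the default values of phi and d are illustrative choices, not taken from the paper.

import numpy as np

def displacement_accel(F, beta_dd, alpha_dd, alpha_d, alpha,
                       y, y_d, y_dd, delta_d=None, phi=0.2, d=0.5):
    # Eqn (13): subtract the projected, time-dilated acceleration of the
    # coarse vertex from the net force acting on the fine vertex.
    proj_accel = (beta_dd + alpha_dd @ y
                  + 2.0 * phi * (alpha_d @ y_d)
                  + phi ** 2 * (alpha @ y_dd))
    # Optional drag term on the displacement velocity.
    drag = -d * delta_d if delta_d is not None else 0.0
    return F + drag - proj_accel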
Multilevel dynamics
To handle multiple levels of coarse graphs, we iterate the two-level dynamics. The dynamics simulation simultaneously integrates the following equations:
• The equations of motion for the vertices in the coarsest graph, using the single-level dynamics of Section 3.1.
• The equations for the projection α, β between each coarser and finer graph pair (Section 3.2.1).
• The equations of motionδ i for the displacements of vertices in the finer graphs, using the two-level dynamics of Section 3.2.
In our implementation, the equations are integrated using an explicit, fourth-order Runge-Kutta method. (We also have a simple Euler-step method, which is fast, but not as reliably stable.)
Equilibrium positions of the multilevel dynamics
We prove here that a layout found using the multilevel dynamics is an equilibrium position of the potential energy function of Eqn (1). This establishes that the multilevel approach does not introduce spurious minima, and can be expected to converge to the same layouts as a single-level layout, only faster.
Theorem 3.1. Let (X, Ẋ) be an equilibrium position of the two-level dynamics, where X = (δ_1, δ_2, . . . , α, β, y_1, y_2, . . .), and Ẍ = Ẋ = 0. Then, (x_1, x_2, . . . , x_n) is an equilibrium position of the single-level dynamics, where x_i = δ_i + α y_i + β, and the single-level potential gradient ∇V (Eqn (1)) vanishes there.
Proof. Since Ẋ = 0, the drag terms vanish from all equations of motion. Substituting Ẋ = 0 and Ẍ = 0 into Eqn (13) yields F_i = 0 for each vertex. Now consider the single-level dynamics (Section 3.1) using the positions x_i obtained from x_i = δ_i + α y_i + β (Eqn (5)). From ẍ_i = F_i we have ẍ_i = 0 for each i. The Euler-Lagrange equations for the single level layout are (Eqn (3)):
d/dt ∂L/∂ẋ_i − ∂L/∂x_i = 0.
Since d/dt ∂L/∂ẋ_i = ẍ_i = 0, we have −∂L/∂x_i = 0.
Using L = T − V and that the kinetic energy T does not depend on x_i, we obtain
∂V/∂x_i = 0
for each i. Therefore ∇V = 0 at this point.
This result can be applied inductively over pairs of finer-coarser graphs, so that Theorem 3.1 holds also for multilevel dynamics.
Dynamic coarsening
As vertices and edges are added to and removed from the base graph G = (V, E), our system dynamically maintains the coarser graphs G_1, G_2, . . . , G_m. Each vertex in a coarse graph may correspond to several vertices in the base graph, which is to say, each coarse graph defines a partition of the vertices in the base graph. It is useful to describe coarse vertices as subsets of V. For convenience we define a finest graph G_0 isomorphic to G, with vertices V_0 = {{v} : v ∈ V} and edges E_0 = {({v_1}, {v_2}) : (v_1, v_2) ∈ E}. We have devised an algorithm that efficiently maintains G_{i+1} in response to changes in G_i. By applying this algorithm at each level the entire chain G_1, G_2, . . . , G_m is maintained.
We present a fully dynamic, Las Vegas-style randomized graph algorithm for maintaining a coarsened version of a graph. For graphs of bounded degree, this algorithm requires O(1) operations on average per edge insertion or removal.
Our algorithm is based on the traditional matching approach to coarsening developed by Hendrickson and Leland [HL95]. Recall that a matching of a graph G = (V, E) is a subset of edges M ⊆ E satisfying the restriction that no two edges of M share a common vertex. A matching is maximal if it is not properly contained in another matching. A maximal matching can be found by considering edges one at a time and adding them if they do not conflict with an edge already in M . (The problem of finding a maximal matching should not be confused with that of finding a maximum cardinality matching, a more difficult problem.)
Dynamically maintaining the matching
We begin by making the matching unique. We do this by fixing a total order < on the edges, chosen uniformly at random. (In practice, we compute < using a bijective hash function.) To produce a matching we can consider each edge in ascending order by <, adding it to the matching if it does not conflict with a previously matched edge. If e 1 < e 2 , we say that e 1 has priority over e 2 for matching.
Our basic analysis tool is the edge graph G* = (E, S) whose vertices are the edges of G, with e_1 S e_2 when the edges share a vertex. A set of edges M is a matching on G if and only if M is an independent set of vertices in G*. From G* we can define an edge dependence graph E = (E, →), which is a directed version of G*: e_1 → e_2 ≡ (e_1 < e_2) and e_1 S e_2 (they share a common vertex).
Since < is a total order, the edge dependence graph E is acyclic. Figure 2 shows an example.
Building a matching by considering the edges in order of < is equivalent to a simple rule: e is matched if and only if there is no edge e′ ∈ M such that e′ → e. We can express this rule as a set of match equations whose solution can be maintained by simple change propagation:
m(e) = ⋀_{e′ → e} ¬ m(e′),
where by convention a conjunction over the empty set ∅ is true.
To evaluate the match equations we place the edges to be considered for matching in a priority queue ordered by <, so that highest priority edges are considered first. The match equations can then be evaluated using a straightforward change propagation algorithm: While the priority queue is not empty:
1. Retrieve the highest priority edge e = (v_1, v_2) from the queue and evaluate its match equation m(e).
2. If the value of m(e) has changed, call match(e) or unmatch(e) accordingly.
Both match(e) and unmatch(e) add the dependent edges of e to the queue, so that changes ripple through the graph. Figure 3 summarizes the basic steps required to maintain the coarser graph (V′, E′) as edges and vertices are added to and removed from the finer graph.
Figure 2: An example edge dependence graph E.
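The change-propagation loop can be sketched in Python as follows; the data structures (a priority dict, a neighbors map over the edges of G*, and a heap-based queue) are illustrative assumptions, and the sketch uses the convention that a smaller random priority value means higher matching priority.

import heapq

def propagate(queue, priority, neighbors, matched):
    # queue: heap of (priority, edge); neighbors[e]: edges sharing a vertex with e.
    # matched: set of currently matched edges (the matching M).
    while queue:
        _, e = heapq.heappop(queue)
        # Match equation: e is matched iff no higher-priority neighbor is matched.
        should_match = not any(priority[f] < priority[e] and f in matched
                               for f in neighbors[e])
        if should_match and e not in matched:
            matched.add(e)                      # match(e)
        elif not should_match and e in matched:
            matched.discard(e)                  # unmatch(e)
        else:
            continue                            # no change, nothing to propagate
        # A change ripples to the lower-priority dependent edges.
        for f in neighbors[e]:
            if priority[f] > priority[e]:
                heapq.heappush(queue, (priority[f], f))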
Complexity of the dynamic matching
The following theorem establishes that for graphs of bounded degree, the expected cost of dynamically maintaining the coarsened graph is O(1) per edge inserted or removed in the fine graph. The cost does not depend on the number of edges in the graph.
Theorem 4.1. For a graph G = (V, E) of maximum degree d, the coarsened graph can be maintained with expected O(d e^{2d}) operations per edge insertion or removal; for bounded-degree graphs this is O(1).
We first prove a lemma concerning the extent to which updates may need to propagate in the edge dependence graph. As usual for randomized algorithms, we analyze the complexity using "worst-case average time," i.e., the maximum (with respect to choices of edge to add or remove) of the expected time (with expectation taken over random priority assignments). For reasons that will become clear we define the priority order < by assigning to each edge a random real in [0, 1], with 1 being the highest priority.
Lemma 4.2. Let G* have maximum degree k, and let N(e) be the number of vertices of G* reachable from e in the edge dependence graph E. Then E[N(e)] ≤ e^k.
Proof. It is helpful to view the priority assignment ρ as inducing a linear arrangement of the vertices, i.e., we might draw G* by placing its vertices on the real line at their priorities. We obtain a directed graph (E, →) by making edges always point toward zero, i.e., from higher to lower priorities (cf. Figure 4). Note that vertices with low priorities will tend to have high indegree and low outdegree.
We write E[·] for expectation with respect to the random priorities ρ. N(e) counts the vertices reachable from e by following paths that move from higher to lower priority vertices. We bound the expected value of N(e) given its priority ρ(e) = η: we can always reach e from itself, and we can follow any edges to lower priorities:
E[N(e) | ρ(e) = η] ≤ 1 + Σ_{e′ : e′ S e} Pr(ρ(e′) < η) · E[N(e′) | ρ(e′) < η],  where Pr(ρ(e′) < η) = η.   (14)
Let f(η) = sup_{e∈E} E[N(e) | ρ(e) = η]. Then,
E[N(e′) | ρ(e′) < η] ≤ ∫_0^η η^{−1} f(α) dα   (15)
where the integration averages f over a uniform distribution on priorities [0, η). Since the degree of any vertex is ≤ k, there can be at most k terms in the summation of Eqn (14). Combining the above, we obtain
f(η) = sup_{e∈E} E[N(e) | ρ(e) = η]   (16)
     ≤ sup_{e∈E} ( 1 + Σ_{e′ S e} η E[N(e′) | ρ(e′) < η] )   (17)
     ≤ 1 + kη ∫_0^η η^{−1} f(α) dα   (18)
     ≤ 1 + k ∫_0^η f(α) dα   (19)
Therefore f(η) ≤ g(η), where g is the solution to the integral equation
g(η) = 1 + k ∫_0^η g(α) dα   (20)
Isolating the integral and differentiating yields the ODE g(η) = k^{−1} g′(η), which has the solution g(η) = e^{ηk}, using the boundary condition g(0) = 1 obtained from Eqn (20). Since 0 ≤ η ≤ 1, g(η) ≤ e^k. Therefore, for every e ∈ E, the number of reachable vertices satisfies E[N(e)] ≤ e^k.
Note that the upper bound of O(e^k) vertices reachable depends only on the maximum degree, and not on the size of the graph.
We now prove Theorem 4.1.
Proof. If a graph G = (V, E) has maximum degree d, its edge graph G* has maximum degree 2(d − 1). Inserting or removing an edge will cause us to reconsider the matching of at most e^{2(d−1)} edges on average, by Lemma 4.2.
If a max heap is used to implement the priority queue, O(d e^{2d}) operations are needed to insert and remove these edges. Therefore the randomized complexity is O(d e^{2d}).
In future work we hope to extend our analysis to show that the entire sequence of coarse graphs G 1 , G 2 , . . . , G m can be efficiently maintained. In practice, iterating the algorithm described here appears to work very well.
Implementation
Our system is implemented in C++, using OpenGL and pthreads. The graph animator runs in a separate thread from the user threads. The basic API is simple, with methods newVertex() and newEdge(v 1 , v 2 ) handling vertex and edge creation, and destructors handling their removal.
For static graphs, we have so far successfully used up to six levels of coarsening, with the coarsened graphs computed in advance. With more than six levels we are encountering numerical stability problems that seem to be related to the projection dynamics.
For dynamic graphs we have used three levels (the base graph plus two coarser versions), with the third-level graph being maintained from the actions of the dynamic coarsener for the first-level graph. At four levels we encounter a subtle bug in our dynamic coarsening implementation we have not yet resolved.
Parallelization
Our single-level dynamics implementation is parallelized. Each frame entails two expensive operations: rendering and force calculations. We use the Barnes-Hut tree to divide the force calculations evenly among the worker threads; this results in good locality of reference, since vertices that interact through edge forces or near-field repulsions are often handled by the same thread. Rendering is performed in a separate thread, with time step t being rendered while step t+δt is being computed. The accompanying animations were rendered on an 8-core (2x4) iMac using OpenGL, compiled with g++ at -O3.
Our multilevel dynamics engine is not yet parallelized, so the accompanying demonstrations of this are rendered on a single core. Parallelizing the multilevel dynamics engine remains for future work.
Applications
We include with this paper two demonstrations of applications:
• The emergence of the giant component in a random graph: In Erdős–Rényi G(n, p) random graphs on n vertices, where each edge is present independently with probability p, there are a number of interesting phase transitions: when p < n^{−1} the largest connected component is almost surely of size Θ(log n); when p = n^{−1} it is a.s. of size Θ(n^{2/3}); and when p > n^{−1} it is a.s. of size Θ(n), the "giant component." In this demonstration a large random graph is constructed by preassigning to all n(n − 1)/2 possible edges a probability trigger in [0, 1], and then slowly raising a probability parameter p(t) from 0 to 1 as the simulation progresses, with edges 'turning on' when their trigger is exceeded (a sketch of this triggering scheme appears after this list).
• Visualization of insertions of random elements into a binary tree, with an increasingly rapid rate of insertions.
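A minimal sketch of the triggering scheme used in the random-graph demonstration, assuming an add_edge callback supplied by the visualizer; the function and parameter names are illustrative only.

import itertools, random

def giant_component_demo(n, steps, add_edge):
    # Preassign a random trigger in [0, 1] to every possible edge, then
    # raise p(t) from 0 to 1; an edge 'turns on' once p(t) exceeds its trigger.
    triggers = {e: random.random() for e in itertools.combinations(range(n), 2)}
    pending = sorted(triggers, key=triggers.get)
    k = 0
    for t in range(steps + 1):
        p = t / steps
        while k < len(pending) and triggers[pending[k]] <= p:
            add_edge(*pending[k])   # hand the new edge to the visualizer
            k += 1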
In addition, we mention that the graph visualizer was of great use in debugging itself, particularly in tracking down errors in the dynamic matching implementation.
Conclusions
We have described a novel approach to real-time visualization of dynamic graphs. Our approach combines the benefits of multilevel force-directed graph layout with the ability to render rapidly changing graphs in real time. We have also contributed a novel and efficient method for dynamically maintaining coarser versions of a graph.
| 4,783 |
0710.4975
|
1493695992
|
Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to the nodes which are not observable directly. They transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes is identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.
|
Research interests have been moving from describing organizational structure to discovering dynamical phenomena on a social network. A link discovery predicts the existence of an unknown link between two nodes from the information on the known attributes of the nodes and the known links @cite_13 . It is one of the tasks of link mining @cite_1 . The link discovery techniques are combined with domain-specific heuristics. The collaboration between scientists can be predicted from the published co-authorship @cite_22 . The friendship between people is inferred from the information available on their web pages @cite_21 .
|
{
"abstract": [
"Many datasets of interest today are best described as a linked collection of interrelated objects. These may represent homogeneous networks, in which there is a single-object type and link type, or richer, heterogeneous networks, in which there may be multiple object and link types (and possibly other semantic information). Examples of homogeneous networks include single mode social networks, such as people connected by friendship links, or the WWW, a collection of linked web pages. Examples of heterogeneous networks include those in medical domains describing patients, diseases, treatments and contacts, or in bibliographic domains describing publications, authors, and venues. Link mining refers to data mining techniques that explicitly consider these links when building predictive or descriptive models of the linked data. Commonly addressed link mining tasks include object ranking, group detection, collective classification, link prediction and subgraph discovery. While network analysis has been studied in depth in particular areas such as social network analysis, hypertext mining, and web analysis, only recently has there been a cross-fertilization of ideas among these different communities. This is an exciting, rapidly expanding area. In this article, we review some of the common emerging themes.",
"Abstract The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.",
"Networks have recently emerged as a powerful tool to describe and quantify many complex systems, with applications in engineering, communications, ecology, biochemistry and genetics. A general technique to divide network vertices in groups and sub-groups is reported. Revealing such underlying hierarchies in turn allows the predicting of missing links from partial data with higher accuracy than previous methods.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc."
],
"cite_N": [
"@cite_1",
"@cite_21",
"@cite_13",
"@cite_22"
],
"mid": [
"2017102965",
"2154454189",
"2157082398",
"2148847267"
]
}
|
Node discovery problem for a social network
|
Covert nodes refer to persons who transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the activities. The covert nodes are not observable directly. Identifying the suspicious surveillance logs where the covert nodes would appear if they became overt aids us in discovering and approaching the covert nodes. I call this problem a node discovery problem for a social network.
Where do we encounter such a problem? Globally networked clandestine organizations such as terrorists, criminals, or drug smugglers are a great threat to civilized societies [Sageman (2004)]. Terrorism attacks cause great economic, social and environmental damage. Active non-routine responses to the attacks are necessary, as well as damage recovery management. The short-term target of the responses is the arrest of the perpetrators. The long-term target of the responses is identifying and dismantling the covert organizational foundation which raises, encourages, and helps the perpetrators. The threat will be mitigated and eliminated by discovering covert leaders and critical conspirators of the clandestine organizations. The difficulty of such discovery lies in the limited capability of surveillance. Information on the leaders and critical conspirators is missing because it is usually hidden intentionally by the organization.
Let me show an example in the 9/11 terrorist attack in 2001 [Krebs (2002)]. Mustafa A. Al-Hisawi, whose alternate name was Mustafa Al-Hawsawi, was alleged to be a wire-puller who had acted as a financial manager of Al Qaeda. He had attempted to help terrorists enter the United States, and provided the hijackers of the 4 aircrafts with financial support worth more than 300,000 dollars. Furthermore, Osama bin Laden is suspected to be a wire-puller behind Mustafa A. Al-Hisawi and the conspirators behind the hijackers. These persons were not recognized as wire-pullers at the time of the attack. They were the nodes to discover from the information on the collaborative activities of the perpetrators and conspirators known at that moment.
In this paper, I present two methods to solve the node discovery problem. One is a heuristic method in [Maeno (2009)], which demonstrates a simulation experiment of the node discovery problem for the social network of the 9/11 perpetrators. The other is a statistical inference method which I propose in this paper. The method employs the maximal likelihood estimation and an anomaly detection technique. Section 3 defines the node discovery problem mathematically. Section 4 presents the two methods. Section 5 introduces the test dataset generated from computationally synthesized networks and a real clandestine organization. Section 6 demonstrates the performance characteristics of the methods (precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)]). Section 7 presents the issues and future perspectives as concluding remarks. Section 2 summarizes the related works.
Problem definition
The node discovery problem is defined mathematically in this section. A node represents a person in a social network. A link represents a relationship which transmits the influence between persons. The symbols n_j (j = 0, 1, · · ·) represent the nodes. Some nodes are overt (observable), but the others are covert (unobservable). O denotes the set of overt nodes, {n_0, n_1, · · · , n_{N−1}}; its cardinality is |O| = N. C denotes the set of covert nodes, {n_N, n_{N+1}, · · · , n_{M−1}}; its cardinality is |C| = M − N. The whole node set of the social network is O ∪ C, and the number of nodes is M. The unobservability of the covert nodes arises either from a technical defect of the surveillance means or from an intentional cover-up operation.
The symbol δ_i represents a set of participants in a particular collaborative activity; it is the i-th activity pattern among the nodes. A pattern δ_i is a set of nodes, a subset of O ∪ C. For example, the nodes in a collaborative activity pattern are those who joined a particular conference call. That is, a pattern is a co-occurrence among the nodes [Rabbat (2008)]. The unobservability of the covert nodes does not affect the activity patterns themselves.
A simple hub-and-spoke model is assumed as the model of the influence transmission over the links resulting in the collaborative activities among the nodes. The way the influence is transmitted governs the set of possible activity patterns {δ_i}. The network topology and the influence transmission are described by probability parameters. The probability that the influence transmits from an initiating node n_j to a responder node n_k is r_jk. The influence transmits to multiple responders independently in parallel. It is similar to the degree of collaboration probability in trust modeling [Lavrac (2007)]. The constraints are 0 ≤ r_jk and Σ_{k≠j} r_jk ≤ 1. The quantity f_j is the probability that the node n_j becomes an initiator. The constraints are 0 ≤ f_j and Σ_{j=0}^{N−1} f_j = 1. These parameters are defined for the whole nodes in a social network (both the nodes in O and C).
A surveillance log d i records a set of the overt nodes in a collaborative activity pattern; δ i . It is given by eq.(1). A log d i is a subset of O. The number of data is D. A set {d i } is the whole surveillance logs dataset.
d_i = δ_i ∩ O (0 ≤ i < D).   (1)
Note that neither an individual node nor a single link alone can be observed directly, but nodes can be observed collectively as a collaborative activity pattern. The dataset {d i } can be expressed by a 2-dimensional D × N matrix of binary variables d. The presence or absence of the node n j in the data d i is indicated by the elements in eq.(2).
d_ij = 1 if n_j ∈ d_i, and d_ij = 0 otherwise (0 ≤ i < D, 0 ≤ j < N).   (2)
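A small Python sketch of how the binary matrix d of Eqn (2) might be built from a list of surveillance logs; the input format (logs given as sets of node identifiers) is an assumption for illustration.

import numpy as np

def to_matrix(logs, overt_nodes):
    # logs: list of sets of overt node identifiers (the surveillance logs d_i).
    # overt_nodes: ordered list of the N overt nodes n_0, ..., n_{N-1}.
    index = {n: j for j, n in enumerate(overt_nodes)}
    d = np.zeros((len(logs), len(overt_nodes)), dtype=int)
    for i, log in enumerate(logs):
        for n in log:
            d[i, index[n]] = 1   # Eqn (2): d_ij = 1 iff n_j appears in d_i
    return d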
Solving the node discovery problem means identifying all the surveillance logs where covert nodes would appear if they became overt. In other words, it means identifying the logs for which d_i ≠ δ_i holds because of the covert nodes belonging to C.
Solution
Heuristic method
A heuristic method to solve the node discovery problem is studied in [Maeno (2009)]. The method is reviewed briefly.
At first, every node which appears in the dataset {d_i} is classified into one of the clusters c_l (0 ≤ l < C). The number of clusters is C, which depends on the prior knowledge. Mutually close nodes form a cluster. The closeness between a pair of nodes is evaluated by the Jaccard's coefficient [Liben-Nowell (2004)], which is used widely in link discovery, web mining, and text processing. The Jaccard's coefficient between the nodes n and n′ is defined by eq.(3). The function B(s) in eq.(3) is a Boolean function which returns 1 if the proposition s is true, or 0 otherwise. The operators ∧ and ∨ are logical AND and OR.
J(n, n′) = Σ_{i=0}^{D−1} B(n ∈ d_i ∧ n′ ∈ d_i) / Σ_{i=0}^{D−1} B(n ∈ d_i ∨ n′ ∈ d_i).   (3)
The k-medoids clustering algorithm [Hastie (2001)] is employed for the classification of the nodes. It is an EM (expectation-maximization) algorithm similar to the k-means algorithm for numerical data. A medoid node locates most centrally within a cluster; it corresponds to the center of gravity in the k-means algorithm. The clusters and the medoid nodes are re-calculated iteratively until they converge into a stable structure. The k-medoids clustering algorithm may be substituted by other clustering algorithms such as hierarchical clustering or self-organizing mapping.
Then, suspiciousness of every surveillance log d i as a candidate where the covert nodes would appear is evaluated with a ranking function s(d i ). The ranking function returns higher value for a more suspicious log. The strength of the correlation between the log d i and the cluster c l is defined by w(d i , c l ) in eq.(4) as a preparation.
w(d_i, c_l) = max_{n_j ∈ c_l} B(n_j ∈ d_i) / Σ_{i=0}^{D−1} B(n_j ∈ d_i).   (4)
The ranking function takes w(d_i, c_l) as an input. Various forms of ranking functions can be constructed. For example, [Maeno (2009)] studied a simple form in eq.(5), where the function u(x) returns 1 if the real variable x is positive, or 0 otherwise.
s(d_i) ∝ Σ_{l=0}^{C−1} u(w(d_i, c_l)) = Σ_{l=0}^{C−1} B(d_i ∩ c_l ≠ φ).   (5)
The i-th most suspicious log is given by d_σ(i), where σ(i) is calculated by eq.(6). The suspiciousness s(d_σ(i)) is always larger than s(d_σ(i′)) for any i < i′.
σ(i) = arg max_{m ≠ σ(n) for ∀n < i} s(d_m) (1 ≤ i ≤ D).   (6)
The computational burden of the method remains light as the number of nodes and surveillance logs increases. The method is expected to work well for clustered networks, and moderately well even if the network topological and stochastic mechanism generating the surveillance logs is not understood well. The method works without knowledge of the hub-and-spoke model; that is, without the parametric form with r_jk and f_j in Section 3. The result, however, cannot be very accurate because of the heuristic nature. A statistical inference method, which requires a heavier computational burden but outputs more accurate results, is presented next.
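A minimal Python sketch of the heuristic ranking, assuming the binary matrix d of Eqn (2) and a cluster assignment (e.g., obtained by k-medoids on Jaccard distances) are already available; jaccard implements Eqn (3) and rank_logs the cluster-counting form of Eqn (5), and all function names are illustrative.

import numpy as np

def jaccard(d, j, k):
    # Eqn (3) evaluated on the binary matrix d for nodes n_j and n_k.
    both = np.sum((d[:, j] == 1) & (d[:, k] == 1))
    either = np.sum((d[:, j] == 1) | (d[:, k] == 1))
    return both / either if either else 0.0

def rank_logs(d, clusters):
    # clusters: list of lists of node indices.  Eqn (5): a log is more
    # suspicious the more clusters it intersects.
    scores = np.array([sum(any(d[i, j] == 1 for j in c) for c in clusters)
                       for i in range(d.shape[0])])
    return np.argsort(-scores)   # log indices sigma(1), sigma(2), ... (Eqn (6))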
Statistical inference method
The statistical inference method employs the maximal likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to realize without the covert nodes. The maximal likelihood estimation is a basic statistical method used for fitting a statistical model to data and for providing estimates for the model's parameters. The anomaly detection refers to detecting patterns in a given dataset that do not conform to an established normal behavior.
A single symbol θ represents both of the parameters r_jk and f_j for the nodes in O. θ is the target variable, the value of which needs to be inferred from the surveillance log dataset. The logarithmic likelihood function [Hastie (2001)] is defined by eq.(7). The quantity p({d_i}|θ) denotes the probability that the surveillance log dataset {d_i} realizes under a given θ.
L(θ) = log(p({d_i}|θ)).   (7)
The individual surveillance logs are assumed to be independent. eq.(7) becomes eq.(8).
L(θ) = log( Π_{i=0}^{D−1} p(d_i|θ) ) = Σ_{i=0}^{D−1} log(p(d_i|θ)).   (8)
The quantity q_{i|jk} in eq.(9) is the probability that the presence or absence of the node n_k as a responder to the stimulating node n_j coincides with the surveillance log d_i.
q_{i|jk} = r_jk if d_ik = 1 (for the given i and j), and q_{i|jk} = 1 − r_jk otherwise.   (9)
Eq.(9) is equivalent to eq.(10) since the value of d ik is either 0 or 1.
q_{i|jk} = d_ik r_jk + (1 − d_ik)(1 − r_jk).   (10)
The probability p({d i }|θ) in eq. (8) is expressed by eq.(11).
p(d_i|θ) = Σ_{j=0}^{N−1} d_ij f_j Π_{0≤k<N ∧ k≠j} q_{i|jk}.   (11)
The logarithmic likelihood function takes the explicit formula in eq.(12). The case k = j is included in the multiplication (Π_k) since d_ik² = d_ik always holds.
L(θ) = Σ_{i=0}^{D−1} log( Σ_{j=0}^{N−1} d_ij f_j Π_{k=0}^{N−1} {1 − d_ik + (2d_ik − 1) r_jk} ).   (12)
The maximal likelihood estimator θ̂ is obtained by solving eq.(13). It gives the values of the parameters r_jk and f_j. A pair of nodes n_j and n_k for which r_jk > 0 possesses a link between them.
θ̂ = arg max_θ L(θ).   (13)
A simple incremental optimization technique; the hill climbing method (or the method of steepest descent) is employed to solve eq.(13). Non-deterministic methods such as simulated annealing [Hastie (2001)] can be employed to strengthen the search ability and to avoid sub-optimal solutions. These methods search more optimal parameter values around the present values and update them as in eq.(14) until the values converge.
r_jk → r_jk + ∆r_jk, f_j → f_j + ∆f_j (0 ≤ j, k < N).   (14)
The change in the logarithmic likelihood function can be calculated as a product of the derivatives (differential coefficients with regard to r and f ) and the amount of the updates in eq.(15). The update ∆r nm and ∆f n should be in the direction of the steepest ascent in the landscape of the logarithmic likelihood function.
∆L(θ) = Σ_{n,m=0}^{N−1} (∂L(θ)/∂r_nm) ∆r_nm + Σ_{n=0}^{N−1} (∂L(θ)/∂f_n) ∆f_n.   (15)
The derivatives with regard to r are given by eq. (16).
∂L(θ)/∂r_nm = Σ_{i=0}^{D−1} [ f_n d_in (2d_im − 1) × Π_{k≠m} {1 − d_ik + (2d_ik − 1) r_nk} ÷ Σ_{j=0}^{N−1} d_ij f_j Π_{k=0}^{N−1} {1 − d_ik + (2d_ik − 1) r_jk} ].   (16)
The derivatives with regard to f are given by eq. (17).
∂L(θ)/∂f_n = Σ_{i=0}^{D−1} [ d_in Π_{k=0}^{N−1} {1 − d_ik + (2d_ik − 1) r_nk} ÷ Σ_{j=0}^{N−1} d_ij f_j Π_{k=0}^{N−1} {1 − d_ik + (2d_ik − 1) r_jk} ].   (17)
The ranking function s(d_i) is the inverse of the probability at which d_i realizes under the maximal likelihood estimator θ̂. According to the anomaly detection technique, it gives a higher return value to the suspicious surveillance logs which are less likely to realize without the covert nodes. The ranking function is given by eq.(18).
s(d_i) = 1 / p(d_i|θ̂).   (18)
The i-th most suspicious log is given by d σ(i) by the same formula in eq.(6).
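The core of the statistical inference method, evaluating p(d_i|θ) of Eqn (11) and the ranking of Eqn (18), might be sketched in Python as below; the gradient ascent of Eqns (14)-(17) is omitted, and the array layout and function names are assumptions made for illustration.

import numpy as np

def pattern_probs(d, r, f):
    # d: (D, N) binary matrix; r: (N, N) transmission probabilities; f: (N,).
    # Eqn (11): p(d_i | theta) = sum_j d_ij f_j prod_{k != j} q_{i|jk},
    # with q_{i|jk} = d_ik r_jk + (1 - d_ik)(1 - r_jk)  (Eqn (10)).
    D, N = d.shape
    p = np.zeros(D)
    for i in range(D):
        q = d[i] * r + (1 - d[i]) * (1 - r)   # q[j, k] = q_{i|jk}
        np.fill_diagonal(q, 1.0)              # exclude k = j from the product
        p[i] = np.sum(d[i] * f * np.prod(q, axis=1))
    return p

def rank_suspicious(d, r, f):
    # Eqn (18): the less likely a log is under the fitted model, the more
    # suspicious it is; return log indices in decreasing suspiciousness.
    p = pattern_probs(d, r, f)
    return np.argsort(p)   # smallest p(d_i | theta) first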
Test Dataset
Network
Two classes of networks are employed to generate a test dataset for performance evaluation of the two methods. The first class is computationally synthesized networks. The second class is a real clandestine organization.
The networks [A] in Figure 1 and [B] in Figure 2 are synthesized computationally. They are based on the Barabási-Albert model [Barabási (1999)] with a cluster structure. The Barabási-Albert model grows with preferential attachment. The probability that a newly arriving node n_k connects a link to an existing node n_j is proportional to the nodal degree of n_j (p(k → j) ∝ K(n_j)). The occurrence frequency of the nodal degree tends to be scale-free (F(K) ∝ K^a). In the Barabási-Albert model with a cluster structure, every node n_j is assigned a pre-determined cluster attribute c(n_j) to which it belongs. The number of clusters is C. The probability p(k → j) is modified to eq.(19), where a cluster contrast parameter η is introduced. Links between the clusters appear less frequently as η increases. The initial links between the clusters are connected at random before growth by preferential attachment starts.
p(k → j) ∝ η(C − 1)K(n_j) if c(n_j) = c(n_k), and p(k → j) ∝ K(n_j) otherwise.   (19)
Figure 1: Computationally synthesized network [A], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is η = 50; the network is relatively more clustered. The node n_12 is a typical hub node, and the node n_75 is a typical peripheral node.
Figure 2: Computationally synthesized network [B], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is η = 2.5; the network is relatively less clustered. The node n_12 is a typical hub node, and the node n_48 is a typical peripheral node.
Hub nodes are those which have a nodal degree larger than the average. The node n 12 in the network [A] in Figure 1 is a typical hub node. Peripheral nodes are those which have a nodal degree smaller than the average. The node n 75 in the network [A] in Figure 1 is a typical peripheral node.
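A rough Python sketch of the clustered Barabási-Albert growth of eq.(19); the seed construction, the number m of links added per new node, and the helper names are assumptions not specified in the text.

import random, collections

def clustered_ba(M, C, eta, m=2, seed_per_cluster=2):
    # Grow a Barabasi-Albert style network in which intra-cluster attachment
    # is favoured by the cluster contrast parameter eta (eq. (19)).
    cluster = {}
    degree = collections.Counter()
    edges = set()

    def add_edge(a, b):
        if a != b and (a, b) not in edges and (b, a) not in edges:
            edges.add((a, b)); degree[a] += 1; degree[b] += 1

    # Small random seed: a few nodes per cluster, linked at random across clusters.
    nodes = list(range(C * seed_per_cluster))
    for v in nodes:
        cluster[v] = v % C
    for v in nodes:
        add_edge(v, random.choice([u for u in nodes if u != v]))

    # Preferential attachment with cluster contrast (eq. (19)).
    for v in range(len(nodes), M):
        cluster[v] = random.randrange(C)
        weights = [(eta * (C - 1) if cluster[u] == cluster[v] else 1.0) * degree[u]
                   for u in nodes]
        for u in random.choices(nodes, weights=weights, k=m):
            add_edge(v, u)
        nodes.append(v)
    return edges, cluster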
The network in Figure 3 represents a real clandestine organization. It is a global mujahedin organization which was analyzed in [Sageman (2004)]. The mujahedin in the global Salafi jihad means Muslim fighters in Salafism (Sunni Islamic school of thought) who struggle to establish justice on earth. Note that jihad does not necessarily refer to military exertion. The organization consists of 107 persons and 4 regional sub-networks. The sub-networks represent Central Staffs (n CSj ) including the node n ObL , Core Arabs (n CAj ) from the Arabian Peninsula countries and Egypt, Maghreb Arabs (n MAj ) from the North African countries, and Southeast Asians (n SAj ). The network topology is not simply hierarchical. The 4 regional sub-networks are connected mutually in a complex manner.
The node representing Osama bin Laden; n ObL is a hub (K(n ObL ) = 8). He is believed to be the founder of the organization, and said to be the covert leader who provides operational commanders in regional subnetworks with financial support in many terrorism attacks including 9/11 in 2001. His whereabouts are not known despite many efforts in investigation and capture.
The topological characteristics of the above mentioned networks are summarized in Table 1. The global mujahedin organization has a relatively large Gini coefficient of the nodal degree; G = 0.35 and a relatively large average clustering coefficient [Watts (1998)]; W (n j ) = 0.54. In economics, the Gini coefficient is a measure of inequality of income distribution or of wealth distribution. A larger Gini coefficient indicates lower equality. The values mean that the organization possesses hubs and a cluster structure. The values also indicate that the computationally synthesized network [A] is more clustered and close to the global mujahedin organization while the network [B] is less clustered.
Test Dataset
The test dataset {d i } is generated from each network in 5.1 in the 2 steps below.
In the first step, the collaborative activity patterns {δ_i} are generated D times according to the influence transmission under the true value of θ. A pattern includes both an initiator node n_j and multiple responder nodes n_k. An example is δ_ex1 = {n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_ObL, n_CS11, n_CS12, n_CS14} for the global mujahedin organization in Figure 3.
Figure 3: Social network representing a global mujahedin (Jihad fighters) organization [Sageman (2004)], which consists of 107 nodes and 4 regional sub-networks. The sub-networks represent Central Staffs (n_CSj) including the node n_ObL, Core Arabs (n_CAj), Maghreb Arabs (n_MAj), and Southeast Asians (n_SAj). The node n_ObL is Osama bin Laden, who many believe is the founder of the organization.
In the second step, the surveillance log dataset {d_i} is generated by deleting the covert nodes belonging to C from the patterns {δ_i}. The example δ_ex1 results in the surveillance log d_ex1 = δ_ex1 ∩ O = {n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_CS11, n_CS12, n_CS14} if Osama bin Laden is a covert node; C = {n_ObL}. The covert nodes in C may appear multiple times in the collaborative activity patterns {δ_i}. The number of the target logs to identify, D_t, is given by eq.(20).
D_t = Σ_{i=0}^{D−1} B(d_i ≠ δ_i).   (20)
In the performance evaluation in Section 6, a few assumptions are made for simplicity. The probability f_j does not depend on the nodes (f_j = 1/M). The value of the probability r_jk is either 1 when a link is present between the nodes, or 0 otherwise. It means that the number of the possible collaborative activity patterns is bounded. The influence transmission is symmetrically bi-directional; r_jk = r_kj.
Performance
Performance measure
Three measures, precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)], are used to evaluate the performance of the methods. They are commonly used in information retrieval such as search, document classification, and query classification. The precision p is the fraction of the retrieved data that are relevant. The recall r is the fraction of the relevant data that are retrieved. The relevant data refers to the data where d_i ≠ δ_i. They are given by eq.(21) and eq.(22). They are functions of the number of the retrieved data D_r, which can take any value from 1 to D. The data is retrieved in the order d_σ(1), d_σ(2), . . . , d_σ(D_r).
p(D_r) = Σ_{i=1}^{D_r} B(d_σ(i) ≠ δ_σ(i)) / D_r.   (21)
r(D_r) = Σ_{i=1}^{D_r} B(d_σ(i) ≠ δ_σ(i)) / D_t.   (22)
The F measure F is the harmonic mean of the precision and recall. It is given by eq.(23).
F(D_r) = 1 / [ (1/2) (1/p(D_r) + 1/r(D_r)) ] = 2 p(D_r) r(D_r) / (p(D_r) + r(D_r)).   (23)
The precision, recall, and F measure range from 0 to 1. All the measures take larger values as the performance of retrieval becomes better.
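For completeness, a small Python sketch of eqs.(21)-(23), assuming the retrieval order σ and the set of target logs are given; the argument names are illustrative.

def precision_recall_f(retrieved_order, relevant, D_r):
    # retrieved_order: log indices sorted by decreasing suspiciousness (sigma).
    # relevant: set of indices i with d_i != delta_i (the target logs).
    hits = sum(1 for i in retrieved_order[:D_r] if i in relevant)
    p = hits / D_r                               # eq. (21)
    r = hits / len(relevant)                     # eq. (22)
    f = 2 * p * r / (p + r) if p + r else 0.0    # eq. (23)
    return p, r, f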
Comparison
The performance of the heuristic method and statistical inference method is compared with the test dataset generated from the computationally synthesized networks. Figure 4 shows the precision p(D r ) as a function of the rate of the retrieved data to the whole data D r /D in case the hub node n 12 in the computationally synthesized network [A] in Figure 1 is the target covert node to discover, C = {n 12 }. The three graphs are for [a] the statistical inference method, [b] the heuristic method (C = 5), and [c] the heuristic method (C = 10). The number of the surveillance logs in a test dataset is D = 100. The broken lines indicate the theoretical limit (the upper bound) and the random retrieval (the lower bound). The vertical solid line indicates the position where D r = D t . Figure 5 shows the recall r(D r ) as a function of the rate D r /D. Figure 6 shows the F measure F (D r ) as a function of the rate D r /D. The experimental conditions are the same as those for Figure 4. The performance of the heuristic method is moderately good if the number of clusters is known as prior knowledge. Otherwise, the performance would be worse. On the other hand, the statistical inference method surpasses the heuristic method and approaches to the theoretical limit. Figure 7 shows the F measure F (D r ) as a function of the rate D r /D in case the hub node n 12 in the network [B] in Figure 2 is the target covert node to discover. The two graphs are for [a] the statistical inference method and [b] the heuristic method (C = 5). The performance of the statistical inference method is still good while that of the heuristic method becomes worse in a less clustered network. Figure 8 shows the F measure F (D r ) as a function of the rate D r /D in case the peripheral node n 75 in the network [A] in Figure 1 is the target covert node to discover. Figure 9 shows the F measure F (D r ) as a function of the rate D r /D when the peripheral node n 48 in the network [B] in Figure 2 is the target covert node to discover. The statistical inference method works fine while the heuristic method fails.
Application
I illustrate how the method aids the investigators in achieving the long-term target of the non-routine responses to the terrorism attacks. Let's assume that the investigators have surveillance logs of the members of the global mujahedin organization, except Osama bin Laden, by the time of the attack. Osama bin Laden does not appear in the logs. This is the assumption that the investigators neither know the presence of a wire-puller behind the attack nor recognize Osama bin Laden at the time of the attack.
Figure 8: F measure F(D_r) as a function of the rate D_r/D when the peripheral node n_75 in the computationally synthesized network [A] in Figure 1 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method (C = 5).
Figure 9: F measure F(D_r) as a function of the rate D_r/D when the peripheral node n_48 in the computationally synthesized network [B] in Figure 2 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method (C = 5).
Figure 10: F measure F(D_r) as a function of the rate of the retrieved data to the whole data D_r/D when the statistical inference method is applied in case the node n_ObL in Figure 3 is the target covert node to discover. C = {n_ObL}. |C| = 1. |O| = 106. The graph is for the statistical inference method. The broken lines indicate the theoretical limit and the random retrieval. The vertical solid line indicates the position where D_r = D_t.
The situation is simulated computationally like the problems addressed in 6.2. In this case, the node n_ObL in Figure 3 is the target covert node to discover, C = {n_ObL}. Figure 10 shows F(D_r) as a function of the rate of the retrieved data to the whole data D_r/D when the statistical inference method is applied. The result is close to the theoretical limit. The most suspicious surveillance log d_σ(1) includes all and only the neighbor nodes n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_CS11, n_CS12, and n_CS14. This encourages the investigators to investigate an unknown wire-puller near these 8 neighbors, the most suspicious close associates. The investigators will decide to collect more detailed information on the suspicious neighbors. It may result in approaching and finally capturing the covert wire-puller responsible for the attack.
The method, however, fails to identify two suspicious records δ_fl1 = {n_ObL, n_CS11} and δ_fl2 = {n_ObL, n_CS12}. These nodes have a small nodal degree; K(n_CS11) = 1 and K(n_CS12) = 1. This shows that the surveillance logs on the nodes having a small nodal degree do not provide the investigators with many clues about the covert nodes.
Conclusion
In this paper, I define the node discovery problem for a social network and present methods to solve the problem. The statistical inference method employs the maximal likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to realize without the covert nodes. The precision, recall, and F measure characteristics are close to the theoretical limit for the discovery of the covert nodes in computationally synthesized networks and a real clandestine organization. In the investigation of a clandestine organization, the method aids the investigators in identifying the close associates and approaching a covert leader or a critical conspirator.
The node discovery problem is encountered in many areas of business and social sciences. For example, in addition to the analysis of a clandestine organization, the method contributes to detecting an individual employee who transmits the influence to colleagues but whose catalytic role is not recognized by company managers; such detection may be critical in reorganizing a company structure.
I plan to address two issues in future work. The first issue is to extend the hub-and-spoke model for the influence transmission. The model represents the radial transmission from an initiating node toward multiple responder nodes. Other types of influence transmission are present in many real social networks; examples are a serial chain-shaped influence transmission model and a tree-like influence transmission model. The second issue is to develop a method to solve the variants of the node discovery problem. Discovering fake nodes or spoofing nodes is also an interesting problem, uncovering the malicious intentions of the organization. A fake node is a person who does not exist in the organization, but appears in the surveillance logs. A spoofing node is a person who belongs to the organization, but appears as a different node in the surveillance logs.
| 4,867 |
0710.4975
|
1493695992
|
Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to the nodes which are not observable directly. They transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes is identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.
|
A Markov random network is a model of the joint probability distribution of random variables. It is an undirected graphical model similar to a Bayesian network. The Markov random network is used to learn the dependency between the links which share a node. The Markov random network is one of the dependence graphs @cite_17 , which model the dependency between links. Extension to hierarchical models @cite_25 , multiple networks (treating different types of relationships) @cite_9 , valued networks (with nodal attributes) @cite_23 , higher order dependency between the links which share no nodes @cite_12 , and 2-block chain graphs (associating one set of explanatory variables with the other set of outcome variables) @cite_0 are studied. A family of such extensions and model elaborations is named the exponential random graph @cite_2 .
|
{
"abstract": [
"",
"This paper generalizes thep* class of models for social network data to predict individual-level attributes from network ties. Thep* model for social networks permits the modeling of social relationships in terms of particular local relational or network configurations. In this paper we present methods for modeling attribute measures in terms of network ties, and so constructp* models for the patterns of social influence within a network. Attribute variables are included in a directed dependence graph and the Hammersley-Clifford theorem is employed to derive probability models whose parameters can be estimated using maximum pseudo-likelihood. The models are compared to existing network effects models. They can be interpreted in terms of public or private social influence phenomena within groups. The models are illustrated by an empirical example involving a training course, with trainees' reactions to aspects of the course found to relate to those of their network partners.",
"This paper generalizes thep* model for dichotomous social network data (Wasserman & Pattison, 1996) to the polytomous case. The generalization is achieved by transforming valued social networks into three-way binary arrays. This data transformation requires a modification of the Hammersley-Clifford theorem that underpins thep* class of models. We demonstrate that, provided that certain (non-observed) data patterns are excluded from consideration, a suitable version of the theorem can be developed. We also show that the approach amounts to a model for multiple logits derived from a pseudo-likelihood function. Estimation within this model is analogous to the separate fitting of multinomial baseline logits, except that the Hammersley-Clifford theorem requires the equating of certain parameters across logits. The paper describes how to convert a valued network into a data array suitable for fitting the model and provides some illustrative empirical examples.",
"Abstract A major criticism of the statistical models for analyzing social networks developed by Holland, Leinhardt, and others [Holland, P.W., Leinhardt, S., 1977. Notes on the statistical analysis of social network data; Holland, P.W., Leinhardt, S., 1981. An exponential family of probability distributions for directed graphs. Journal of the American Statistical Association. 76, pp. 33–65 (with discussion); Fienberg, S.E., Wasserman, S., 1981. Categorical data analysis of single sociometric relations. In: Leinhardt, S. (Ed.), Sociological Methodology 1981, San Francisco: Jossey-Bass, pp. 156–192; Fienberg, S.E., Meyer, M.M., Wasserman, S., 1985. Statistical analysis of multiple sociometric relations. Journal of the American Statistical Association, 80, pp. 51–67; Wasserman, S., Weaver, S., 1985. Statistical analysis of binary relational data: Parameter estimation. Journal of Mathematical Psychology. 29, pp. 406–427; Wasserman, S., 1987. Conformity of two sociometric relations. Psychometrika. 52, pp. 3–18] is the very strong independence assumption made on interacting individuals or units within a network or group. This limiting assumption is no longer necessary given recent developments on models for random graphs made by Frank and Strauss [Frank, O., Strauss, D., 1986. Markov graphs. Journal of the American Statistical Association. 81, pp. 832–842] and Strauss and Ikeda [Strauss, D., Ikeda, M., 1990. Pseudolikelihood estimation for social networks. Journal of the American Statistical Association. 85, pp. 204–212]. The resulting models are extremely flexible and easy to fit to data. Although Wasserman and Pattison [Wasserman, S., Pattison, P., 1996. Logit models and logistic regressions for social networks: I. An introduction to Markov random graphs and p*. Psychometrika. 60, pp. 401–426] present a derivation and extension of these models, this paper is a primer on how to use these important breakthroughs to model the relationships between actors (individuals, units) within a single network and provides an extension of the models to multiple networks. The models for multiple networks permit researchers to study how groups are similar and or how they are different. The models for single and multiple networks and the modeling process are illustrated using friendship data from elementary school children from a study by Parker and Asher [Parker, J.G., Asher, S.R., 1993. Friendship and friendship quality in middle childhood: Links with peer group acceptance and feelings of loneliness and social dissatisfaction. Developmental Psychology. 29, pp. 611–621].",
"Many communication and social networks have power-law link distributions, containing a few nodes that have a very high degree and many with low degree. The high connectivity nodes play the important role of hubs in communication and networking, a fact that can be exploited when designing efficient search algorithms. We introduce a number of local search strategies that utilize high degree nodes in power-law graphs and that have costs scaling sublinearly with the size of the graph. We also demonstrate the utility of these strategies on the GNUTELLA peer-to-peer network.",
"We argue that social networks can be modeled as the outcome of processes that occur in overlapping local regions of the network, termed local social neighborhoods. Each neighborhood is conceived as a possible site of interaction and corresponds to a subset of possible network ties. In this paper, we discuss hypotheses about the form of these neighborhoods, and we present two new and theoretically plausible ways in which neighborhood-based models for networks can be constructed. In the first, we introduce the notion of a setting structure, a directly hypothesized (or observed) set of exogenous constraints on possible neighborhood forms. In the second, we propose higher-order neighborhoods that are generated, in part, by the outcome of interactive network processes themselves. Applications of both approaches to model construction are presented, and the developments are considered within a general conceptual framework of locale for social networks. We show how assumptions about neighborhoods can be cast with...",
""
],
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2032296016",
"1970493189",
"2002810579",
"1527463082",
"2057583855",
""
]
}
|
Node discovery problem for a social network
|
Covert nodes refer to persons who transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the activities. The covert nodes are not observable directly. Identifying the suspicious surveillance logs where the covert nodes would appear if they became overt aids us in discovering and approaching the covert nodes. I call this problem a node discovery problem for a social network.
Where do we encounter such a problem? Globally networked clandestine organizations such as terrorists, criminals, or drug smugglers are a great threat to civilized societies [Sageman (2004)]. Terrorism attacks cause great economic, social, and environmental damage. Active non-routine responses to the attacks are necessary, as well as damage recovery management. The short-term target of the responses is the arrest of the perpetrators. The long-term target of the responses is identifying and dismantling the covert organizational foundation which raises, encourages, and helps the perpetrators. The threat will be mitigated and eliminated by discovering the covert leaders and critical conspirators of the clandestine organizations. The difficulty of such discovery lies in the limited capability of surveillance. Information on the leaders and critical conspirators is missing because it is usually hidden intentionally by the organization.
Let me show an example from the 9/11 terrorist attack in 2001 [Krebs (2002)]. Mustafa A. Al-Hisawi, whose alternate name was Mustafa Al-Hawsawi, was alleged to be a wire-puller who had acted as a financial manager of Al Qaeda. He had attempted to help terrorists enter the United States, and provided the hijackers of the 4 aircraft with financial support worth more than 300,000 dollars. Furthermore, Osama bin Laden is suspected to be a wire-puller behind Mustafa A. Al-Hisawi and the conspirators behind the hijackers. These persons were not recognized as wire-pullers at the time of the attack. They were the nodes to discover from the information on the collaborative activities of the perpetrators and conspirators known at that moment.
In this paper, I present two methods to solve the node discovery problem. One is a heuristic method in [Maeno (2009)], which demonstrates a simulation experiment of the node discovery problem for the social network of the 9/11 perpetrators. The other is a statistical inference method which I propose in this paper. The method employs the maximal likelihood estimation and an anomaly detection technique. Section 3 defines the node discovery problem mathematically. Section 4 presents the two methods. Section 5 introduces the test dataset generated from computationally synthesized networks and a real clandestine organization. Section 6 demonstrates the performance characteristics of the methods (precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)]). Section 7 presents the issues and future perspectives as concluding remarks. Section 2 summarizes the related works.
Problem definition
The node discovery problem is defined mathematically in this section. A node represents a person in a social network. A link represents a relationship which transmits the influence between persons. The symbols $n_j$ ($j = 0, 1, \cdots$) represent the nodes. Some nodes are overt (observable), but the others are covert (unobservable). $O$ denotes the set of overt nodes $\{n_0, n_1, \cdots, n_{N-1}\}$; its cardinality is $|O| = N$. $C = \bar{O}$ denotes the set of covert nodes $\{n_N, n_{N+1}, \cdots, n_{M-1}\}$; its cardinality is $|C| = M - N$. The whole set of nodes in the social network is $O \cup C$, and the number of nodes is $M$. The unobservability of the covert nodes arises either from a technical defect of the surveillance means or from an intentional cover-up operation.
The symbol $\delta_i$ represents the set of participants in a particular collaborative activity. It is the $i$-th activity pattern among the nodes. A pattern $\delta_i$ is a set of nodes; $\delta_i$ is a subset of $O \cup C$. For example, the nodes in a collaborative activity pattern are those who joined a particular conference call. That is, a pattern is a co-occurrence among the nodes [Rabbat (2008)]. The unobservability of the covert nodes does not affect the activity patterns themselves.
A simple hub-and-spoke model is assumed as the model of the influence transmission over the links that results in the collaborative activities among the nodes. The way the influence is transmitted governs the set of possible activity patterns $\{\delta_i\}$. The network topology and the influence transmission are described by probability parameters. The probability with which the influence transmits from an initiating node $n_j$ to a responder node $n_k$ is $r_{jk}$. The influence transmits to multiple responders independently in parallel. It is similar to the degree of collaboration probability in trust modeling [Lavrac (2007)]. The constraints are $0 \le r_{jk}$ and $\sum_{k \ne j} r_{jk} \le 1$. The quantity $f_j$ is the probability with which the node $n_j$ becomes an initiator. The constraints are $0 \le f_j$ and $\sum_j f_j = 1$. These parameters are defined for all the nodes in the social network (both the nodes in $O$ and those in $C$).
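As a minimal illustration of the hub-and-spoke model, the Python sketch below samples one collaborative activity pattern from toy parameters $f_j$ and $r_{jk}$; the parameter values and variable names are assumptions for illustration only, not part of the original method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hub-and-spoke parameters for M = 4 nodes (assumed values; each row of r sums to <= 1).
f = np.array([0.4, 0.2, 0.2, 0.2])          # f_j: probability that node j initiates
r = np.array([[0.0, 0.5, 0.3, 0.2],         # r[j, k]: probability that influence
              [0.3, 0.0, 0.1, 0.1],         # transmits from initiator j to responder k
              [0.3, 0.1, 0.0, 0.1],
              [0.2, 0.1, 0.1, 0.0]])

def sample_pattern(f, r, rng):
    """Sample one activity pattern delta_i: an initiator plus its responders."""
    j = rng.choice(len(f), p=f)                                  # pick the initiator
    responders = {k for k in range(len(f))
                  if k != j and rng.random() < r[j, k]}          # independent transmissions
    return {int(j)} | responders

print(sample_pattern(f, r, rng))   # e.g. {0, 1, 2}
```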
A surveillance log $d_i$ records the set of overt nodes in a collaborative activity pattern $\delta_i$. It is given by eq.(1). A log $d_i$ is a subset of $O$. The number of logs is $D$. The set $\{d_i\}$ is the whole surveillance log dataset.
$$d_i = \delta_i \cap O \quad (0 \le i < D). \tag{1}$$
Note that neither an individual node nor a single link alone can be observed directly, but nodes can be observed collectively as a collaborative activity pattern. The dataset $\{d_i\}$ can be expressed by a 2-dimensional $D \times N$ matrix of binary variables $d$. The presence or absence of the node $n_j$ in the log $d_i$ is indicated by the elements in eq.(2).
$$d_{ij} = \begin{cases} 1 & \text{if } n_j \in d_i \\ 0 & \text{otherwise} \end{cases} \quad (0 \le i < D,\ 0 \le j < N). \tag{2}$$
Solving the node discovery problem means identifying all the surveillance logs where covert nodes would appear if they became overt. In other words, it means identifying the logs for which $d_i \ne \delta_i$ holds because of the covert nodes belonging to $C$.
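A minimal sketch of building the $D \times N$ binary matrix of eq.(2) from a set of logs is shown below; the node indices and logs are made up for illustration.

```python
import numpy as np

N = 5                                   # overt nodes n_0 .. n_4 (toy example)
logs = [{0, 1, 3}, {2, 4}, {0, 2, 3}]   # assumed surveillance logs d_i, as sets of node indices
D = len(logs)

d = np.zeros((D, N), dtype=int)
for i, log in enumerate(logs):
    d[i, list(log)] = 1                 # d_ij = 1 iff n_j participates in log d_i

print(d)
```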
Solution
Heuristic method
A heuristic method to solve the node discovery problem is studied in [Maeno (2009)]. The method is reviewed briefly.
At first, every node which appears in the dataset $\{d_i\}$ is classified into one of the clusters $c_l$ ($0 \le l < C$). The number of clusters is $C$, which depends on the prior knowledge. Mutually close nodes form a cluster. The closeness between a pair of nodes is evaluated by the Jaccard coefficient [Liben-Nowell (2004)], which is used widely in link discovery, web mining, and text processing. The Jaccard coefficient between the nodes $n$ and $n'$ is defined by eq.(3). The function $B(s)$ in eq.(3) is a Boolean function which returns 1 if the proposition $s$ is true, or 0 otherwise. The operators $\wedge$ and $\vee$ are logical AND and OR.
$$J(n, n') = \frac{\sum_{i=0}^{D-1} B(n \in d_i \wedge n' \in d_i)}{\sum_{i=0}^{D-1} B(n \in d_i \vee n' \in d_i)}. \tag{3}$$
The k-medoids clustering algorithm [Hastie (2001)] is employed for the classification of the nodes. It is an EM (expectation-maximization) style algorithm similar to the k-means algorithm for numerical data. A medoid node is the node located most centrally within a cluster; it corresponds to the center of gravity in the k-means algorithm. The clusters and the medoid nodes are re-calculated iteratively until they converge into a stable structure. The k-medoids clustering algorithm may be substituted by other clustering algorithms such as hierarchical clustering or self-organizing maps.
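The clustering step can be sketched as follows, assuming the Jaccard coefficient of eq.(3) computed from the binary matrix $d$ and a generic k-medoids loop; this is an illustrative sketch, not the implementation used in [Maeno (2009)].

```python
import numpy as np

def jaccard(d):
    """Pairwise Jaccard coefficients of eq.(3) from the D x N binary log matrix d."""
    inter = d.T @ d                                     # co-occurrence counts
    counts = d.sum(axis=0)
    union = counts[:, None] + counts[None, :] - inter
    return np.where(union > 0, inter / np.maximum(union, 1), 0.0)

def k_medoids(dist, C, n_iter=50, seed=0):
    """Plain k-medoids on a precomputed distance matrix (a generic sketch)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=C, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)    # assign each node to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(C):
            members = np.flatnonzero(labels == c)
            if len(members) > 0:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(within)]   # most central member becomes medoid
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# d is a toy binary matrix in the form of eq.(2); 1 - Jaccard serves as the node-to-node distance.
d = np.array([[1, 1, 0, 1, 0], [0, 0, 1, 0, 1], [1, 0, 1, 1, 0]])
labels, medoids = k_medoids(1.0 - jaccard(d), C=2)
print(labels, medoids)
```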
Then, the suspiciousness of every surveillance log $d_i$ as a candidate where the covert nodes would appear is evaluated with a ranking function $s(d_i)$. The ranking function returns a higher value for a more suspicious log. As a preparation, the strength of the correlation between the log $d_i$ and the cluster $c_l$ is defined by $w(d_i, c_l)$ in eq.(4).
$$w(d_i, c_l) = \max_{n_j \in c_l} \frac{B(n_j \in d_i)}{\sum_{i=0}^{D-1} B(n_j \in d_i)}. \tag{4}$$
The ranking function takes $w(d_i, c_l)$ as an input. Various forms of ranking functions can be constructed. For example, [Maeno (2009)] studied a simple form in eq.(5), where the function $u(x)$ returns 1 if the real variable $x$ is positive, or 0 otherwise.
$$s(d_i) \propto \sum_{l=0}^{C-1} u(w(d_i, c_l)) = \sum_{l=0}^{C-1} B(d_i \cap c_l \ne \phi). \tag{5}$$
The $i$-th most suspicious log is given by $d_{\sigma(i)}$, where $\sigma(i)$ is calculated by eq.(6). The suspiciousness $s(d_{\sigma(i)})$ is always at least as large as $s(d_{\sigma(i')})$ for any $i < i'$.
$$\sigma(i) = \arg\max_{m \ne \sigma(n)\ \mathrm{for}\ \forall n < i} s(d_m) \quad (1 \le i \le D). \tag{6}$$
The computational burden of the method remains light as the number of nodes and surveillance logs increases. The method is expected to work well for clustered networks, and to work moderately well even if the network topological and stochastic mechanism that generates the surveillance logs is not understood well. The method works without knowledge of the hub-and-spoke model, that is, the parametric form with $r_{jk}$ and $f_j$ in Section 3. The result, however, cannot be very accurate because of the heuristic nature. A statistical inference method which requires a heavier computational burden but outputs more accurate results is presented next.
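A sketch of the ranking of eqs.(4)-(6), reusing a binary log matrix and a cluster assignment such as the one produced above, is given below; the concrete values are illustrative assumptions.

```python
import numpy as np

def heuristic_ranking(d, labels, C):
    """Rank logs by the heuristic suspiciousness of eqs.(4)-(6) (a sketch of the idea)."""
    appearances = d.sum(axis=0)                         # how often each node appears in the logs
    scores = np.zeros(d.shape[0])
    for i in range(d.shape[0]):
        for c in range(C):
            members = np.flatnonzero(labels == c)
            # w(d_i, c_l) of eq.(4): strongest single-node correlation with the cluster
            w = max((d[i, j] / max(appearances[j], 1) for j in members), default=0.0)
            scores[i] += 1.0 if w > 0 else 0.0          # u(w(d_i, c_l)) of eq.(5)
    order = np.argsort(-scores)                         # sigma(1), sigma(2), ... of eq.(6)
    return scores, order

d = np.array([[1, 1, 0, 1, 0], [0, 0, 1, 0, 1], [1, 0, 1, 1, 0]])
labels = np.array([0, 0, 1, 0, 1])                      # an assumed cluster assignment
scores, order = heuristic_ranking(d, labels, C=2)
print(scores, order)
```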
Statistical inference method
The statistical inference method employs the maximal likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to realize without the covert nodes. The maximal likelihood estimation is a basic statistical method used for fitting a statistical model to data and for providing estimates for the model's parameters. The anomaly detection refers to detecting patterns in a given dataset that do not conform to an established normal behavior.
A single symbol $\theta$ represents both of the parameters $r_{jk}$ and $f_j$ for the nodes in $O$. $\theta$ is the target variable, the value of which needs to be inferred from the surveillance log dataset. The logarithmic likelihood function [Hastie (2001)] is defined by eq.(7). The quantity $p(\{d_i\}|\theta)$ denotes the probability with which the surveillance log dataset $\{d_i\}$ realizes under a given $\theta$.
$$L(\theta) = \log(p(\{d_i\}|\theta)). \tag{7}$$
The individual surveillance logs are assumed to be independent. Eq.(7) then becomes eq.(8).
$$L(\theta) = \log\Big(\prod_{i=0}^{D-1} p(d_i|\theta)\Big) = \sum_{i=0}^{D-1} \log(p(d_i|\theta)). \tag{8}$$
The quantity $q_{i|jk}$ in eq.(9) is the probability with which the presence or absence of the node $n_k$, as a responder to the stimulating node $n_j$, coincides with the surveillance log $d_i$.
$$q_{i|jk} = \begin{cases} r_{jk} & \text{if } d_{ik} = 1 \text{ for given } i \text{ and } j \\ 1 - r_{jk} & \text{otherwise} \end{cases}. \tag{9}$$
Eq.(9) is equivalent to eq.(10) since the value of $d_{ik}$ is either 0 or 1.
$$q_{i|jk} = d_{ik} r_{jk} + (1 - d_{ik})(1 - r_{jk}). \tag{10}$$
The probability $p(d_i|\theta)$ in eq.(8) is expressed by eq.(11).
$$p(d_i|\theta) = \sum_{j=0}^{N-1} d_{ij} f_j \prod_{0 \le k < N \wedge k \ne j} q_{i|jk}. \tag{11}$$
The logarithmic likelihood function then takes the explicit formula in eq.(12). The case $k = j$ in the multiplication ($\prod_k$) is included since $d_{ik}^2 = d_{ik}$ always holds.
$$L(\theta) = \sum_{i=0}^{D-1} \log\Big(\sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\}\Big). \tag{12}$$
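A minimal numpy sketch of evaluating the log likelihood via eqs.(8)-(11) is given below; the toy log matrix and parameter values are assumptions, and the code keeps the product over $k \ne j$ of eq.(11) rather than the compact form of eq.(12).

```python
import numpy as np

def log_likelihood(d, f, r):
    """L(theta) of eqs.(8) and (11): sum_i log sum_j d_ij f_j prod_{k != j} q_{i|jk}."""
    N = d.shape[1]
    # q[i, j, k] = d_ik r_jk + (1 - d_ik)(1 - r_jk), as in eq.(10)
    q = d[:, None, :] * r[None, :, :] + (1.0 - d[:, None, :]) * (1.0 - r[None, :, :])
    q[:, np.arange(N), np.arange(N)] = 1.0                     # exclude k = j from the product
    p = np.sum(d * f[None, :] * np.prod(q, axis=2), axis=1)    # p(d_i | theta), eq.(11)
    return float(np.sum(np.log(p)))

# Toy example with assumed parameter values (not fitted to any real data).
d = np.array([[1, 1, 0], [0, 1, 1]])
f = np.full(3, 1.0 / 3.0)
r = np.array([[0.0, 0.9, 0.1], [0.9, 0.0, 0.8], [0.1, 0.8, 0.0]])
print(log_likelihood(d, f, r))
```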
The maximal likelihood estimator $\hat{\theta}$ is obtained by solving eq.(13). It gives the values of the parameters $r_{jk}$ and $f_j$. A pair of nodes $n_j$ and $n_k$ for which $r_{jk} > 0$ possesses a link between them.
$$\hat{\theta} = \arg\max_{\theta} L(\theta). \tag{13}$$
A simple incremental optimization technique, the hill climbing method (the method of steepest ascent), is employed to solve eq.(13). Non-deterministic methods such as simulated annealing [Hastie (2001)] can be employed to strengthen the search ability and to avoid sub-optimal solutions. These methods search for more optimal parameter values around the present values and update them as in eq.(14) until the values converge.
$$r_{jk} \to r_{jk} + \Delta r_{jk}, \quad f_j \to f_j + \Delta f_j \quad (0 \le j, k < N). \tag{14}$$
The change in the logarithmic likelihood function can be calculated as the product of the derivatives (differential coefficients with regard to $r$ and $f$) and the amount of the updates, as in eq.(15). The updates $\Delta r_{nm}$ and $\Delta f_n$ should be in the direction of the steepest ascent in the landscape of the logarithmic likelihood function.
$$\Delta L(\theta) = \sum_{n,m=0}^{N-1} \frac{\partial L(\theta)}{\partial r_{nm}} \Delta r_{nm} + \sum_{n=0}^{N-1} \frac{\partial L(\theta)}{\partial f_n} \Delta f_n. \tag{15}$$
The derivatives with regard to $r$ are given by eq.(16).
$$\frac{\partial L(\theta)}{\partial r_{nm}} = \sum_{i=0}^{D-1} \left[ f_n d_{in} (2 d_{im} - 1) \prod_{k \ne m} \{1 - d_{ik} + (2 d_{ik} - 1) r_{nk}\} \Big/ \sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\} \right]. \tag{16}$$
The derivatives with regard to $f$ are given by eq.(17).
$$\frac{\partial L(\theta)}{\partial f_n} = \sum_{i=0}^{D-1} \left[ d_{in} \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{nk}\} \Big/ \sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\} \right]. \tag{17}$$
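The update of eq.(14) can be sketched as below. For brevity, finite-difference gradients stand in for the analytical derivatives of eqs.(16) and (17), and $f_j$ is held fixed and uniform as in the evaluation setup of Section 6; this is an illustrative stand-in, not the author's optimizer.

```python
import numpy as np

def log_likelihood(d, f, r):
    """L(theta) of eqs.(8) and (11)."""
    N = d.shape[1]
    q = d[:, None, :] * r[None, :, :] + (1.0 - d[:, None, :]) * (1.0 - r[None, :, :])
    q[:, np.arange(N), np.arange(N)] = 1.0                # exclude k = j from the product
    return float(np.sum(np.log(np.sum(d * f[None, :] * np.prod(q, axis=2), axis=1))))

def hill_climb(d, f, r, step=0.05, eps=1e-6, n_iter=200):
    """Eq.(14)-style updates on r_jk; finite differences stand in for eqs.(16)-(17)."""
    r = r.copy()
    for _ in range(n_iter):
        base = log_likelihood(d, f, r)
        grad = np.zeros_like(r)
        for j in range(r.shape[0]):
            for k in range(r.shape[1]):
                if j != k:
                    r_try = r.copy()
                    r_try[j, k] += eps
                    grad[j, k] = (log_likelihood(d, f, r_try) - base) / eps
        r = np.clip(r + step * grad, 0.001, 0.999)        # move uphill, keep r_jk inside (0, 1)
    return r

d = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])           # toy logs (assumed)
f = np.full(3, 1.0 / 3.0)                                 # f_j held uniform, as in Section 6
r0 = np.full((3, 3), 0.5)
np.fill_diagonal(r0, 0.0)
print(np.round(hill_climb(d, f, r0), 2))
```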
The ranking function $s(d_i)$ is the inverse of the probability with which $d_i$ realizes under the maximal likelihood estimator $\hat{\theta}$. Following the anomaly detection technique, it gives a higher return value to the suspicious surveillance logs which are less likely to realize without the covert nodes. The ranking function is given by eq.(18).
$$s(d_i) = \frac{1}{p(d_i|\hat{\theta})}. \tag{18}$$
The $i$-th most suspicious log is given by $d_{\sigma(i)}$ by the same formula as in eq.(6).
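A sketch of the anomaly-detection ranking of eq.(18) follows; the matrix r_hat stands for an estimated parameter matrix and is filled with assumed values, and the log matrix is a toy example.

```python
import numpy as np

def rank_logs(d, f, r_hat):
    """Rank logs by s(d_i) = 1 / p(d_i | theta_hat), as in eq.(18)."""
    N = d.shape[1]
    q = d[:, None, :] * r_hat[None, :, :] + (1.0 - d[:, None, :]) * (1.0 - r_hat[None, :, :])
    q[:, np.arange(N), np.arange(N)] = 1.0              # exclude k = j, as in eq.(11)
    p = np.sum(d * f[None, :] * np.prod(q, axis=2), axis=1)   # p(d_i | theta_hat)
    scores = 1.0 / p                                    # eq.(18)
    return scores, np.argsort(-scores)                  # sigma(1), sigma(2), ...

d = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])         # toy logs (assumed)
f = np.full(3, 1.0 / 3.0)
r_hat = np.array([[0.0, 0.9, 0.2], [0.9, 0.0, 0.3], [0.2, 0.3, 0.0]])
scores, order = rank_logs(d, f, r_hat)
print(scores, order)        # the first entries of `order` are the most suspicious logs
```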
Test Dataset
Network
Two classes of networks are employed to generate a test dataset for performance evaluation of the two methods. The first class is computationally synthesized networks. The second class is a real clandestine organization.
The networks [A] in Figure 1 and [B] in Figure 2 are synthesized computationally. They are based on the Barabási-Albert model [Barabási (1999)] with a cluster structure. The Barabási-Albert model grows with preferential attachment. The probability with which a newly arriving node $n_k$ connects a link to an existing node $n_j$ is proportional to the nodal degree of $n_j$ ($p(k \to j) \propto K(n_j)$). The occurrence frequency of the nodal degree tends to be scale-free ($F(K) \propto K^a$). In the Barabási-Albert model with a cluster structure, every node $n_j$ is assigned a pre-determined cluster attribute $c(n_j)$ to which it belongs. The number of clusters is $C$. The probability $p(k \to j)$ is modified to eq.(19), where a cluster contrast parameter $\eta$ is introduced. Links between the clusters appear less frequently as $\eta$ increases. The initial links between the clusters are connected at random before growth by preferential attachment starts.
Figure 1: Computationally synthesized network [A], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is $\eta = 50$. The network is relatively more clustered. The node $n_{12}$ is a typical hub node. The node $n_{75}$ is a typical peripheral node.
Figure 2: Computationally synthesized network [B], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is $\eta = 2.5$. The network is relatively less clustered. The node $n_{12}$ is a typical hub node. The node $n_{48}$ is a typical peripheral node.
$$p(k \to j) \propto \begin{cases} \eta (C - 1) K(n_j) & \text{if } c(n_j) = c(n_k) \\ K(n_j) & \text{otherwise} \end{cases}. \tag{19}$$
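A rough sketch of growing such a clustered Barabási-Albert network is given below; the seeding of the initial inter-cluster links and the number of links added per new node (m_links) are simplifications assumed for illustration, not details specified in the text.

```python
import numpy as np

def ba_with_clusters(M=101, C=5, eta=50.0, m_links=2, seed=0):
    """Grow a Barabasi-Albert-style network with the cluster bias of eq.(19) (a sketch)."""
    rng = np.random.default_rng(seed)
    cluster = rng.integers(0, C, size=M)            # pre-determined cluster attributes c(n_j)
    edges = {(j, j + 1) for j in range(C)}          # simplified seed links among the first nodes
    degree = np.zeros(M)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    for k in range(C + 1, M):
        # eq.(19): same-cluster candidates are weighted by eta * (C - 1), others by 1
        weights = np.where(cluster[:k] == cluster[k],
                           eta * (C - 1) * np.maximum(degree[:k], 1),
                           np.maximum(degree[:k], 1))
        targets = rng.choice(k, size=min(m_links, k), replace=False, p=weights / weights.sum())
        for j in targets:
            edges.add((int(j), k))
            degree[j] += 1
            degree[k] += 1
    return edges, cluster

edges, cluster = ba_with_clusters()
print(len(edges), len(cluster))
```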
Hub nodes are those which have a nodal degree larger than the average. The node n 12 in the network [A] in Figure 1 is a typical hub node. Peripheral nodes are those which have a nodal degree smaller than the average. The node n 75 in the network [A] in Figure 1 is a typical peripheral node.
The network in Figure 3 represents a real clandestine organization. It is a global mujahedin organization which was analyzed in [Sageman (2004)]. The mujahedin in the global Salafi jihad means Muslim fighters in Salafism (Sunni Islamic school of thought) who struggle to establish justice on earth. Note that jihad does not necessarily refer to military exertion. The organization consists of 107 persons and 4 regional sub-networks. The sub-networks represent Central Staffs (n CSj ) including the node n ObL , Core Arabs (n CAj ) from the Arabian Peninsula countries and Egypt, Maghreb Arabs (n MAj ) from the North African countries, and Southeast Asians (n SAj ). The network topology is not simply hierarchical. The 4 regional sub-networks are connected mutually in a complex manner.
The node representing Osama bin Laden; n ObL is a hub (K(n ObL ) = 8). He is believed to be the founder of the organization, and said to be the covert leader who provides operational commanders in regional subnetworks with financial support in many terrorism attacks including 9/11 in 2001. His whereabouts are not known despite many efforts in investigation and capture.
The topological characteristics of the above mentioned networks are summarized in Table 1. The global mujahedin organization has a relatively large Gini coefficient of the nodal degree; G = 0.35 and a relatively large average clustering coefficient [Watts (1998)]; W (n j ) = 0.54. In economics, the Gini coefficient is a measure of inequality of income distribution or of wealth distribution. A larger Gini coefficient indicates lower equality. The values mean that the organization possesses hubs and a cluster structure. The values also indicate that the computationally synthesized network [A] is more clustered and close to the global mujahedin organization while the network [B] is less clustered.
Test Dataset
The test dataset {d i } is generated from each network in 5.1 in the 2 steps below.
In the first step, the collaborative activity patterns $\{\delta_i\}$ are generated $D$ times according to the influence transmission under the true value of $\theta$. A pattern includes both an initiator node $n_j$ and multiple responder nodes $n_k$. An example is $\delta_{ex1} = \{n_{CS1}, n_{CS2}, n_{CS6}, n_{CS7}, n_{CS9}, n_{ObL}, n_{CS11}, n_{CS12}, n_{CS14}\}$ for the global mujahedin organization in Figure 3.
Figure 3: Social network representing a global mujahedin (Jihad fighters) organization [Sageman (2004)], which consists of 107 nodes and 4 regional sub-networks. The sub-networks represent Central Staffs ($n_{CSj}$) including the node $n_{ObL}$, Core Arabs ($n_{CAj}$), Maghreb Arabs ($n_{MAj}$), and Southeast Asians ($n_{SAj}$). The node $n_{ObL}$ is Osama bin Laden, who many believe is the founder of the organization.
In the second step, the surveillance log dataset $\{d_i\}$ is generated by deleting the covert nodes belonging to $C$ from the patterns $\{\delta_i\}$. The example $\delta_{ex1}$ results in the surveillance log $d_{ex1} = \delta_{ex1} \cap O = \{n_{CS1}, n_{CS2}, n_{CS6}, n_{CS7}, n_{CS9}, n_{CS11}, n_{CS12}, n_{CS14}\}$ if Osama bin Laden is a covert node; $C = \{n_{ObL}\}$. The covert nodes in $C$ may appear multiple times in the collaborative activity patterns $\{\delta_i\}$. The number of the target logs to identify, $D_t$, is given by eq.(20).
$$D_t = \sum_{i=0}^{D-1} B(d_i \ne \delta_i). \tag{20}$$
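The two generation steps can be sketched as follows under the simplifying assumptions used in the performance evaluation (uniform $f_j$ and $r_{jk} = 1$ on links); the toy network and the choice of the covert node set are assumptions for illustration.

```python
import numpy as np

def make_logs(edges, covert, M, D=100, seed=0):
    """Generate patterns over a given network and drop covert nodes (the two steps above)."""
    rng = np.random.default_rng(seed)
    neighbors = {j: set() for j in range(M)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    patterns = []
    for _ in range(D):
        j = int(rng.integers(M))                        # f_j = 1/M: uniform initiator
        patterns.append({j} | neighbors[j])             # r_jk = 1 on links (Section 6 setup)
    logs = [p - covert for p in patterns]               # d_i = delta_i intersected with O, eq.(1)
    D_t = sum(log != p for log, p in zip(logs, patterns))   # eq.(20): logs touched by covert nodes
    return logs, D_t

edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}        # toy network (assumed)
logs, D_t = make_logs(edges, covert={2}, M=5)
print(D_t, logs[:3])
```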
In the performance evaluation in Section 6, a few assumptions are made for simplicity. The probability $f_j$ does not depend on the nodes ($f_j = 1/M$). The value of the probability $r_{jk}$ is either 1 when a link is present between the nodes, or 0 otherwise. This means that the number of the possible collaborative activity patterns is bounded. The influence transmission is symmetrically bi-directional; $r_{jk} = r_{kj}$.
Performance
Performance measure
Three measures, precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)], are used to evaluate the performance of the methods. They are commonly used in information retrieval tasks such as search, document classification, and query classification. The precision $p$ is the fraction of the retrieved data that is relevant. The recall $r$ is the fraction of the relevant data that is retrieved. The relevant data refers to the logs where $d_i \ne \delta_i$. They are given by eq.(21) and eq.(22). They are functions of the number of the retrieved data $D_r$, which can take a value from 1 to $D$. The data is retrieved in the order $d_{\sigma(1)}, d_{\sigma(2)}, \ldots, d_{\sigma(D_r)}$.
$$p(D_r) = \frac{\sum_{i=1}^{D_r} B(d_{\sigma(i)} \ne \delta_{\sigma(i)})}{D_r}. \tag{21}$$
$$r(D_r) = \frac{\sum_{i=1}^{D_r} B(d_{\sigma(i)} \ne \delta_{\sigma(i)})}{D_t}. \tag{22}$$
The F measure F is the harmonic mean of the precision and recall. It is given by eq.(23).
$$F(D_r) = \frac{1}{\frac{1}{2}\left(\frac{1}{p(D_r)} + \frac{1}{r(D_r)}\right)} = \frac{2\, p(D_r)\, r(D_r)}{p(D_r) + r(D_r)}. \tag{23}$$
The precision, recall, and F measure range from 0 to 1. All the measures take larger values as the performance of retrieval becomes better.
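Given a ranking $\sigma$ and the knowledge of which logs were actually touched by a covert node, eqs.(21)-(23) reduce to a few lines; the example ranking and relevance flags below are made up.

```python
def precision_recall_f(order, relevant, D_r):
    """Eqs.(21)-(23) for the top D_r retrieved logs; `relevant` flags logs with d_i != delta_i."""
    retrieved = order[:D_r]
    hits = sum(relevant[i] for i in retrieved)
    D_t = sum(relevant)
    p = hits / D_r
    r = hits / D_t
    f = 0.0 if hits == 0 else 2 * p * r / (p + r)
    return p, r, f

# Toy example: 5 ranked logs, 2 of which were actually touched by a covert node.
order = [3, 4, 0, 1, 2]                 # sigma(1), sigma(2), ... (an assumed ranking)
relevant = [False, False, False, True, True]
print(precision_recall_f(order, relevant, D_r=2))    # -> (1.0, 1.0, 1.0)
```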
Comparison
The performance of the heuristic method and the statistical inference method is compared with the test dataset generated from the computationally synthesized networks. Figure 4 shows the precision $p(D_r)$ as a function of the rate of the retrieved data to the whole data, $D_r/D$, in case the hub node $n_{12}$ in the computationally synthesized network [A] in Figure 1 is the target covert node to discover, $C = \{n_{12}\}$. The three graphs are for [a] the statistical inference method, [b] the heuristic method ($C = 5$), and [c] the heuristic method ($C = 10$). The number of the surveillance logs in a test dataset is $D = 100$. The broken lines indicate the theoretical limit (the upper bound) and the random retrieval (the lower bound). The vertical solid line indicates the position where $D_r = D_t$. Figure 5 shows the recall $r(D_r)$ as a function of the rate $D_r/D$. Figure 6 shows the F measure $F(D_r)$ as a function of the rate $D_r/D$. The experimental conditions are the same as those for Figure 4. The performance of the heuristic method is moderately good if the number of clusters is known as prior knowledge; otherwise, the performance would be worse. On the other hand, the statistical inference method surpasses the heuristic method and approaches the theoretical limit. Figure 7 shows the F measure $F(D_r)$ as a function of the rate $D_r/D$ in case the hub node $n_{12}$ in the network [B] in Figure 2 is the target covert node to discover. The two graphs are for [a] the statistical inference method and [b] the heuristic method ($C = 5$). The performance of the statistical inference method is still good, while that of the heuristic method becomes worse in a less clustered network. Figure 8 shows the F measure $F(D_r)$ as a function of the rate $D_r/D$ in case the peripheral node $n_{75}$ in the network [A] in Figure 1 is the target covert node to discover. Figure 9 shows the F measure $F(D_r)$ as a function of the rate $D_r/D$ when the peripheral node $n_{48}$ in the network [B] in Figure 2 is the target covert node to discover. The statistical inference method works fine, while the heuristic method fails.
Application
Figure 8: F measure $F(D_r)$ as a function of the rate $D_r/D$ when the peripheral node $n_{75}$ in the computationally synthesized network [A] in Figure 1 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method ($C = 5$).
Figure 9: F measure $F(D_r)$ as a function of the rate $D_r/D$ when the peripheral node $n_{48}$ in the computationally synthesized network [B] in Figure 2 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method ($C = 5$).
Figure 10: F measure $F(D_r)$ as a function of the rate of the retrieved data to the whole data $D_r/D$ when the statistical inference method is applied in case the node $n_{ObL}$ in Figure 3 is the target covert node to discover. $C = \{n_{ObL}\}$. $|C| = 1$. $|O| = 106$. The graph is for the statistical inference method. The broken lines indicate the theoretical limit and the random retrieval. The vertical solid line indicates the position where $D_r = D_t$.
I illustrate how the method aids the investigators in achieving the long-term target of the non-routine responses to the terrorism attacks. Let's assume that the investigators have surveillance logs of the members of the global mujahedin organization, except Osama bin Laden, by the time of the attack. Osama bin Laden does not appear in the logs. This is the assumption that the investigators neither know of the presence of a wire-puller behind the attack nor recognize Osama bin Laden at the time of the attack.
The situation is simulated computationally like the problems addressed in the comparison above. In this case, the node $n_{ObL}$ in Figure 3 is the target covert node to discover, $C = \{n_{ObL}\}$. Figure 10 shows $F(D_r)$ as a function of the rate of the retrieved data to the whole data, $D_r/D$, when the statistical inference method is applied. The result is close to the theoretical limit. The most suspicious surveillance log $d_{\sigma(1)}$ includes all and only the neighbor nodes $n_{CS1}$, $n_{CS2}$, $n_{CS6}$, $n_{CS7}$, $n_{CS9}$, $n_{CS11}$, $n_{CS12}$, and $n_{CS14}$. This encourages the investigators to take an action to investigate an unknown wire-puller near these 8 neighbors, the most suspicious close associates. The investigators will decide to collect more detailed information on the suspicious neighbors. It may result in approaching and finally capturing the covert wire-puller responsible for the attack.
The method, however, fails to identify two suspicious records δ fl1 ={n ObL , n CS11 } and δ fl2 = {n ObL , n CS12 }. These nodes have a small nodal degree; K(n CS11 ) = 1 and K(n CS12 ) = 1. This shows that the surveillance logs on the nodes having small nodal degree do not provide the investigators with much clues for the covert nodes.
Conclusion
In this paper, I define the node discovery problem for a social network and present methods to solve the problem. The statistical inference method employs the maximal likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to realize without the covert nodes. The precision, recall, and F measure characteristics are close to the theoretical limit for the discovery of the covert nodes in both the computationally synthesized networks and the real clandestine organization. In the investigation of a clandestine organization, the method aids the investigators in identifying the close associates and approaching a covert leader or a critical conspirator.
The node discovery problem is encountered in many areas of business and social sciences. For example, in addition to the analysis of a clandestine organization, the method contributes to detecting an individual employee who transmits the influence to colleagues but whose catalytic role is not recognized by company managers; such a discovery may be critical in reorganizing a company structure.
I plan to address two issues in future work. The first issue is to extend the hub-and-spoke model for the influence transmission. The model represents the radial transmission from an initiating node toward multiple responder nodes. Other types of influence transmission are present in many real social networks; examples are a serial chain-shaped influence transmission model and a tree-like influence transmission model. The second issue is to develop methods to solve variants of the node discovery problem. Discovering fake nodes or spoofing nodes is also an interesting problem for uncovering the malicious intentions of an organization. A fake node is a person who does not exist in the organization, but appears in the surveillance logs. A spoofing node is a person who belongs to the organization, but appears as a different node in the surveillance logs.
| 4,867 |
0710.4975
|
1493695992
|
Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to the nodes which are not observable directly. They transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes is identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.
|
In addition to the link discovery, the related research topics are the exploration of an unknown network structure @cite_19 , the discovery of a community structure @cite_15 , the inference of a network topology @cite_16 , the detection of an anomaly in a network @cite_8 , and the discovery of unknown nodes @cite_7 , @cite_20 . Stochastic modeling to predict terrorism attacks @cite_5 is relevant practically. The idea of machine learning of latent variables @cite_26 is potentially applicable to discovering an unknown network structure.
|
{
"abstract": [
"We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.",
"Experts of chance discovery have recognized a new class of problems where the previous methods fail to visualize a latent structure behind observation. There are invisible events that play an important role in the dynamics of visible events. An invisible leader in a communication network is a typical example. Such an event is named a dark event. A novel technique has been proposed to understand a dark event and to extend the process of chance discovery. This paper presents a new method named \"human-computer interactive annealing\" for revealing latent structures along with the algorithm for discovering dark events. Demonstration using test data generated from a scale-free network shows that the precision regarding the algorithm ranges from 80 to 90 . An experiment on discovering an invisible leader under an online collective decision-making circumstance is successful",
"This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational expectation-maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the false discovery rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.",
"Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation–maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks.",
"The ability of terrorist networks to conduct sophisticated and simultaneous attacks - the most recent one on March 11, 2004 in Madrid, Spain - suggests that there is a significant need for developing information technology tools for counter-terrorism analysis. These technologies could empower intelligence analysts to find information faster, share, and collaborate across agencies, \"connect the dots\" better, and conduct quicker and better analyses. One such technology, the adaptive safety analysis and monitoring (ASAM) system, is under development at the University of Connecticut. In this paper, the ASAM system is introduced and its capabilities are discussed. The vulnerabilities at the Athens 2004 Olympics are modeled and patterns of anomalous behavior are identified using a combination of feature-aided multiple target tracking, hidden Markov models (HMMs), and Bayesian networks (BNs). Functionality of the ASAM system is illustrated by way of application to two hypothetical models of terrorist activities at the Athens 2004 Olympics.",
"A network is a network — be it between words (those associated with ‘bright’ in this case) or protein structures. Many complex systems in nature and society can be described in terms of networks capturing the intricate web of connections among the units they are made of1,2,3,4. A key question is how to interpret the global organization of such networks as the coexistence of their structural subunits (communities) associated with more highly interconnected parts. Identifying these a priori unknown building blocks (such as functionally related proteins5,6, industrial sectors7 and groups of people8,9) is crucial to the understanding of the structural and functional properties of networks. The existing deterministic methods used for large networks find separated communities, whereas most of the actual networks are made of highly overlapping cohesive groups of nodes. Here we introduce an approach to analysing the main statistical features of the interwoven sets of overlapping communities that makes a step towards uncovering the modular structure of complex systems. After defining a set of new characteristic quantities for the statistics of communities, we apply an efficient technique for exploring overlapping communities on a large scale. We find that overlaps are significant, and the distributions we introduce reveal universal features of networks. Our studies of collaboration, word-association and protein interaction graphs show that the web of communities has non-trivial correlations and specific scaling properties.",
"The discovery of networks is a fundamental problem arising in numerous fields of science and technology, including communication systems, biology, sociology, and neuroscience. Unfortunately, it is often difficult, or impossible, to obtain data that directly reveal network structure, and so one must infer a network from incomplete data. This paper considers inferring network structure from \"co-occurrence\" data: observations that identify which network components (e.g., switches, routers, genes) carry each transmission but do not indicate the order in which they handle the transmission. Without order information, the number of networks that are consistent with the data grows exponentially with the size of the network (i.e., the number of nodes). Yet, the basic engineering evolutionary principles underlying most networks strongly suggest that not all data-consistent networks are equally likely. In particular, nodes that co-occur in many observations are probably closely connected. With this in mind, we model the co-occurrence observations as independent realizations of a random walk on the network, subjected to a random permutation to account for the lack of order information. Treating permutations as missing data, we derive an expectation-maximization (EM) algorithm for estimating the random walk parameters. The model and EM algorithm significantly simplify the problem, but the computational complexity of the reconstruction process does grow exponentially in the length of each transmission path. For networks with long paths, the exact e-step may be computationally intractable. We propose a polynomial-time Monte Carlo EM algorithm based on importance sampling and derive conditions that ensure convergence of the algorithm with high probability. Simulations and experiments with Internet measurements demonstrate the promise of this approach.",
"This paper addresses a method to analyse the covert social network foundation hidden behind the terrorism disaster. It is to solve a node discovery problem, which means to discover a node, which functions relevantly in a social network, but escaped from monitoring on the presence and mutual relationship of nodes. The method aims at integrating the expert investigator's prior understanding, insight on the terrorists' social network nature derived from the complex graph theory and computational data processing. The social network responsible for the 9 11 attack in 2001 is used to execute simulation experiment to evaluate the performance of the method."
],
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"2137099275",
"2167819790",
"2133874182",
"2139818818",
"2171369953",
"2164928285",
"2106335950",
"1970857195"
]
}
|
Node discovery problem for a social network
|
Covert nodes refer to persons who transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the activities. The covert nodes are not observable directly. Identifying the suspicious surveillance logs where the covert nodes would appear if they became overt aids us in discovering and approaching the covert nodes. I call this problem a node discovery problem for a social network.
Where do we encounter such a problem? Globally networked clandestine organizations such as terrorists, criminals, or drug smugglers are a great threat to civilized societies [Sageman (2004)]. Terrorism attacks cause great economic, social, and environmental damage. Active non-routine responses to the attacks are necessary, as well as damage recovery management. The short-term target of the responses is the arrest of the perpetrators. The long-term target of the responses is identifying and dismantling the covert organizational foundation which raises, encourages, and helps the perpetrators. The threat will be mitigated and eliminated by discovering the covert leaders and critical conspirators of the clandestine organizations. The difficulty of such discovery lies in the limited capability of surveillance. Information on the leaders and critical conspirators is missing because it is usually hidden intentionally by the organization.
Let me show an example from the 9/11 terrorist attack in 2001 [Krebs (2002)]. Mustafa A. Al-Hisawi, whose alternate name was Mustafa Al-Hawsawi, was alleged to be a wire-puller who had acted as a financial manager of Al Qaeda. He had attempted to help terrorists enter the United States, and provided the hijackers of the 4 aircraft with financial support worth more than 300,000 dollars. Furthermore, Osama bin Laden is suspected to be a wire-puller behind Mustafa A. Al-Hisawi and the conspirators behind the hijackers. These persons were not recognized as wire-pullers at the time of the attack. They were the nodes to discover from the information on the collaborative activities of the perpetrators and conspirators known at that moment.
In this paper, I present two methods to solve the node discovery problem. One is a heuristic method in [Maeno (2009)], which demonstrates a simulation experiment of the node discovery problem for the social network of the 9/11 perpetrators. The other is a statistical inference method which I propose in this paper. The method employs the maximal likelihood estimation and an anomaly detection technique. Section 3 defines the node discovery problem mathematically. Section 4 presents the two methods. Section 5 introduces the test dataset generated from computationally synthesized networks and a real clandestine organization. Section 6 demonstrates the performance characteristics of the methods (precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)]). Section 7 presents the issues and future perspectives as concluding remarks. Section 2 summarizes the related works.
Problem definition
The node discovery problem is defined mathematically in this section. A node represents a person in a social network. A link represents a relationship which transmits the influence between persons. The symbols $n_j$ ($j = 0, 1, \cdots$) represent the nodes. Some nodes are overt (observable), but the others are covert (unobservable). $O$ denotes the set of overt nodes $\{n_0, n_1, \cdots, n_{N-1}\}$; its cardinality is $|O| = N$. $C = \bar{O}$ denotes the set of covert nodes $\{n_N, n_{N+1}, \cdots, n_{M-1}\}$; its cardinality is $|C| = M - N$. The whole set of nodes in the social network is $O \cup C$, and the number of nodes is $M$. The unobservability of the covert nodes arises either from a technical defect of the surveillance means or from an intentional cover-up operation.
The symbol $\delta_i$ represents the set of participants in a particular collaborative activity. It is the $i$-th activity pattern among the nodes. A pattern $\delta_i$ is a set of nodes; $\delta_i$ is a subset of $O \cup C$. For example, the nodes in a collaborative activity pattern are those who joined a particular conference call. That is, a pattern is a co-occurrence among the nodes [Rabbat (2008)]. The unobservability of the covert nodes does not affect the activity patterns themselves.
A simple hub-and-spoke model is assumed as the model of the influence transmission over the links that results in the collaborative activities among the nodes. The way the influence is transmitted governs the set of possible activity patterns $\{\delta_i\}$. The network topology and the influence transmission are described by probability parameters. The probability with which the influence transmits from an initiating node $n_j$ to a responder node $n_k$ is $r_{jk}$. The influence transmits to multiple responders independently in parallel. It is similar to the degree of collaboration probability in trust modeling [Lavrac (2007)]. The constraints are $0 \le r_{jk}$ and $\sum_{k \ne j} r_{jk} \le 1$. The quantity $f_j$ is the probability with which the node $n_j$ becomes an initiator. The constraints are $0 \le f_j$ and $\sum_j f_j = 1$. These parameters are defined for all the nodes in the social network (both the nodes in $O$ and those in $C$).
A surveillance log $d_i$ records the set of overt nodes in a collaborative activity pattern $\delta_i$. It is given by eq.(1). A log $d_i$ is a subset of $O$. The number of logs is $D$. The set $\{d_i\}$ is the whole surveillance log dataset.
$$d_i = \delta_i \cap O \quad (0 \le i < D). \tag{1}$$
Note that neither an individual node nor a single link alone can be observed directly, but nodes can be observed collectively as a collaborative activity pattern. The dataset $\{d_i\}$ can be expressed by a 2-dimensional $D \times N$ matrix of binary variables $d$. The presence or absence of the node $n_j$ in the log $d_i$ is indicated by the elements in eq.(2).
$$d_{ij} = \begin{cases} 1 & \text{if } n_j \in d_i \\ 0 & \text{otherwise} \end{cases} \quad (0 \le i < D,\ 0 \le j < N). \tag{2}$$
Solving the node discovery problem means identifying all the surveillance logs where covert nodes would appear if they became overt. In other words, it means identifying the logs for which $d_i \ne \delta_i$ holds because of the covert nodes belonging to $C$.
Solution
Heuristic method
A heuristic method to solve the node discovery problem is studied in [Maeno (2009)]. The method is reviewed briefly.
At first, every node which appears in the dataset $\{d_i\}$ is classified into one of the clusters $c_l$ ($0 \le l < C$). The number of clusters is $C$, which depends on the prior knowledge. Mutually close nodes form a cluster. The closeness between a pair of nodes is evaluated by the Jaccard coefficient [Liben-Nowell (2004)], which is used widely in link discovery, web mining, and text processing. The Jaccard coefficient between the nodes $n$ and $n'$ is defined by eq.(3). The function $B(s)$ in eq.(3) is a Boolean function which returns 1 if the proposition $s$ is true, or 0 otherwise. The operators $\wedge$ and $\vee$ are logical AND and OR.
$$J(n, n') = \frac{\sum_{i=0}^{D-1} B(n \in d_i \wedge n' \in d_i)}{\sum_{i=0}^{D-1} B(n \in d_i \vee n' \in d_i)}. \tag{3}$$
The k-medoids clustering algorithm [Hastie (2001)] is employed for the classification of the nodes. It is an EM (expectation-maximization) style algorithm similar to the k-means algorithm for numerical data. A medoid node is the node located most centrally within a cluster; it corresponds to the center of gravity in the k-means algorithm. The clusters and the medoid nodes are re-calculated iteratively until they converge into a stable structure. The k-medoids clustering algorithm may be substituted by other clustering algorithms such as hierarchical clustering or self-organizing maps.
Then, the suspiciousness of every surveillance log $d_i$ as a candidate where the covert nodes would appear is evaluated with a ranking function $s(d_i)$. The ranking function returns a higher value for a more suspicious log. As a preparation, the strength of the correlation between the log $d_i$ and the cluster $c_l$ is defined by $w(d_i, c_l)$ in eq.(4).
$$w(d_i, c_l) = \max_{n_j \in c_l} \frac{B(n_j \in d_i)}{\sum_{i=0}^{D-1} B(n_j \in d_i)}. \tag{4}$$
The ranking function takes $w(d_i, c_l)$ as an input. Various forms of ranking functions can be constructed. For example, [Maeno (2009)] studied a simple form in eq.(5), where the function $u(x)$ returns 1 if the real variable $x$ is positive, or 0 otherwise.
$$s(d_i) \propto \sum_{l=0}^{C-1} u(w(d_i, c_l)) = \sum_{l=0}^{C-1} B(d_i \cap c_l \ne \phi). \tag{5}$$
The $i$-th most suspicious log is given by $d_{\sigma(i)}$, where $\sigma(i)$ is calculated by eq.(6). The suspiciousness $s(d_{\sigma(i)})$ is always at least as large as $s(d_{\sigma(i')})$ for any $i < i'$.
$$\sigma(i) = \arg\max_{m \ne \sigma(n)\ \mathrm{for}\ \forall n < i} s(d_m) \quad (1 \le i \le D). \tag{6}$$
The computational burden of the method remains light as the number of nodes and surveillance logs increases. The method is expected to work well for clustered networks, and to work moderately well even if the network topological and stochastic mechanism that generates the surveillance logs is not understood well. The method works without knowledge of the hub-and-spoke model, that is, the parametric form with $r_{jk}$ and $f_j$ in Section 3. The result, however, cannot be very accurate because of the heuristic nature. A statistical inference method which requires a heavier computational burden but outputs more accurate results is presented next.
Statistical inference method
The statistical inference method employs the maximal likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to realize without the covert nodes. The maximal likelihood estimation is a basic statistical method used for fitting a statistical model to data and for providing estimates for the model's parameters. The anomaly detection refers to detecting patterns in a given dataset that do not conform to an established normal behavior.
A single symbol $\theta$ represents both of the parameters $r_{jk}$ and $f_j$ for the nodes in $O$. $\theta$ is the target variable, the value of which needs to be inferred from the surveillance log dataset. The logarithmic likelihood function [Hastie (2001)] is defined by eq.(7). The quantity $p(\{d_i\}|\theta)$ denotes the probability with which the surveillance log dataset $\{d_i\}$ realizes under a given $\theta$.
$$L(\theta) = \log(p(\{d_i\}|\theta)). \tag{7}$$
The individual surveillance logs are assumed to be independent. Eq.(7) then becomes eq.(8).
$$L(\theta) = \log\Big(\prod_{i=0}^{D-1} p(d_i|\theta)\Big) = \sum_{i=0}^{D-1} \log(p(d_i|\theta)). \tag{8}$$
The quantity $q_{i|jk}$ in eq.(9) is the probability with which the presence or absence of the node $n_k$, as a responder to the stimulating node $n_j$, coincides with the surveillance log $d_i$.
$$q_{i|jk} = \begin{cases} r_{jk} & \text{if } d_{ik} = 1 \text{ for given } i \text{ and } j \\ 1 - r_{jk} & \text{otherwise} \end{cases}. \tag{9}$$
Eq.(9) is equivalent to eq.(10) since the value of $d_{ik}$ is either 0 or 1.
$$q_{i|jk} = d_{ik} r_{jk} + (1 - d_{ik})(1 - r_{jk}). \tag{10}$$
The probability $p(d_i|\theta)$ in eq.(8) is expressed by eq.(11).
$$p(d_i|\theta) = \sum_{j=0}^{N-1} d_{ij} f_j \prod_{0 \le k < N \wedge k \ne j} q_{i|jk}. \tag{11}$$
The logarithmic likelihood function then takes the explicit formula in eq.(12). The case $k = j$ in the multiplication ($\prod_k$) is included since $d_{ik}^2 = d_{ik}$ always holds.
$$L(\theta) = \sum_{i=0}^{D-1} \log\Big(\sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\}\Big). \tag{12}$$
The maximal likelihood estimator $\hat{\theta}$ is obtained by solving eq.(13). It gives the values of the parameters $r_{jk}$ and $f_j$. A pair of nodes $n_j$ and $n_k$ for which $r_{jk} > 0$ possesses a link between them.
$$\hat{\theta} = \arg\max_{\theta} L(\theta). \tag{13}$$
A simple incremental optimization technique, the hill climbing method (the method of steepest ascent), is employed to solve eq.(13). Non-deterministic methods such as simulated annealing [Hastie (2001)] can be employed to strengthen the search ability and to avoid sub-optimal solutions. These methods search for more optimal parameter values around the present values and update them as in eq.(14) until the values converge.
$$r_{jk} \to r_{jk} + \Delta r_{jk}, \quad f_j \to f_j + \Delta f_j \quad (0 \le j, k < N). \tag{14}$$
The change in the logarithmic likelihood function can be calculated as the product of the derivatives (differential coefficients with regard to $r$ and $f$) and the amount of the updates, as in eq.(15). The updates $\Delta r_{nm}$ and $\Delta f_n$ should be in the direction of the steepest ascent in the landscape of the logarithmic likelihood function.
$$\Delta L(\theta) = \sum_{n,m=0}^{N-1} \frac{\partial L(\theta)}{\partial r_{nm}} \Delta r_{nm} + \sum_{n=0}^{N-1} \frac{\partial L(\theta)}{\partial f_n} \Delta f_n. \tag{15}$$
The derivatives with regard to $r$ are given by eq.(16).
$$\frac{\partial L(\theta)}{\partial r_{nm}} = \sum_{i=0}^{D-1} \left[ f_n d_{in} (2 d_{im} - 1) \prod_{k \ne m} \{1 - d_{ik} + (2 d_{ik} - 1) r_{nk}\} \Big/ \sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\} \right]. \tag{16}$$
The derivatives with regard to $f$ are given by eq.(17).
$$\frac{\partial L(\theta)}{\partial f_n} = \sum_{i=0}^{D-1} \left[ d_{in} \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{nk}\} \Big/ \sum_{j=0}^{N-1} d_{ij} f_j \prod_{k=0}^{N-1} \{1 - d_{ik} + (2 d_{ik} - 1) r_{jk}\} \right]. \tag{17}$$
The ranking function $s(d_i)$ is the inverse of the probability with which $d_i$ realizes under the maximal likelihood estimator $\hat{\theta}$. Following the anomaly detection technique, it gives a higher return value to the suspicious surveillance logs which are less likely to realize without the covert nodes. The ranking function is given by eq.(18).
$$s(d_i) = \frac{1}{p(d_i|\hat{\theta})}. \tag{18}$$
The $i$-th most suspicious log is given by $d_{\sigma(i)}$ by the same formula as in eq.(6).
Test Dataset
Network
Two classes of networks are employed to generate a test dataset for performance evaluation of the two methods. The first class is computationally synthesized networks. The second class is a real clandestine organization.
The networks [A] in Figure 1 and [B] in Figure 2 are synthesized computationally. They are based on the Barabási-Albert model [Barabási (1999)] with a cluster structure. The Barabási-Albert model grows with preferential attachment. The probability with which a newly arriving node $n_k$ connects a link to an existing node $n_j$ is proportional to the nodal degree of $n_j$ ($p(k \to j) \propto K(n_j)$). The occurrence frequency of the nodal degree tends to be scale-free ($F(K) \propto K^a$). In the Barabási-Albert model with a cluster structure, every node $n_j$ is assigned a pre-determined cluster attribute $c(n_j)$ to which it belongs. The number of clusters is $C$. The probability $p(k \to j)$ is modified to eq.(19), where a cluster contrast parameter $\eta$ is introduced. Links between the clusters appear less frequently as $\eta$ increases. The initial links between the clusters are connected at random before growth by preferential attachment starts.
Figure 1: Computationally synthesized network [A], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is $\eta = 50$. The network is relatively more clustered. The node $n_{12}$ is a typical hub node. The node $n_{75}$ is a typical peripheral node.
Figure 2: Computationally synthesized network [B], which consists of 101 nodes and 5 clusters. The cluster contrast parameter is $\eta = 2.5$. The network is relatively less clustered. The node $n_{12}$ is a typical hub node. The node $n_{48}$ is a typical peripheral node.
$$p(k \to j) \propto \begin{cases} \eta (C - 1) K(n_j) & \text{if } c(n_j) = c(n_k) \\ K(n_j) & \text{otherwise} \end{cases}. \tag{19}$$
Hub nodes are those which have a nodal degree larger than the average. The node n 12 in the network [A] in Figure 1 is a typical hub node. Peripheral nodes are those which have a nodal degree smaller than the average. The node n 75 in the network [A] in Figure 1 is a typical peripheral node.
The network in Figure 3 represents a real clandestine organization. It is a global mujahedin organization which was analyzed in [Sageman (2004)]. The mujahedin in the global Salafi jihad means Muslim fighters in Salafism (Sunni Islamic school of thought) who struggle to establish justice on earth. Note that jihad does not necessarily refer to military exertion. The organization consists of 107 persons and 4 regional sub-networks. The sub-networks represent Central Staffs (n CSj ) including the node n ObL , Core Arabs (n CAj ) from the Arabian Peninsula countries and Egypt, Maghreb Arabs (n MAj ) from the North African countries, and Southeast Asians (n SAj ). The network topology is not simply hierarchical. The 4 regional sub-networks are connected mutually in a complex manner.
The node representing Osama bin Laden, n_ObL, is a hub (K(n_ObL) = 8). He is believed to be the founder of the organization, and is said to be the covert leader who provides operational commanders in the regional sub-networks with financial support in many terrorist attacks, including 9/11 in 2001. His whereabouts remain unknown despite many investigation and capture efforts.
The topological characteristics of the above mentioned networks are summarized in Table 1. The global mujahedin organization has a relatively large Gini coefficient of the nodal degree; G = 0.35 and a relatively large average clustering coefficient [Watts (1998)]; W (n j ) = 0.54. In economics, the Gini coefficient is a measure of inequality of income distribution or of wealth distribution. A larger Gini coefficient indicates lower equality. The values mean that the organization possesses hubs and a cluster structure. The values also indicate that the computationally synthesized network [A] is more clustered and close to the global mujahedin organization while the network [B] is less clustered.
Test Dataset
The test dataset {d_i} is generated from each network in Section 5.1 in the two steps below.
In the first step, the collaborative activity patterns {δ_i} are generated D times according to the influence transmission under the true value of θ. A pattern includes both an initiator node n_j and multiple responder nodes n_k. An example is δ_ex1 = {n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_ObL, n_CS11, n_CS12, n_CS14} for the global mujahedin organization in Figure 3.
Figure 3: Social network representing a global mujahedin (jihad fighters) organization [Sageman (2004)], which consists of 107 nodes and 4 regional sub-networks. The sub-networks represent Central Staffs (n_CSj) including the node n_ObL, Core Arabs (n_CAj), Maghreb Arabs (n_MAj), and Southeast Asians (n_SAj). The node n_ObL is Osama bin Laden, who many believe is the founder of the organization.
In the second step, the surveillance log dataset {d_i} is generated by deleting the covert nodes belonging to C from the patterns {δ_i}. The example δ_ex1 results in the surveillance log d_ex1 = δ_ex1 \ C = {n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_CS11, n_CS12, n_CS14} if Osama bin Laden is a covert node; C = {n_ObL}. The covert node in C may appear multiple times in the collaborative activity patterns {δ_i}. The number of target logs to identify, D_t, is given by eq. (20).
D_t = \sum_{i=0}^{D-1} B(d_i \neq \delta_i).   (20)
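The two generation steps can be sketched as follows in Python; the deterministic transmission to all neighbors mirrors the simplified setting described in the next paragraph (r_jk = 1 on links), and the toy adjacency structure and covert set are illustrative.

    import random

    def generate_dataset(adjacency, covert, n_logs, seed=0):
        # step 1: draw collaborative activity patterns (initiator plus its responders)
        # step 2: delete covert nodes to obtain the surveillance logs; D_t as in eq. (20)
        rng = random.Random(seed)
        nodes = list(adjacency)
        patterns, logs = [], []
        for _ in range(n_logs):
            initiator = rng.choice(nodes)                     # f_j uniform over nodes
            delta = {initiator} | set(adjacency[initiator])   # r_jk = 1 on every link
            patterns.append(delta)
            logs.append(delta - covert)                       # covert nodes never observed
        D_t = sum(1 for d, delta in zip(logs, patterns) if d != delta)
        return logs, patterns, D_t

    # toy network in which node 'X' is covert and linked to 'a' and 'b'
    adjacency = {'a': ['b', 'c', 'X'], 'b': ['a', 'X'], 'c': ['a'], 'X': ['a', 'b']}
    logs, patterns, D_t = generate_dataset(adjacency, covert={'X'}, n_logs=10)
    print(D_t, logs[0])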
In the performance evaluation in Section 6, a few assumptions are made for simplicity. The probability f_j does not depend on the nodes (f_j = 1/M). The value of the probability r_jk is either 1 when a link is present between the nodes, or 0 otherwise. This means that the number of possible collaborative activity patterns is bounded. The influence transmission is symmetrically bi-directional; r_jk = r_kj.
Performance
Performance measure
Three measures, precision, recall, and van Rijsbergen's F measure [Korfhuge (1997)], are used to evaluate the performance of the methods. They are commonly used in information retrieval tasks such as search, document classification, and query classification. The precision p is the fraction of the retrieved data that are relevant. The recall r is the fraction of the relevant data that are retrieved. The relevant data refers to the data where d_i ≠ δ_i. They are given by eq. (21) and eq. (22), and are functions of the number of retrieved data D_r, which can take values from 1 to D. The data is retrieved in the order d_σ(1), d_σ(2), ..., d_σ(D_r).
p(D_r) = \frac{\sum_{i=1}^{D_r} B(d_{\sigma(i)} \neq \delta_{\sigma(i)})}{D_r},   (21)
r(D_r) = \frac{\sum_{i=1}^{D_r} B(d_{\sigma(i)} \neq \delta_{\sigma(i)})}{D_t}.   (22)
The F measure F is the harmonic mean of the precision and recall. It is given by eq.(23).
F(D_r) = \frac{1}{\frac{1}{2}\left(\frac{1}{p(D_r)} + \frac{1}{r(D_r)}\right)} = \frac{2\, p(D_r)\, r(D_r)}{p(D_r) + r(D_r)}.   (23)
The precision, recall, and F measure range from 0 to 1. All the measures take larger values as the performance of retrieval becomes better.
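A direct transcription of eqs. (21)-(23) as a Python sketch; the logs are assumed to be passed in the retrieved order d_σ(1), d_σ(2), ..., and a log counts as relevant when it differs from its original pattern.

    def precision_recall_f(ranked_logs, ranked_patterns, D_r):
        # p(D_r), r(D_r) and F(D_r) for the top-D_r retrieved logs (eqs. (21)-(23))
        D_t = sum(1 for d, delta in zip(ranked_logs, ranked_patterns) if d != delta)
        hits = sum(1 for d, delta in zip(ranked_logs[:D_r], ranked_patterns[:D_r]) if d != delta)
        p = hits / D_r
        r = hits / D_t if D_t else 0.0
        f = 2 * p * r / (p + r) if (p + r) else 0.0
        return p, r, f

    # toy usage: two of four logs lost a covert node, and the ranking puts them first
    logs     = [{'a', 'b'}, {'b'}, {'a', 'c'}, {'c'}]
    patterns = [{'a', 'b', 'X'}, {'b', 'X'}, {'a', 'c'}, {'c'}]
    print(precision_recall_f(logs, patterns, D_r=2))   # -> (1.0, 1.0, 1.0)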
Comparison
The performance of the heuristic method and the statistical inference method is compared with the test dataset generated from the computationally synthesized networks. Figure 4 shows the precision p(D_r) as a function of the rate of the retrieved data to the whole data D_r/D in the case where the hub node n_12 in the computationally synthesized network [A] in Figure 1 is the target covert node to discover, C = {n_12}. The three graphs are for [a] the statistical inference method, [b] the heuristic method (C = 5), and [c] the heuristic method (C = 10). The number of surveillance logs in a test dataset is D = 100. The broken lines indicate the theoretical limit (the upper bound) and the random retrieval (the lower bound). The vertical solid line indicates the position where D_r = D_t. Figure 5 shows the recall r(D_r) as a function of the rate D_r/D. Figure 6 shows the F measure F(D_r) as a function of the rate D_r/D. The experimental conditions are the same as those for Figure 4. The performance of the heuristic method is moderately good if the number of clusters is known as prior knowledge; otherwise, the performance would be worse. The statistical inference method, on the other hand, surpasses the heuristic method and approaches the theoretical limit.
Figure 7 shows the F measure F(D_r) as a function of the rate D_r/D in the case where the hub node n_12 in the network [B] in Figure 2 is the target covert node to discover. The two graphs are for [a] the statistical inference method and [b] the heuristic method (C = 5). The performance of the statistical inference method is still good, while that of the heuristic method becomes worse in a less clustered network. Figure 8 shows the F measure F(D_r) as a function of the rate D_r/D in the case where the peripheral node n_75 in the network [A] in Figure 1 is the target covert node to discover. Figure 9 shows the F measure F(D_r) as a function of the rate D_r/D when the peripheral node n_48 in the network [B] in Figure 2 is the target covert node to discover. The statistical inference method works well, while the heuristic method fails.
Application
I illustrate how the method aids the investigators in achieving the long-term target of the non-routine responses to terrorism attacks. Let us assume that the investigators have surveillance logs of the members of the global mujahedin organization, except Osama bin Laden, by the time of the attack. Osama bin Laden does not appear in the logs. This is the assumption that the investigators neither know of the presence of a wire-puller behind the attack nor recognize Osama bin Laden at the time of the attack.
Figure 8: F measure F(D_r) as a function of the rate D_r/D when the peripheral node n_75 in the computationally synthesized network [A] in Figure 1 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method (C = 5).
Figure 9: F measure F(D_r) as a function of the rate D_r/D when the peripheral node n_48 in the computationally synthesized network [B] in Figure 2 is the target covert node to discover. Two graphs are for [a] the statistical inference method, and [b] the heuristic method (C = 5).
Figure 10: F measure F(D_r) as a function of the rate of the retrieved data to the whole data D_r/D when the statistical inference method is applied in the case where the node n_ObL in Figure 3 is the target covert node to discover. C = {n_ObL}. |C| = 1. |O| = 106. The graph is for the statistical inference method. The broken lines indicate the theoretical limit and the random retrieval. The vertical solid line indicates the position where D_r = D_t.
The situation is simulated computationally like the problems addressed in Section 6.2. In this case, the node n_ObL in Figure 3 is the target covert node to discover, C = {n_ObL}. Figure 10 shows F(D_r) as a function of the rate of the retrieved data to the whole data D_r/D when the statistical inference method is applied. The result is close to the theoretical limit. The most suspicious surveillance log d_σ(1) includes all and only the neighbor nodes n_CS1, n_CS2, n_CS6, n_CS7, n_CS9, n_CS11, n_CS12, and n_CS14. This encourages the investigators to take action to investigate an unknown wire-puller near these 8 neighbors, the most suspicious close associates. The investigators will decide to collect more detailed information on the suspicious neighbors. It may result in approaching and finally capturing the covert wire-puller responsible for the attack.
The method, however, fails to identify two suspicious records δ_fl1 = {n_ObL, n_CS11} and δ_fl2 = {n_ObL, n_CS12}. These nodes have a small nodal degree: K(n_CS11) = 1 and K(n_CS12) = 1. This shows that surveillance logs on nodes having a small nodal degree do not provide the investigators with many clues about the covert nodes.
Conclusion
In this paper, I define the node discovery problem for a social network and present methods to solve the problem. The statistical inference method employs maximum likelihood estimation to infer the topology of the network, and applies an anomaly detection technique to retrieve the suspicious surveillance logs which are not likely to be realized without the covert nodes. The precision, recall, and F measure characteristics are close to the theoretical limit for the discovery of the covert nodes in computationally synthesized networks and a real clandestine organization. In the investigation of a clandestine organization, the method aids the investigators in identifying the close associates and approaching a covert leader or a critical conspirator.
The node discovery problem is encountered in many areas of business and social sciences. For example, in addition to the analysis of a clandestine organization, the method can help detect an individual employee who transmits influence to colleagues but whose catalytic role is not recognized by company managers; identifying such an employee may be critical in reorganizing a company structure.
I plan to address two issues in future work. The first issue is to extend the hub-and-spoke model of influence transmission. The model represents the radial transmission from an initiating node toward multiple responder nodes. Other types of influence transmission are present in many real social networks; examples are serial chain-shaped and tree-like influence transmission models. The second issue is to develop a method to solve the variants of the node discovery problem. Discovering fake nodes or spoofing nodes is also an interesting problem for uncovering the malicious intentions of the organization. A fake node is a person who does not exist in the organization, but appears in the surveillance logs. A spoofing node is a person who belongs to the organization, but appears as a different node in the surveillance logs.
| 4,867 |
0710.3405
|
2008592376
|
Let (G_ε)_{ε>0} be a family of ‘ε-thin’ Riemannian manifolds modeled on a finite metric graph G, for example, the ε-neighborhood of an embedding of G in some Euclidean space with straight edges. We study the asymptotic behavior of the spectrum of the Laplace-Beltrami operator on G_ε, as ε→0, for various boundary conditions. We obtain complete asymptotic expansions for the kth eigenvalue and the eigenfunctions, uniformly for k ≤ Cε^{-1}, in terms of scattering data on a non-compact limit space. We then use this to determine the quantum graph which is to be regarded as the limit object, in a spectral sense, of the family (G_ε). Our method is a direct construction of approximate eigenfunctions from the scattering and graph data, and the use of a priori estimates to show that all eigenfunctions are obtained in this way.
|
As already mentioned, the Neumann problem was treated in @cite_19 , @cite_8 , @cite_0 , KucZen:ASNLTD @cite_3 , @cite_17 . For Dirichlet boundary conditions, Post @cite_14 derived the first two terms of in the case of 'small' vertex neighborhoods, see Theorem . In the recent preprint @cite_5 Molchanov and Vainberg study the Dirichlet problem and show that, in the context of Theorem , the @math converge to eigenvalues of the quantum graph described in Theorem ; this was conjectured in @cite_4 , where also some results on the scattering theory on non-compact graphs are obtained. However, their statements are unclear as to whether the multiplicities coincide; also, they do not consider the effect of @math eigenvalues on @math or uniform asymptotics for large @math . In @cite_13 a related model is considered. The method in the previously cited papers is to compare quadratic forms or to show resolvent convergence of some sort, and in all cases only the leading asymptotic behavior is obtained.
|
{
"abstract": [
"We consider a family of open sets M? which shrinks with respect to an appropriate parameter ? to a graph. Under the additional assumption that the vertex neighbourhoods are small we show that the appropriately shifted Dirichlet spectrum of M? converges to the spectrum of the (differential) Laplacian on the graph with Dirichlet boundary conditions at the vertices, i.e., a graph operator without coupling between different edges. The smallness is expressed by a lower bound on the first eigenvalue of a mixed eigenvalue problem on the vertex neighbourhood. The lower bound is given by the first transversal mode of the edge neighbourhood. We also allow curved edges and show that all bounded eigenvalues converge to the spectrum of a Laplacian acting on the edge with an additional potential coming from the curvature.",
"Small diameter asymptotics is obtained for scattering solutions in a network of thin fibers. The asymptotics is expressed in terms of solutions of related problems on the limiting quantum graph Γ . We calculate the Lagrangian gluing conditions at vertices ( v ) for the problems on the limiting graph. If the frequency of the incident wave is above the bottom of the absolutely continuous spectrum, the gluing conditions are formulated in terms of the scattering data for each individual junction of the network.",
"",
"Let M be a planar embedded graph whose arcs meet transversally at the vertices. Let ?(ɛ) be a strip-shaped domain around M, of width ɛ except in a neighborhood of the singular points. Assume that the boundary of ?(ɛ) is smooth. We define comparison operators between functions on ?(ɛ) and on M, and we derive energy estimates for the compared functions. We define a Laplace operator on M which is in a certain sense the limit of the Laplace operator on ?(ɛ) with Neumann boundary conditions. In particular, we show that the p-th eigenvalue of the Laplacian on ?(ɛ) converges to the p-th eigenvalue of the Laplacian on M as ɛ tends to 0. A similar result holds for the magnetic Schrodinger operator.",
"Abstract Let M be a finite graph in the plane and let Me be a domain that looks like the e-fattened graph M (exact conditions on the domain are given). It is shown that the spectrum of the Neumann Laplacian on Me converges when e → 0 to the spectrum of an ODE problem on M. The presence of an electromagnetic field is also allowed. Considerations of this kind arise naturally in mesoscopic physics and other areas of physics and chemistry. The results of the paper extend the ones previously obtained by J. Rubinstein and M. Schatzman.",
"",
"Our talk at Lisbon SAMP conference was based mainly on our recent results (published in Comm. Math. Phys.) on small diameter asymptotics for solutions of the Helmgoltz equation in networks of thin fibers. The present paper contains a detailed review of these results under some assumptions which make them much more transparent. It also contains several new theorems on the structure of the spectrum near the threshold. small diameter asymptotics of the resolvent, and solutions of the evolution equation.",
"We analyze the problem of approximating a smooth quantum waveguide with a quantum graph. We consider a planar curve with compactly supported curvature and a strip of constant width around the curve. We rescale the curvature and the width in such a way that the strip can be approximated by a singular limit curve, consisting of one vertex and two infinite, straight edges, i.e., a broken line. We discuss the convergence of the Laplacian, with Dirichlet boundary conditions on the strip, in a suitable sense and we obtain two possible limits: the Laplacian on the line with Dirichlet boundary conditions in the origin and a nontrivial family of point perturbations of the Laplacian on the line. The first case generically occurs and corresponds to the decoupling of the two components of the limit curve, while in the second case a coupling takes place. We present also two families of curves which give rise to coupling.",
"Abstract We consider a family of compact manifolds which shrinks with respect to an appropriate parameter to a graph. The main result is that the spectrum of the Laplace–Beltrami operator converges to the spectrum of the (differential) Laplacian on the graph with Kirchhoff boundary conditions at the vertices. On the other hand, if the shrinking at the vertex parts of the manifold is sufficiently slower comparing to that of the edge parts, the limiting spectrum corresponds to decoupled edges with Dirichlet boundary conditions at the endpoints. At the borderline between the two regimes we have a third possibility when the limiting spectrum can be described by a nontrivial coupling at the vertices."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_13",
"@cite_17"
],
"mid": [
"2088433356",
"2059178562",
"1995358755",
"2016418718",
"2061754404",
"1999693049",
"2022778351",
"1978934456",
"2093468829"
]
}
| 0 |
||
0710.4180
|
2103921041
|
This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loeve (KL) transform. The proposed search method guarantees the same search results as the search method without the proposed feature-dimension reduction method in principle. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12 that of previous methods and detected queries in approximately 0.3 s from a 200-h audio database.
|
A large number of dimensionality reduction methods have been proposed that focused on local correlation (e.g. @cite_29 @cite_2 @cite_18 @cite_20 ). Many of these methods do not assume any specific characteristics. Now, we are concentrating on the dimensionality reduction of time-series signals, and therefore we take advantage of their continuity and local correlation. The computational cost for obtaining such feature subsets is expected to be very small compared with that of existing methods that do not utilize the continuity and local correlation of time-series signals.
|
{
"abstract": [
"We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"The clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. Unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. In such high dimensional spaces not all dimensions may be relevant to a given cluster. One way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. Traditional feature selection algorithms attempt to achieve this. The weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. The number of dimensions in each such cluster-specific subspace may also vary. Hence, it may be impossible to find a single small subset of dimensions for all the clusters. We therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem , in which the subsets of dimensions selected are specific to the clusters themselves. We develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data.",
"Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectationmaximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition."
],
"cite_N": [
"@cite_29",
"@cite_18",
"@cite_20",
"@cite_2"
],
"mid": [
"2141666957",
"2053186076",
"2065811242",
"2146610201"
]
}
|
A quick search method for audio signals based on a piecewise linear representation of feature trajectories
|
This paper presents a method for searching quickly through unlabeled audio signal archives (termed stored signals) to detect and locate given audio clips (termed query signals) based on signal similarities.
Many studies related to audio retrieval have dealt with content-based approaches such as audio content classification [1], [2], speech recognition [3], and music transcription [3], [4]. Therefore, these studies mainly focused on associating audio signals with their meanings. In contrast, this study aims at achieving a similarity-based search or more specifically fingerprint identification, which constitutes a search of and retrieval from unlabeled audio archives based only on a signal similarity measure. That is, our objective is signal matching, not the association of signals with their semantics. Although the range of applications for a similarity-based search may seem narrow compared with content-based approaches, this is not actually the case. The applications include the detection and statistical analysis of broadcast music and commercial spots, and the content identification, detection and copyright management of pirated copies of music clips. Fig. 1 represents one of the most representative examples of such applications, which has already been put to practical use. This system automatically checks and identifies broadcast music clips or commercial spots to provide copyright information or other detailed information about the music or the spots.
In audio fingerprinting applications, the query and stored signals cannot be assumed to be exactly the same even in the corresponding sections of the same sound, owing to, for example, compression, transmission and irrelevant noises. Meanwhile, for the applications to be practically viable, the features should be compact and the feature analysis should be computationally efficient. For this purpose, several feature extraction methods have been developed to attain the above objectives. Cano et al. [5] modeled music segments as sequences of sound classes estimated via unsupervised clustering and hidden Markov models (HMMs). Burges et al. [6] employed several layers of Karhunen-Loève (KL) transforms, which reduced the local statistical redundancy of features with respect to time, and took account of robustness to shifting and pitching. Oostveen et al. [7] represented each frame of a video clip as a binary map and used the binary map sequence as a feature. This feature is robust to global changes in luminance and contrast variations. Haitsma et al. [8] and Kurozumi et al. [9] each employed a similar approach in the context of audio fingerprinting. Wang [10] developed a feature-point-based approach to improve the robustness. Our previous approach called the Time-series Active Search (TAS) method [11] introduced a histogram as a compact and noise-robust fingerprint, which models the empirical distribution of feature vectors in a segment. Histograms are sufficiently robust for monitoring broadcast music or detecting pirated copies. Another novelty of this approach is its effectiveness in accelerating the search. Adjacent histograms extracted from sliding audio segments are strongly correlated with each other. Therefore, unnecessary matching calculations are avoided by exploiting the algebraic properties of histograms.
Another important research issue regarding similarity-based approaches involves finding a way to speed up the search. Multi-dimensional indexing methods [12], [13] have frequently been used for accelerating searches. However, when feature vectors are high-dimensional, as they are typically with multimedia signals, the efficiency of the existing indexing methods deteriorates significantly [14], [15]. This is why search methods based on linear scans such as the TAS method are often employed for searches with high-dimensional features. However, methods based solely on linear scans may not be appropriate for managing large-scale signal archives, and therefore dimension reduction should be introduced to mitigate this effect.
To this end, this paper presents a quick and accurate audio search method that uses dimensionality reduction of histogram features. The method involves a piecewise linear representation of histogram sequences by utilizing the continuity and local correlation of the histogram sequences. A piecewise linear representation would be feasible for the TAS framework since the histogram sequences form trajectories in multi-dimensional spaces. By incorporating our method into the TAS framework, we significantly increase the search speed while guaranteeing the same search results as the TAS method. We introduce the following two techniques to obtain a piecewise representation: the dynamic segmentation of the feature trajectories and the segment-based KL transform.
The segment-based KL transform involves the dimensionality reduction of divided histogram sequences (called segments) by KL transform. We take advantage of the continuity and local correlation of feature sequences extracted from audio signals. Therefore, we expect to obtain a linear representation with few approximation errors and low computational cost. The segment-based KL transform consists of the following three components: The basic component of this technique reduces the dimensionality of histogram features. The second component that utilizes residuals between original histogram features and features after dimension reduction greatly reduces the required number of histogram comparisons. Feature sampling is introduced as the third component. This not only saves the storage space but also contributes to accelerating the search.
Dynamic segmentation refers to the division of histogram sequences into segments of various lengths to achieve the greatest possible reduction in the average dimensionality of the histogram features. One of the biggest problems in dynamic segmentation is that finding the optimal set of partitions that minimizes the average dimensionality requires a substantial calculation. The computational time must be no more than that needed for capturing audio signals from the viewpoint of practical applicability. To reduce the calculation cost, our technique addresses the quick suboptimal partitioning of the histogram trajectories, which consists of local optimization to avoid recursive calculations and the coarse-to-fine detection of segment boundaries. This paper is organized as follows: Section II introduces the notations and definitions necessary for the subsequent explanations. Section III explains the TAS method upon which our method is founded. Section IV outlines the proposed search method. Section V discusses a dimensionality reduction technique with the segment-based KL transform. Section VI details dynamic segmentation. Section VII presents experimental results related to the search speed and shows the advantages of the proposed method. Section VIII further discusses the advantages and shortcomings of the proposed method as well as providing additional experimental results. Section IX concludes the paper.
II. PRELIMINARIES
Let N be the set of all non-negative numbers, R be the set of all real numbers, and N^n be the n-ary Cartesian product of N. Vectors are denoted by boldface lower-case letters, e.g. x, and matrices are denoted by boldface upper-case letters, e.g. A. The superscript t stands for the transposition of a vector or a matrix, e.g. x^t or A^t. The Euclidean norm of an n-dimensional vector x ∈ R^n is denoted as ‖x‖:
\|x\| \overset{\mathrm{def}}{=} \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2},
where |x| is the magnitude of x. For any function f(·) and a random variable X, E[f(X)] stands for the expectation of f(X). Similarly, for a given value y ∈ Y, some function g(·, ·) and a random variable X, E[g(X, y)|y] stands for the conditional expectation of g(X, y) given y.
III. TIME-SERIES ACTIVE SEARCH
Fig. 2 outlines the Time-series Active Search (TAS) method, which is the basis of our proposed method. We provide a summary of the algorithm here. Details can be found in [11].
[Preparation stage] 1) Base features are extracted from the stored signal. Our preliminary experiments showed that the short-time frequency spectrum provides sufficient accuracy for our similarity-based search task. Base features are extracted at every sampled time step, for example, every 10 msec. Henceforth, we call the sampled points frames (the term was inspired by video frames). Base features are denoted as f S (t S ) (0 ≤ t S < L S ), where t S represents the position in the stored signal and L S is the length of the stored signal (i.e. the number of frames in the stored signal). 2) Every base feature is quantized by vector quantization (VQ). A codebook {f i } n i=1 is created beforehand, where n is the codebook size (i.e. the number of codewords in the codebook). We utilize the Linde-Buzo-Gray (LBG) algorithm [16] for codebook creation. A quantized base feature q S (t S ) is expressed as a VQ codeword assigned to the corresponding base feature f S (t S ), which is determined as
q_S(t_S) = \arg\min_{1 \le i \le n} \| f_S(t_S) - f_i \|^2 .
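A minimal NumPy sketch of these two preparation steps; the random codebook stands in for one trained with the LBG algorithm, and all sizes are illustrative.

    import numpy as np

    def quantize(base_features, codebook):
        # assign each base feature vector to its nearest codeword (VQ), as in the equation above
        dists = ((base_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return dists.argmin(axis=1)

    def histogram(codewords, n_codewords):
        # count codeword occurrences over a window of quantized base features
        return np.bincount(codewords, minlength=n_codewords).astype(float)

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(128, 7))      # 128 codewords for 7-channel base features
    frames = rng.normal(size=(1000, 7))       # base features within one window
    q = quantize(frames, codebook)
    print(histogram(q, 128).sum())            # equals the number of frames in the window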
[Search stage] 1) Base features f Q (t Q ) (0 ≤ t Q < L Q ) of the query signal are extracted in the same way as the stored signal and quantized with the codebook {f i } n i=1 created in the preparation stage, where t Q represents the position in the query signal and L Q is its length. We do not have to take into account the calculation time for feature quantization since it takes less than 1% of the length of the signal. A quantized base feature for the query signal is denoted as q Q (t Q ).
2) Histograms are created; one for the stored signal denoted as x S (t S ) and the other for the query signal denoted as x Q . First, windows are applied to the sequences of quantized base features extracted from the query and stored signals. The window length W (i.e. the number of frames in the window) is set at W = L Q , namely the length of the query signal. A histogram is created by counting the instances of each VQ codeword over the window. Therefore, each index of a histogram bin corresponds to a VQ codeword. We note that a histogram does not take the codeword order into account. 3) Histogram matching is executed based on the distance between histograms, computed as
d(t_S) \overset{\mathrm{def}}{=} \| x_S(t_S) - x_Q \| .
When the distance d(t S ) falls below a given value (search threshold) θ, the query signal is considered to be detected at the position t S of the stored signal. 4) A window on the stored signal is shifted forward in time and the procedure returns to Step 2). As the window for the stored signal shifts forward in time, VQ codewords included in the window cannot change so rapidly, which means that histograms cannot also change so rapidly. This implies that for a given positive integer w the lower bound on the distance d(t S + w) is obtained from the triangular inequality as follows:
d(t_S + w) \ge \max\{0,\; d(t_S) - \sqrt{2}\, w\},
where √2 w is the maximum possible distance between x_S(t_S) and x_S(t_S + w). Therefore, the skip width w(t_S) of the window at the t_S-th frame is obtained as
w(t_S) = \begin{cases} \mathrm{floor}\!\left( \dfrac{d(t_S) - \theta}{\sqrt{2}} \right) + 1 & (\text{if } d(t_S) > \theta) \\ 1 & (\text{otherwise}) \end{cases}   (1)
where floor(a) indicates the largest integer not greater than a. We note that no sections will ever be missed that have distance values smaller than the search threshold θ, even if we skip the width w(t_S) given by Eq. (1).
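The search stage can be sketched as follows in Python; the histograms are assumed to be available as NumPy arrays, the skip width follows eq. (1), and names and sizes are illustrative.

    import math
    import numpy as np

    def tas_search(stored_hists, query_hist, theta):
        # linear scan over stored histograms, skipping positions that the triangular
        # inequality proves cannot fall below the search threshold theta (eq. 1)
        detections = []
        t = 0
        while t < len(stored_hists):
            dist = np.linalg.norm(stored_hists[t] - query_hist)
            if dist <= theta:
                detections.append((t, dist))
                skip = 1
            else:
                skip = math.floor((dist - theta) / math.sqrt(2)) + 1
            t += skip
        return detections

    # toy usage with random histograms; position 123 holds an exact copy of the query
    rng = np.random.default_rng(0)
    stored = rng.integers(0, 5, size=(1000, 8)).astype(float)
    query = stored[123].copy()
    print(tas_search(stored, query, theta=0.5))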
IV. FRAMEWORK OF PROPOSED SEARCH METHOD
The proposed method improves the TAS method so that the search is accelerated without false dismissals (incorrectly missing segments that should be detected) or false detections (identifying incorrect matches). To accomplish this, we introduce feature-dimension reduction as explained in Sections V and VI, which reduces the calculation costs required for matching. Fig. 3 shows an overview of the proposed search method, and Fig. 4 outlines the procedure for feature-dimension reduction. The procedure consists of a preparation stage and a search stage.
[Preparation stage] 1) Base features f_S(t_S) are extracted from the stored signal and quantized, to create quantized base features q_S(t_S). The procedure is the same as that of the TAS method. 2) Histograms x_S(t_S) are created in advance from the quantized base features of the stored signal by shifting a window of a predefined length W. We note that with the TAS method the window length W varies from one search to another, while with the present method the window length W is fixed. This is because histograms x_S(t_S) for the stored signal are created prior to the search. We should also note that the TAS method does not create histograms prior to the search because sequences of VQ codewords need much less storage space than histogram sequences. 3) A piecewise linear representation of the extracted histogram sequence is obtained (Fig. 4 block (A)). This representation is characterized by a set T = {t_j}_{j=0}^{M} of segment boundaries expressed by their frame numbers and a set {p_j(·)}_{j=1}^{M} of M functions, where M is the number of segments, t_0 = 0 and t_M = L_S. The j-th segment is expressed as a half-open interval [t_{j-1}, t_j) since it starts from x_S(t_{j-1}) and ends at x_S(t_j − 1). Section VI shows how to obtain such segment boundaries. Each function p_j(·) : N^n → R^{m_j} that corresponds to the j-th segment reduces the dimensionality n of the histogram to the dimensionality m_j. Section V-B shows how to determine these functions. 4) The histograms x_S(t_S) are compressed by using the functions {p_j(·)}_{j=1}^{M} obtained in the previous step, and then compressed features y_S(t_S) are created (Fig. 4).
[Search stage] 1) Base features f_Q(t_Q) are extracted and a histogram x_Q is created from the query signal in the same way as in the TAS method.
2) The histogram x Q is compressed based on the functions {p j (·)} M j=1 obtained in the preparation stage, to create M compressed features y Q [j] (j = 1, · · · , M ). Each compressed feature y Q [j] corresponds to the j-th function p j (·). The procedure used to create compressed features is the same as that for the stored signal.
3) Compressed features created from the stored and query signals are matched, that is, the distance d(t_S) = ‖y_S(t_S) − y_Q[j_{t_S}]‖ between the two compressed features y_S(t_S) and y_Q[j_{t_S}] is calculated, where j_{t_S} represents the index of the segment that contains x_S(t_S), namely t_{j_{t_S}-1} ≤ t_S < t_{j_{t_S}}. 4) If the distance falls below the search threshold θ, the original histograms x_S(t_S) corresponding to the surviving compressed features y_S(t_S) are verified. Namely, the distance d(t_S) = ‖x_S(t_S) − x_Q‖ is calculated and compared with the search threshold θ (a sketch of this two-stage matching is given after this procedure). 5) A window on the stored signal is shifted forward in time and the procedure goes back to Step 3).
The skip width of the window is calculated from the distance d(t S ) between compressed features.
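A minimal sketch of the two-stage matching in steps 3) and 4); the per-segment compressed features, the segment index map, and the names are illustrative, and the window skipping computed from the compressed distance is omitted for brevity.

    import numpy as np

    def proposed_search(stored_compressed, stored_hists, segment_of, query_compressed, query_hist, theta):
        # stage 1: cheap test on compressed features (a lower bound of the histogram distance)
        # stage 2: exact verification on the original histograms for the survivors
        hits = []
        for t, y_s in enumerate(stored_compressed):
            j = segment_of[t]
            if np.linalg.norm(y_s - query_compressed[j]) <= theta:
                if np.linalg.norm(stored_hists[t] - query_hist) <= theta:
                    hits.append(t)
        return hits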
B. Segment-based KL transform
As the first step towards obtaining a piecewise representation, the histogram sequence is divided into M segments. Dynamic segmentation is introduced here, which enhances feature-dimension reduction performance. This will be explained in detail in Section VI. Second, a KL transform is performed for every segment and a minimum number of eigenvectors are selected such that the sum of their contribution rates exceeds a predefined value σ, where the contribution rate of an eigenvector stands for its eigenvalue divided by the sum of all eigenvalues, and the predefined value σ is called the contribution threshold. The number of selected eigenvectors in the j-th segment is written as m j . Then, a function p j (·) : N n → R m j (j = 1, 2, · · · , M ) for dimensionality reduction is determined as a map to a subspace whose bases are the selected eigenvectors:
p_j(x) = P_j^t (x - \bar{x}_j),   (2)
where x is a histogram, x̄_j is the centroid of the histograms contained in the j-th segment, and P_j is an (n × m_j) matrix whose columns are the selected eigenvectors. Finally, each histogram is compressed by using the function p_j(·) of the segment to which the histogram belongs. Henceforth, we refer to p_j(x) as a projected feature of a histogram x.
In the following, we omit the index j corresponding to a segment unless it is specifically needed, e.g. p(x) and x̄.
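A sketch of the segment-based KL transform of eq. (2): eigenvectors of the segment covariance are retained until their cumulative contribution rate reaches the contribution threshold σ. NumPy is assumed and the interface is illustrative.

    import numpy as np

    def segment_kl(histograms, sigma=0.9):
        # returns (P, centroid, m) so that p(x) = P.T @ (x - centroid) reduces n to m dimensions
        X = np.asarray(histograms, dtype=float)       # shape (T, n): histograms of one segment
        centroid = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        ratios = np.cumsum(eigvals) / eigvals.sum()
        m = int(np.searchsorted(ratios, sigma) + 1)   # smallest m whose cumulative rate >= sigma
        return eigvecs[:, :m], centroid, m

    def project(P, centroid, x):
        # p(x) of eq. (2)
        return P.T @ (np.asarray(x, dtype=float) - centroid)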
C. Distance bounding
From the nature of the KL transform, the distance between two projected features gives the lower bound of the distance between corresponding original histograms. However, this bound does not approximate the original distance well, and this results in many false detections.
To improve the distance bound, we introduce a new technique. Let us define the projection distance δ(p, x) as the distance between a histogram x and its reconstruction from the corresponding projected feature z = p(x):
\delta(p, x) \overset{\mathrm{def}}{=} \|x - q(z)\|,   (3)
where q(·) : R^m → R^n is the generalized inverse map of p(·), defined as
q(z) \overset{\mathrm{def}}{=} P z + \bar{x}.
Here we create a compressed feature y, which is the projected feature z = (z 1 , z 2 , · · · , z m ) t along with the projection distance δ(p, x):
y = y(p, x) = (z 1 , z 2 , · · · , z m , δ(p, x)) t ,
where y(p, x) means that y is determined by p and x. The Euclidean distance between compressed features is utilized as a new criterion for matching instead of the Euclidean distance between projected features. The distance is expressed as
\|y_S - y_Q\|^2 = \|z_S - z_Q\|^2 + \{\delta(p, x_S) - \delta(p, x_Q)\}^2,   (4)
where z_S = p(x_S) (resp. z_Q = p(x_Q)) is the projected feature derived from the original histogram x_S (resp. x_Q) and y_S = y_S(p, x_S) (resp. y_Q = y_Q(p, x_Q)) is the corresponding compressed feature. Eq. (4) implies that the distance between compressed features is never smaller than the distance between the corresponding projected features. In addition, from the above discussions, we have the following two properties, which indicate that the distance ‖y_S − y_Q‖ between two compressed features is a better approximation of the distance ‖x_S − x_Q‖ between the original histograms than the distance ‖z_S − z_Q‖ between projected features (Theorem 1), and that the expected approximation error is much smaller (Theorem 2). Theorem 1:
\|z_S - z_Q\| \le \|y_S - y_Q\| = \min_{(\tilde{x}_S, \tilde{x}_Q) \in A(y_S, y_Q)} \|\tilde{x}_S - \tilde{x}_Q\| \le \|x_S - x_Q\|,   (5)
where A(y_S, y_Q) is the set of all possible pairs (x̃_S, x̃_Q) of original histograms for the given compressed features (y_S, y_Q).
Fig. 6: Intuitive illustration of relationships between projection distance, distance between projected features and distance between compressed features.
Theorem 2: Suppose that random variables (X_S^n, X_Q^n) corresponding to the original histograms (x_S, x_Q) have a uniform distribution on the set A(y_S, y_Q) defined in Theorem 1, and E[δ(p, X_S^n)] ≪ E[δ(p, X_Q^n)]. The expected approximation errors can be evaluated as
E\left[\, \|X_S^n - X_Q^n\|^2 - \|y_S - y_Q\|^2 \,\middle|\, y_S, y_Q \right] \ll E\left[\, \|X_S^n - X_Q^n\|^2 - \|z_S - z_Q\|^2 \,\middle|\, y_S, y_Q \right].   (6)
The proofs are shown in the appendix. Fig. 6 shows an intuitive illustration of the relationships between projection distances, distances between projected features and distances between compressed features, where the histograms are in a 3-dimensional space and the subspace dimensionality is 1. In this case, for given compressed features (y S , y Q ) and a fixed query histogram x Q , a stored histogram x S must be on a circle whose center is q(z Q ). This circle corresponds to the set A(y S , y Q ).
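The compressed feature of this subsection and the ordering in eq. (5) can be checked numerically with the following self-contained sketch; the random subspace and vectors are illustrative.

    import numpy as np

    def compress(P, centroid, x):
        # compressed feature y = (z_1, ..., z_m, delta): projected feature plus
        # the projection distance ||x - q(z)|| of eq. (3)
        x = np.asarray(x, dtype=float)
        z = P.T @ (x - centroid)
        delta = np.linalg.norm(x - (P @ z + centroid))   # q(z) = P z + centroid
        return np.concatenate([z, [delta]])

    rng = np.random.default_rng(1)
    P = np.linalg.qr(rng.normal(size=(4, 1)))[0]         # orthonormal 1-dimensional basis in R^4
    centroid = np.zeros(4)
    x_s, x_q = rng.normal(size=4), rng.normal(size=4)
    y_s, y_q = compress(P, centroid, x_s), compress(P, centroid, x_q)
    d_z = np.linalg.norm(y_s[:-1] - y_q[:-1])            # projected-feature distance
    d_y = np.linalg.norm(y_s - y_q)                      # compressed-feature distance
    d_x = np.linalg.norm(x_s - x_q)                      # original histogram distance
    print(d_z <= d_y <= d_x + 1e-12)                     # True: the ordering of eq. (5)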
D. Feature sampling
In the TAS method, quantized base features are stored, because they need much less storage space than the histogram sequence and creating histograms on the spot takes little calculation. With the present method, however, compressed features must be computed and stored in advance so that the search results can be returned as quickly as possible, and therefore much more storage space is needed than with the TAS method. The increase in storage space may cause a reduction in search speed due to the increase in disk access.
Based on the above discussion, we incorporate feature sampling in the temporal domain. The following idea is inspired by the technique called Piecewise Aggregate Approximation (PAA) [22]. With the proposed feature sampling method, first a compressed feature sequence
\{y_S(t_S)\}_{t_S = 0}^{L_S - W - 1}
is divided into subsequences {y S (ia), y S (ia + 1), · · · , y S (ia + a − 1)} i=0,1,··· of length a. Then, the first compressed feature y S (ia) of every subsequence is selected as a representative feature. A lower bound of the distances between the query and stored compressed features contained in the subsequence can be expressed in terms of the representative feature y S (ia). This bound is obtained from the triangular inequality as follows:
\|y_S(ia + k) - y_Q\| \ge \|y_S(ia) - y_Q\| - d(i), \qquad d(i) \overset{\mathrm{def}}{=} \max_{0 \le k' \le a-1} \|y_S(ia + k') - y_S(ia)\|
(∀i = 0, 1, · · ·, ∀k = 0, · · ·, a − 1). This implies that preserving the representative feature y_S(ia) and the maximum distance d(i) is sufficient to guarantee that there are no false dismissals. This feature sampling is feasible for histogram sequences because successive histograms cannot change rapidly. Furthermore, the technique mentioned in this section will also contribute to accelerating the search, especially when successive histograms change little.
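A sketch of the feature sampling step; it assumes the compressed features within one subsequence share a common dimensionality, and the helper names are illustrative.

    import numpy as np

    def sample_features(compressed, a):
        # keep the first compressed feature of every length-a subsequence together with
        # d(i) = max_k ||y_S(i*a + k) - y_S(i*a)||, which is all the lower bound needs
        reps, d_max = [], []
        for start in range(0, len(compressed), a):
            block = compressed[start:start + a]
            reps.append(block[0])
            d_max.append(max(np.linalg.norm(y - block[0]) for y in block))
        return np.array(reps), np.array(d_max)

    def lower_bound(rep, d_i, y_query):
        # lower bound on the distance from y_query to every feature of the subsequence
        return np.linalg.norm(rep - y_query) - d_i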
VI. DYNAMIC SEGMENTATION
A. Related work
The approach used for dividing histogram sequences into segments is critical for realizing efficient feature-dimension reduction since the KL transform is most effective when the constituent elements in the histogram segments are similar. To achieve this, we introduce a dynamic segmentation strategy.
Dynamic segmentation is a generic term that refers to techniques for dividing sequences into segments of various lengths. Dynamic segmentation methods for time-series signals have already been applied to various kinds of applications such as speech coding (e.g. [24]), the temporal compression of waveform signals [25], the automatic segmentation of speech signals into phonic units [26], sinusoidal modeling of audio signals [27], [28], [29] and motion segmentation in video signals [30]. We employ dynamic segmentation to minimize the average dimensionality of high-dimensional feature trajectories.
Dynamic segmentation can improve dimension reduction performance. However, finding the optimal boundaries still requires a substantial calculation. With this in mind, several studies have adopted suboptimal approaches, such as longest line fitting [23], wavelet decomposition [23], [21] and the bottom-up merging of segments [31]. The first two approaches still incur a substantial calculation cost for long time-series signals. The last approach is promising as regards obtaining a rough global approximation at a practical calculation cost. This method is compatible with ours; however, we mainly focus on a more precise local optimization.
B. Framework
Fig. 7 shows an outline of our dynamic segmentation method. The objective of the dynamic segmentation method is to divide the stored histogram sequence so that its piecewise linear representation is well characterized by a set of lower dimensional subspaces. To this end, we formulate the dynamic segmentation as a way to find a set T* = {t*_j}_{j=0}^{M} of segment boundaries that minimize the average dimensionality of these segment-approximating subspaces on condition that the boundary t*_j between the j-th and the (j+1)-th segments is in a shiftable range S_j, which is defined as a section with a width ∆ in the vicinity of the initial position t^0_j of the boundary between the j-th and the (j+1)-th segments. Namely, the set T* of the optimal segment boundaries is given by the following formula:
T^* = \{t^*_j\}_{j=0}^{M} \overset{\mathrm{def}}{=} \arg\min_{\{t_j\}_{j=0}^{M}:\, t_j \in S_j\, \forall j} \frac{1}{L_S} \sum_{j=1}^{M} (t_j - t_{j-1}) \cdot c(t_{j-1}, t_j, \sigma)   (7)
S_j \overset{\mathrm{def}}{=} \{t_j : t^0_j - \Delta \le t_j \le t^0_j + \Delta\}   (8)
where c(t i , t j , σ) represents the subspace dimensionality on the segment between the t i -th and the t j -th frames for a given contribution threshold σ, t * 0 = 0 and t * M = L S . The initial positions of the segment boundaries are set beforehand by equi-partitioning.
The above optimization problem defined by Eq. (7) would normally be solved with dynamic programming (DP) (e.g. [32]). However, DP is not practical in this case. Deriving c(t_{j-1}, t_j, σ) included in Eq. (7) incurs a substantial calculation cost since it is equivalent to executing a KL transform calculation for the segment [t_{j-1}, t_j). This implies that the DP-based approach requires a significant amount of calculation, although less than a naive approach. The above discussion implies that we should reduce the number of KL transform calculations to reduce the total calculation cost required for the optimization. When we adopt the total number of KL transform calculations as a measure for assessing the calculation cost, the cost is evaluated as O(M∆^2), where M is the number of segments and ∆ is the width of the shiftable range.
To reduce the calculation cost, we instead adopt a suboptimal approach. Two techniques are incorporated: local optimization and the coarse-to-fine detection of segment boundaries. We explain these two techniques in the following sections.
C. Local optimization
The local optimization technique modifies the formulation (Eq. (7)) of dynamic segmentation so that it minimizes the average dimensionality of the subspaces of adjoining segments. The basic idea is similar to the "forward segmentation" technique introduced by Goodwin [27], [28] for deriving accurate sinusoidal models of audio signals. The position t * j of the boundary is determined by using the following forward recursion as a substitute for Eq. (7):
t^*_j = \arg\min_{t_j \in S_j} \frac{(t_j - t^*_{j-1})\, c^*_j + (t^0_{j+1} - t_j)\, c^0_{j+1}}{t^0_{j+1} - t^*_{j-1}},   (9)
where c^*_j = c(t^*_{j-1}, t_j, σ) and c^0_{j+1} = c(t_j, t^0_{j+1}, σ), and S_j is defined in Eq. (8). As can be seen in Eq. (9), we can determine each segment boundary independently, unlike the formulation of Eq. (7). Therefore, the local optimization technique can reduce the amount of calculation needed for extracting an appropriate representation, which is evaluated as O(M∆), where M is the number of segments and ∆ is the width of the shiftable range.
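A sketch of the forward recursion (9); subspace_dim stands for c(·, ·, σ) and is assumed to be supplied externally, for instance by a per-segment KL transform.

    def choose_boundary(t_prev, t0_j, t0_next, delta, subspace_dim):
        # pick t_j in the shiftable range minimizing the length-weighted dimensionality
        # of the two adjoining segments (eq. 9); subspace_dim(a, b) returns c(a, b, sigma)
        best_t, best_cost = None, float('inf')
        for t_j in range(t0_j - delta, t0_j + delta + 1):
            cost = ((t_j - t_prev) * subspace_dim(t_prev, t_j)
                    + (t0_next - t_j) * subspace_dim(t_j, t0_next)) / (t0_next - t_prev)
            if cost < best_cost:
                best_t, best_cost = t_j, cost
        return best_t, best_cost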
D. Coarse-to-fine detection
The coarse-to-fine detection technique selects suboptimal boundaries in the sense of Eq. (9) with less computational cost. We note that small boundary shifts do not contribute greatly to changes in segment dimensionality because successive histograms cannot change rapidly. With this in mind, the boundary position is determined by the following coarse-to-fine procedure: 1) The dimensions of the j-th and (j + 1)-th segments are calculated when the segment boundary t_j is at the initial position t^0_j and at the edges (t^0_j − ∆ and t^0_j + ∆) of its shiftable range.
2) The dimensions of the j-th and (j + 1)-th segments are calculated when the segment boundary t_j is at the positions t^0_j - \Delta + \frac{2\Delta}{u_j + 1} i (i = 1, 2, · · ·, u_j), where u_j determines the number of calculations in this step.
3) The dimensions of the j-th and (j + 1)-th segments are calculated in detail when the segment boundary t j is in the positions where dimension changes are detected in the previous step. We determine the number u j of dimension calculations in step 2 so that the number of calculations in all the above steps, f j (u j ), is minimized. Then, f j (u j ) is given as follows:
f_j(u_j) = 2(3 + u_j) + \frac{2 K_j \Delta}{\frac{1}{2} u_j + 1},
where K j is the estimated number of positions where the dimensionalities change, which is experimentally determined as
K_j = \begin{cases} c_{LR} - c_{LL} & (\text{if } c_{LR} \le c_{RR},\ c_{LL} < c_{RL}) \\ (c_{LC} - c_{LL}) + \min(c_{RC}, c_{LR}) - \min(c_{LC}, c_{RR}) & (\text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} \le c_{RC}) \\ (c_{RC} - c_{RR}) + \min(c_{LC}, c_{RL}) - \min(c_{RC}, c_{LL}) & (\text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} > c_{RC}) \\ c_{RL} - c_{RR} & (\text{otherwise}) \end{cases}
and c_{LL} = c(t^*_{j-1}, t^0_j - \Delta, \sigma), c_{RL} = c(t^0_j - \Delta, t^0_{j+1}, \sigma), c_{LC} = c(t^*_{j-1}, t^0_j, \sigma), c_{RC} = c(t^0_j, t^0_{j+1}, \sigma), c_{LR} = c(t^*_{j-1}, t^0_j + \Delta, \sigma), c_{RR} = c(t^0_j + \Delta, t^0_{j+1}, \sigma).
The first term of f_j(u_j) refers to the number of calculations in steps 1 and 2, and the second term corresponds to that in step 3. f_j(u_j) takes the minimum value 4\sqrt{2 K_j \Delta} + 2 when u_j = \sqrt{2 K_j \Delta} - 2. The calculation cost when incorporating the local optimization and coarse-to-fine detection techniques is evaluated as follows:
E\left[ M \left( 4\sqrt{2 K_j \Delta} + 2 \right) \right] \le M \left( 4\sqrt{2 K \Delta} + 2 \right) = O\!\left( M \sqrt{K \Delta} \right), \quad \text{where } K = E[K_j]
, M is the number of segments and ∆ is the width of the shiftable range. The first inequality is derived from Jensen's inequality (e.g. [33, Theorem 2.6.2]). The coarse-to-fine detection technique can additionally reduce the calculation cost because K is usually much smaller than ∆.
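As a consistency check, and assuming the reconstruction of f_j(u_j) given above, the quoted minimum follows from elementary calculus:

    \frac{d f_j}{d u_j} = 2 - \frac{K_j \Delta}{\left(\frac{u_j}{2} + 1\right)^2} = 0
    \;\Longrightarrow\; \left(\frac{u_j}{2} + 1\right)^2 = \frac{K_j \Delta}{2}
    \;\Longrightarrow\; u_j = \sqrt{2 K_j \Delta} - 2,
    \qquad
    f_j\!\left(\sqrt{2 K_j \Delta} - 2\right)
      = 2\left(\sqrt{2 K_j \Delta} + 1\right) + \frac{2 K_j \Delta}{\sqrt{2 K_j \Delta}/2}
      = 4\sqrt{2 K_j \Delta} + 2 .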
VII. EXPERIMENTS
A. Conditions
We tested the proposed method in terms of calculation cost in relation to search speed. We again note that the proposed search method guarantees the same search results as the TAS method in principle, and therefore we only need to evaluate the search speed. The search accuracy for the TAS method was reported in a previous paper [11]. In summary, for audio identification tasks, there were no false detections or false dismissals down to an S/N ratio of 20 dB if the query duration was longer than 10 seconds.
In the experiments, we used a recording of a real TV broadcast. An audio signal broadcast from a particular TV station was recorded and encoded in MPEG-1 Layer 3 (MP3) format. We recorded a 200-hour audio signal as a stored signal, and recorded 200 15-second spots from another TV broadcast as queries. Thus, the task was to detect and locate specific commercial spots from 200 consecutive hours of TV recording. Each spot occurred 2-30 times in the stored signal. Each signal was first digitized at a 32 kHz sampling frequency and 16 bit quantization accuracy. The bit rate for the MP3 encoding was 56 kbps. We extracted base features from each audio signal using a 7-channel second-order IIR band-pass filter with Q = 10. The center frequencies of the filter were equally spaced on a log frequency scale. The base features were calculated every 10 milliseconds from a 60 millisecond window. The base feature vectors were quantized by using the VQ codebook with 128 codewords, and histograms were created based on the scheme of the TAS method. Therefore, the histogram dimension was 128. We implemented the feature sampling described in Section V-D and the sampling duration was a = 50. The tests were carried out on a PC (Pentium 4 2.0 GHz).
B. Search speed
We first measured the CPU time and the number of matches in the search. The search time we measured in this test comprised only the CPU time in the search stage shown in Section IV. This means that the search time did not include the CPU time for any procedures in the preparation stage such as base feature extraction, histogram creation, or histogram dimension reduction for the stored signal. The search threshold was adjusted to θ = 85 so that there were no false detections or false dismissals. We compared the following methods:
(i) The TAS method (baseline). (ii) The proposed search method without the projection distance being embedded in the compressed features. (iii) The proposed search method.
We first examined the relationships between the average segment duration (equivalent to the number of segments), the search time, and the number of matches. The following parameters were set for featuredimension reduction: The contribution threshold was σ = 0.9. The width of the shiftable range for dynamic segmentation was 500. Fig. 10 shows the relationship between the average segment duration and the search time, where the ratio of the search speed of the proposed method to that of the TAS method (conventional method in the figure) is called the speed-up factor. Also, Fig. 11 shows the relationship between the average segment duration and the number of matches. Although the proposed method only slightly increased the number of matches, it greatly reduced the search time. This is because it greatly reduced the calculation cost per match owing to feature-dimension reduction. For example, the proposed method reduced the search time to almost 1/12 when the segment duration was 1.2 minutes (i.e. the number of segments was 10000). As mentioned in Section V-D, the feature sampling technique also contributed to the acceleration of the search, and the effect is similar to histogram skipping. Considering the dimension reduction performance results described later, we found that those effects were greater than that caused by dimension reduction for large segment durations (i.e. a small number of segments). This is examined in detail in the next section. We also found that the proposed method reduced the search time and the number of matches when the distance bounding technique was incorporated, especially when there were a large number of segments.
VIII. DISCUSSION
The previous section described the experimental results solely in terms of search speed and the advantages of the proposed method compared with the previous method. This section provides further discussion of the advantages and shortcomings of the proposed method as well as additional experimental results.
We first deal with the dimension reduction performance derived from the segment-based KL transform. We employed equi-partitioning to obtain segments, which means that we did not incorporate the dynamic segmentation technique. Fig. 12 shows the experimental result. The proposed method monotonically reduced the dimensions as the number of segments increased if the segment duration was shorter than 10 hours (the number of segments M ≥ 20). We can see that the proposed method reduced the dimensions, for example, to 1/25 of the original histograms when the contribution threshold was 0.90 and the segment duration was 1.2 minutes (the number of segments was 10000). The average dimensions did not decrease as the number of segments increased if the number of segments was relatively small. This is because we decided the number of subspace bases based on the contribution rates. Next, we deal with the dimension reduction performance derived from the dynamic segmentation technique. The initial positions of the segment boundaries were set by equi-partitioning. The duration of segments obtained by equi-partitioning was 12 minutes (i.e. there were 1000 segments). Fig. 13 shows the result. The proposed method further reduced the feature dimensionality to 87.5% of its initial value, which is almost the same level of performance as when only the local search was utilized. We were unable to calculate the average dimensionality when using DP because of the substantial amount of calculation, as described later. When the shiftable range was relatively narrow, the dynamic segmentation performance was almost the same as that of DP.
Here, we review the search speed performance shown in Fig. 10. It should be noted that three techniques in our proposed method contributed to speeding up the search, namely feature-dimension reduction, distance bounding and feature sampling. When the number of segments was relatively small, the speed-up factor was much larger than the ratio of the dimension of the compressed features to that of the original histograms, which can be seen in Figs. 10, 12 and 13. This implies that the feature sampling technique dominated the search performance in this case. On the other hand, when the number of segments was relatively large, the proposed search method did not greatly improve the search speed compared with the dimension reduction performance. This implies that the feature sampling technique degraded the search performance. In this case, the distance bounding technique mainly contributed to the improvement of the search performance as seen in Fig. 10.
Lastly, we discuss the amount of calculation necessary for dynamic segmentation. We again note that although dynamic segmentation can be executed prior to providing a query signal, the computational time must be at worst smaller than the duration of the stored signal from the viewpoint of practical applicability. We adopted the total number of dimension calculations needed to obtain the dimensions of the segments as a measure for comparing the calculation cost in the same way as in Section VI. Fig. 14 shows the estimated calculation cost for each dynamic segmentation method. We compared our method incorporating local optimization and coarse-to-fine detection with the DP-based method and a case where only the local optimization technique was incorporated. The horizontal line along with "Real-time processing" indicates that the computational time is almost the same as the duration of the signal. The proposed method required much less computation than with DP or local optimization. For example, when the width of the shiftable range was 500, the calculation cost of the proposed method was 1/5000 that of DP and 1/10 that with local optimization. We note that in this experiment, the calculation cost of the proposed method is less than the duration of the stored signal, while those of the other two methods are much longer.
IX. CONCLUDING REMARKS This paper proposed a method for undertaking quick similarity-based searches of an audio signal to detect and locate similar segments to a given audio clip. The proposed method was built on the TAS method, where audio segments are modeled by using histograms. With the proposed method, the histograms are compressed based on a piecewise linear representation of histogram sequences. We introduce dynamic segmentation, which divides histogram sequences into segments of variable lengths. We also addressed the quick suboptimal partitioning of the histogram sequences along with local optimization and coarse-to-fine detection techniques. Experiments revealed significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12, and detected the query in about 0.3 seconds from a 200-hour audio database. Although this paper focused on audio signal retrieval, the proposed method can be easily applied to video signal retrieval [34], [35]. Although the method proposed in this paper is founded on the TAS method, we expect that some of the techniques we have described could be used in conjunction with other similarity-based search methods (e.g. [36], [37], [38], [39]) or a speech/music discriminator [40]. Future work includes the implementation of indexing methods suitable for piecewise linear representation, and the dynamic determination of the initial segmentation, both of which have the potential to improve the search performance further. APPENDIX A PROOF OF THEOREM 1 First, let us define
$z_Q \stackrel{\mathrm{def}}{=} p(x_Q)$, $z_S \stackrel{\mathrm{def}}{=} p(x_S)$, $\hat{x}_Q \stackrel{\mathrm{def}}{=} q(z_Q) = q(p(x_Q))$, $\hat{x}_S \stackrel{\mathrm{def}}{=} q(z_S) = q(p(x_S))$, $\delta_Q \stackrel{\mathrm{def}}{=} \delta(p, x_Q)$, $\delta_S \stackrel{\mathrm{def}}{=} \delta(p, x_S)$.
We note that for any histogram $x \in N^n$, $\hat{x} = q(p(x))$ is the projection of $x$ into the subspace defined by the map $p(\cdot)$, and therefore $x - \hat{x}$ is a normal vector of the subspace of $p(\cdot)$. Also, we note that $\|x - \hat{x}\| = \delta(p, x)$ and that $\hat{x}$ is on the subspace of $p(\cdot)$. For two vectors $x_1$ and $x_2$, their inner product is denoted as $x_1 \cdot x_2$. Then, we obtain
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|(x_Q - \hat{x}_Q) - (x_S - \hat{x}_S) + (\hat{x}_Q - \hat{x}_S)\|^2 \\ &= \|x_Q - \hat{x}_Q\|^2 + \|x_S - \hat{x}_S\|^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \\ &\qquad + 2(x_Q - \hat{x}_Q)\cdot(\hat{x}_Q - \hat{x}_S) - 2(x_S - \hat{x}_S)\cdot(\hat{x}_Q - \hat{x}_S) \\ &= \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \qquad(10) \\ &\ge \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2\,\delta(p, x_Q)\,\delta(p, x_S) \qquad(11) \\ &= \{\delta(p, x_Q) - \delta(p, x_S)\}^2 + \|z_Q - z_S\|^2 \\ &= \|y_Q - y_S\|^2, \end{aligned}$$
where Eq. (10) comes from the fact that any vector on a subspace and the normal vector of the subspace are mutually orthogonal, and Eq. (11) from the definition of inner product. This concludes the proof of Theorem 1.
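As an illustrative aside (not part of the original proof), the inequality chain of Theorem 1 can be checked numerically. The following Python sketch uses our own variable names, NumPy, and a randomly chosen orthonormal basis standing in for a segment's KL basis; it verifies that the compressed-feature distance lies between the projected-feature distance and the original distance:

```python
import numpy as np

# Numerical check of Theorem 1: ||z_S - z_Q|| <= ||y_S - y_Q|| <= ||x_S - x_Q||.
rng = np.random.default_rng(0)
n, m = 128, 8                                    # histogram and subspace dimensions
P, _ = np.linalg.qr(rng.normal(size=(n, m)))     # orthonormal basis of a random subspace
x_bar = rng.random(n)                            # stand-in for the segment centroid

def compress(x):
    """y = (z, delta): projected feature plus projection distance."""
    z = P.T @ (x - x_bar)
    delta = np.linalg.norm(x - (P @ z + x_bar))
    return np.concatenate([z, [delta]])

x_Q, x_S = rng.random(n), rng.random(n)
y_Q, y_S = compress(x_Q), compress(x_S)
assert np.linalg.norm(y_Q[:m] - y_S[:m]) <= np.linalg.norm(y_Q - y_S) + 1e-9
assert np.linalg.norm(y_Q - y_S) <= np.linalg.norm(x_Q - x_S) + 1e-9
```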
APPENDIX B PROOF OF THEOREM 2
The notations used in the previous section are also employed here. When the projected features z Q , z S and the projection distances
$\delta_Q \stackrel{\mathrm{def}}{=} \delta(p, x_Q)$, $\delta_S \stackrel{\mathrm{def}}{=} \delta(p, x_S)$
are given, we can obtain the distance between the original features as follows:
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,(x_Q - q(z_Q))\cdot(x_S - q(z_S)) \qquad(12) \\ &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,\delta_Q \delta_S \cos\phi, \end{aligned}$$
where Eq. (12) is derived from Eq. (10) and $\phi$ is the angle between $x_Q - q(z_Q)$ and $x_S - q(z_S)$. From the assumption that the random variables $X_S$ and $X_Q$ corresponding to the original histograms $x_S$ and $x_Q$ are distributed independently and uniformly in the set A, the following equation is obtained:
$$E\!\left[\|X_Q - X_S\|^2\right] - \|z_Q - z_S\|^2 = \int_0^{\pi} \left(\delta_Q^2 + \delta_S^2 - 2\,\delta_Q \delta_S \cos\phi\right) \frac{S_{n-m-1}(\delta_S \sin\phi)}{S_{n-m}(\delta_S)}\, \big|d(\delta_S \cos\phi)\big|, \qquad(13)$$
where S k (R) represents the surface area of a k-dimensional hypersphere with radius R, and can be calculated as follows:
$$S_k(R) = \frac{k\,\pi^{k/2}}{(k/2)!}\, R^{k-1} \qquad(14)$$
Substituting Eq. (14) into Eq. (13), we obtain
$$E\!\left[\|X_Q - X_S\|^2\right] - \|z_Q - z_S\|^2 = \frac{n-m-1}{n-m}\left(\delta_Q^2 + \delta_S^2\right) \approx \frac{n-m-1}{n-m}\,\delta_Q^2,$$
where the last approximation comes from the fact that $\delta_Q \gg \delta_S$. Also, from Eq. (4) we have
$$\|x_Q - x_S\|^2 - \|y_Q - y_S\|^2 = 2\,\delta_Q \delta_S (1 - \cos\phi).$$
Therefore, we derive the following equation in the same way:
| 7,588 |
0710.4180
|
2103921041
|
This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loeve (KL) transform. The proposed search method guarantees the same search results as the search method without the proposed feature-dimension reduction method in principle. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1 12 that of previous methods and detected queries in approximately 0.3 s from a 200-h audio database.
|
Dimensionality reduction methods for time-series signals are categorized into two types: temporal dimensionality reduction , namely dimensionality reduction along the temporal axis (e.g. feature sampling), and spatial dimensionality reduction , namely the dimensionality reduction of each multi-dimensional feature sample. Keogh @cite_3 @cite_9 and Wang @cite_27 have introduced temporal dimensionality reduction into waveform signal retrieval. Their framework considers the waveform itself as a feature for detecting similar signal segments. That is why they mainly focused on temporal dimensionality reduction. When considering audio fingerprinting, however, we handle sequences of high-dimensional features that are necessary to identify various kinds of audio segments. Thus, both spatial and temporal dimensionality reduction are required. To this end, our method mainly focuses on spatial dimensionality reduction. We also incorporate a temporal dimensionality reduction technique inspired by the method of @cite_9 , which is described in Section .
|
{
"abstract": [
"Fast retrieval of time series in terms of their contents is important in many application domains. This paper studies database techniques supporting fast searches for time series whose contents are similar to what users specify. The content types studied include shapes, trends, cyclic components, autocorrelation functions and partial autocorrelation functions. Due to the complex nature of the similarity searches involving such contents, traditional database techniques usually cannot provide a fast response when the involved data volume is high. This paper proposes to answer such content-based queries using appropriate approximation techniques. The paper then introduces two specific approximation methods, one is wavelet based and the other line-fitting based. Finally, the paper reports some experiments conducted on a stock price data set as well as a synthesized random walk data set, and shows that both approximation methods significantly reduce the query processing time without introducing intolerable errors.",
"The problem of similarity search in large time series databases has attracted much attention recently. It is a non-trivial problem because of the inherent high dimensionality of the data. The most promising solutions involve first performing dimensionality reduction on the data, and then indexing the reduced data with a spatial access method. Three major dimensionality reduction techniques have been proposed: Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and more recently the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Piecewise Aggregate Approximation (PAA). We theoretically and empirically compare it to the other techniques and demonstrate its superiority. In addition to being competitive with or faster than the other methods, our approach has numerous other advantages. It is simple to understand and to implement, it allows more flexible distance measures, including weighted Euclidean queries, and the index can be built in linear time.",
"Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searching, and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority."
],
"cite_N": [
"@cite_27",
"@cite_9",
"@cite_3"
],
"mid": [
"2167035411",
"2066796814",
"2163336863"
]
}
|
A quick search method for audio signals based on a piecewise linear representation of feature trajectories
|
This paper presents a method for searching quickly through unlabeled audio signal archives (termed stored signals) to detect and locate given audio clips (termed query signals) based on signal similarities.
Many studies related to audio retrieval have dealt with content-based approaches such as audio content classification [1], [2], speech recognition [3], and music transcription [3], [4]. Therefore, these studies mainly focused on associating audio signals with their meanings. In contrast, this study aims at achieving a similarity-based search or more specifically fingerprint identification, which constitutes a search of and retrieval from unlabeled audio archives based only on a signal similarity measure. That is, our objective is signal matching, not the association of signals with their semantics. Although the range of applications for a similarity-based search may seem narrow compared with content-based approaches, this is not actually the case. The applications include the detection and statistical analysis of broadcast music and commercial spots, and the content identification, detection and copyright management of pirated copies of music clips. Fig. 1 represents one of the most representative examples of such applications, which has already been put to practical use. This system automatically checks and identifies broadcast music clips or commercial spots to provide copyright information or other detailed information about the music or the spots.
In audio fingerprinting applications, the query and stored signals cannot be assumed to be exactly the same even in the corresponding sections of the same sound, owing to, for example, compression, transmission and irrelevant noises. Meanwhile, for the applications to be practically viable, the features should be compact and the feature analysis should be computationally efficient. Several feature extraction methods have been developed to attain these objectives. Cano et al. [5] modeled music segments as sequences of sound classes estimated via unsupervised clustering and hidden Markov models (HMMs). Burges et al. [6] employed several layers of Karhunen-Loève (KL) transforms, which reduced the local statistical redundancy of features with respect to time, and took account of robustness to shifting and pitching. Oostveen et al. [7] represented each frame of a video clip as a binary map and used the binary map sequence as a feature. This feature is robust to global changes in luminance and contrast variations. Haitsma et al. [8] and Kurozumi et al. [9] each employed a similar approach in the context of audio fingerprinting. Wang [10] developed a feature-point-based approach to improve the robustness. Our previous approach called the Time-series Active Search (TAS) method [11] introduced a histogram as a compact and noise-robust fingerprint, which models the empirical distribution of feature vectors in a segment. Histograms are sufficiently robust for monitoring broadcast music or detecting pirated copies. Another novelty of this approach is its effectiveness in accelerating the search. Adjacent histograms extracted from sliding audio segments are strongly correlated with each other. Therefore, unnecessary matching calculations are avoided by exploiting the algebraic properties of histograms.
Another important research issue regarding similarity-based approaches involves finding a way to speed up the search. Multi-dimensional indexing methods [12], [13] have frequently been used for accelerating searches. However, when feature vectors are high-dimensional, as they are typically with multimedia signals, the efficiency of the existing indexing methods deteriorates significantly [14], [15]. This is why search methods based on linear scans such as the TAS method are often employed for searches with high-dimensional features. However, methods based solely on linear scans may not be appropriate for managing large-scale signal archives, and therefore dimension reduction should be introduced to mitigate this effect.
To this end, this paper presents a quick and accurate audio search method that uses dimensionality reduction of histogram features. The method involves a piecewise linear representation of histogram sequences by utilizing the continuity and local correlation of the histogram sequences. A piecewise linear representation would be feasible for the TAS framework since the histogram sequences form trajectories in multi-dimensional spaces. By incorporating our method into the TAS framework, we significantly increase the search speed while guaranteeing the same search results as the TAS method. We introduce the following two techniques to obtain a piecewise representation: the dynamic segmentation of the feature trajectories and the segment-based KL transform.
The segment-based KL transform involves the dimensionality reduction of divided histogram sequences (called segments) by KL transform. We take advantage of the continuity and local correlation of feature sequences extracted from audio signals. Therefore, we expect to obtain a linear representation with few approximation errors and low computational cost. The segment-based KL transform consists of the following three components: The basic component of this technique reduces the dimensionality of histogram features. The second component that utilizes residuals between original histogram features and features after dimension reduction greatly reduces the required number of histogram comparisons. Feature sampling is introduced as the third component. This not only saves the storage space but also contributes to accelerating the search.
Dynamic segmentation refers to the division of histogram sequences into segments of various lengths to achieve the greatest possible reduction in the average dimensionality of the histogram features. One of the biggest problems in dynamic segmentation is that finding the optimal set of partitions that minimizes the average dimensionality requires a substantial calculation. The computational time must be no more than that needed for capturing audio signals from the viewpoint of practical applicability. To reduce the calculation cost, our technique addresses the quick suboptimal partitioning of the histogram trajectories, which consists of local optimization to avoid recursive calculations and the coarse-to-fine detection of segment boundaries. This paper is organized as follows: Section II introduces the notations and definitions necessary for the subsequent explanations. Section III explains the TAS method upon which our method is founded. Section IV outlines the proposed search method. Section V discusses a dimensionality reduction technique with the segment-based KL transform. Section VI details dynamic segmentation. Section VII presents experimental results related to the search speed and shows the advantages of the proposed method. Section VIII further discusses the advantages and shortcomings of the proposed method as well as providing additional experimental results. Section IX concludes the paper.
II. PRELIMINARIES
Let N be the set of all non-negative numbers, R be the set of all real numbers, and N n be a n-ary Cartesian product of N . Vectors are denoted by boldface lower-case letters, e.g. x, and matrices are denoted by boldface upper-case letters, e.g. A. The superscript t stands for the transposition of a vector or a matrix, e.g. x t or A t . The Euclidean norm of an n-dimensional vector x ∈ R n is denoted as x :
$$\|x\| \stackrel{\mathrm{def}}{=} \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2},$$
where $|x_i|$ denotes the absolute value of $x_i$. For any function f(·) and a random variable X, E[f(X)] stands for the expectation of f(X). Similarly, for a given value y ∈ Y, some function g(·, ·) and a random variable X, E[g(X, y)|y] stands for the conditional expectation of g(X, y) given y.
III. TIME-SERIES ACTIVE SEARCH
Fig. 2 outlines the Time-series Active Search (TAS) method, which is the basis of our proposed method. We provide a summary of the algorithm here. Details can be found in [11].
[Preparation stage] 1) Base features are extracted from the stored signal. Our preliminary experiments showed that the short-time frequency spectrum provides sufficient accuracy for our similarity-based search task. Base features are extracted at every sampled time step, for example, every 10 msec. Henceforth, we call the sampled points frames (the term was inspired by video frames). Base features are denoted as f S (t S ) (0 ≤ t S < L S ), where t S represents the position in the stored signal and L S is the length of the stored signal (i.e. the number of frames in the stored signal). 2) Every base feature is quantized by vector quantization (VQ). A codebook {f i } n i=1 is created beforehand, where n is the codebook size (i.e. the number of codewords in the codebook). We utilize the Linde-Buzo-Gray (LBG) algorithm [16] for codebook creation. A quantized base feature q S (t S ) is expressed as a VQ codeword assigned to the corresponding base feature f S (t S ), which is determined as
$$q_S(t_S) = \arg\min_{1 \le i \le n} \|f_S(t_S) - f_i\|^2 .$$
[Search stage] 1) Base features f Q (t Q ) (0 ≤ t Q < L Q ) of the query signal are extracted in the same way as the stored signal and quantized with the codebook {f i } n i=1 created in the preparation stage, where t Q represents the position in the query signal and L Q is its length. We do not have to take into account the calculation time for feature quantization since it takes less than 1% of the length of the signal. A quantized base feature for the query signal is denoted as q Q (t Q ).
2) Histograms are created; one for the stored signal denoted as x S (t S ) and the other for the query signal denoted as x Q . First, windows are applied to the sequences of quantized base features extracted from the query and stored signals. The window length W (i.e. the number of frames in the window) is set at W = L Q , namely the length of the query signal. A histogram is created by counting the instances of each VQ codeword over the window. Therefore, each index of a histogram bin corresponds to a VQ codeword. We note that a histogram does not take the codeword order into account. 3) Histogram matching is executed based on the distance between histograms, computed as
$$d(t_S) \stackrel{\mathrm{def}}{=} \|x_S(t_S) - x_Q\| .$$
When the distance d(t S ) falls below a given value (search threshold) θ, the query signal is considered to be detected at the position t S of the stored signal. 4) A window on the stored signal is shifted forward in time and the procedure returns to Step 2). As the window for the stored signal shifts forward in time, VQ codewords included in the window cannot change so rapidly, which means that histograms cannot also change so rapidly. This implies that for a given positive integer w the lower bound on the distance d(t S + w) is obtained from the triangular inequality as follows:
$$d(t_S + w) \ge \max\{0,\; d(t_S) - \sqrt{2}\,w\},$$
where $\sqrt{2}\,w$ is the maximum possible distance between $x_S(t_S)$ and $x_S(t_S + w)$: each one-frame shift of the window removes at most one codeword count and adds at most one, so the histogram can move by at most $\sqrt{2}$ per frame. Therefore, the skip width $w(t_S)$ of the window at the $t_S$-th frame is obtained as
$$w(t_S) = \begin{cases} \mathrm{floor}\!\left(\dfrac{d(t_S) - \theta}{\sqrt{2}}\right) + 1 & (\text{if } d(t_S) > \theta) \\ 1 & (\text{otherwise}) \end{cases} \qquad(1)$$
where floor(a) indicates the largest integer less than a. We note that no sections will ever be missed that have distance values smaller than the search threshold θ, even if we skip the width $w(t_S)$ given by Eq. (1).
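To make the scan concrete, the following Python sketch implements the search stage described above for already-quantized codeword sequences. The function names, the use of NumPy, and the strict detection rule d < θ are our own choices for illustration; the skip rule follows Eq. (1).

```python
import numpy as np

def histogram(codewords, start, W, n_codewords):
    """Histogram of VQ codewords over the window [start, start + W)."""
    h = np.zeros(n_codewords)
    idx, counts = np.unique(codewords[start:start + W], return_counts=True)
    h[idx] = counts
    return h

def tas_search(stored_codes, query_codes, n_codewords, theta):
    """Linear scan with the skip width of Eq. (1); returns matching positions."""
    W = len(query_codes)                                  # window length = query length
    x_Q = histogram(query_codes, 0, W, n_codewords)
    matches, t = [], 0
    while t <= len(stored_codes) - W:
        # (a real implementation would update the histogram incrementally)
        d = np.linalg.norm(histogram(stored_codes, t, W, n_codewords) - x_Q)
        if d < theta:
            matches.append(t)
            t += 1
        else:
            # d can decrease by at most sqrt(2) per frame, so this skip is safe
            t += int(np.floor((d - theta) / np.sqrt(2))) + 1
    return matches
```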
IV. FRAMEWORK OF PROPOSED SEARCH METHOD
The proposed method improves the TAS method so that the search is accelerated without false dismissals (incorrectly missing segments that should be detected) or false detections (identifying incorrect matches). To accomplish this, we introduce feature-dimension reduction as explained in Sections V and VI, which reduces the calculation costs required for matching. Fig. 3 shows an overview of the proposed search method, and Fig. 4 outlines the procedure for feature-dimension reduction. The procedure consists of a preparation stage and a search stage.
[Preparation stage] 1) Base features $f_S(t_S)$ are extracted from the stored signal and quantized, to create quantized base features $q_S(t_S)$. The procedure is the same as that of the TAS method. 2) Histograms $x_S(t_S)$ are created in advance from the quantized base features of the stored signal by shifting a window of a predefined length W. We note that with the TAS method the window length W varies from one search to another, while with the present method the window length W is fixed. This is because histograms $x_S(t_S)$ for the stored signal are created prior to the search. We should also note that the TAS method does not create histograms prior to the search because sequences of VQ codewords need much less storage space than histogram sequences. 3) A piecewise linear representation of the extracted histogram sequence is obtained (Fig. 4, block (A)). This representation is characterized by a set $T = \{t_j\}_{j=0}^{M}$ of segment boundaries expressed by their frame numbers and a set $\{p_j(\cdot)\}_{j=1}^{M}$ of M functions, where M is the number of segments, $t_0 = 0$ and $t_M = L_S$. The j-th segment is expressed as a half-open interval $[t_{j-1}, t_j)$ since it starts from $x_S(t_{j-1})$ and ends at $x_S(t_j - 1)$. Section VI shows how to obtain such segment boundaries. Each function $p_j(\cdot) : N^n \to R^{m_j}$ that corresponds to the j-th segment reduces the dimensionality n of the histogram to the dimensionality $m_j$. Section V-B shows how to determine these functions. 4) The histograms $x_S(t_S)$ are compressed by using the functions $\{p_j(\cdot)\}_{j=1}^{M}$ obtained in the previous step, and then compressed features $y_S(t_S)$ are created (Fig. 4). [Search stage] 1) Base features $f_Q(t_Q)$ are extracted and a histogram $x_Q$ is created from the query signal in the same way as in the TAS method.
2) The histogram x Q is compressed based on the functions {p j (·)} M j=1 obtained in the preparation stage, to create M compressed features y Q [j] (j = 1, · · · , M ). Each compressed feature y Q [j] corresponds to the j-th function p j (·). The procedure used to create compressed features is the same as that for the stored signal.
3) Compressed features created from the stored and query signals are matched, that is, the distance
$d(t_S) = \|y_S(t_S) - y_Q[j_{t_S}]\|$ between two compressed features $y_S(t_S)$ and $y_Q[j_{t_S}]$ is calculated,
where $j_{t_S}$ represents the index of the segment that contains $x_S(t_S)$, namely $t_{j_{t_S}-1} \le t_S < t_{j_{t_S}}$. 4) If the distance falls below the search threshold θ, the original histograms $x_S(t_S)$ corresponding to the surviving compressed features $y_S(t_S)$ are verified. Namely, the distance $d(t_S) = \|x_S(t_S) - x_Q\|$ is calculated and compared with the search threshold θ. 5) A window on the stored signal is shifted forward in time and the procedure goes back to Step 3).
The skip width of the window is calculated from the distance d(t S ) between compressed features.
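The two-stage matching in steps 3)-5) can be sketched as follows. This is our own simplified rendering: it assumes the compressed features have been precomputed and share a common dimensionality (in the actual method the dimensionality varies per segment), and it applies the √2-per-frame skip rule of Eq. (1) to the compressed-feature distance, which is safe because that distance never exceeds the original histogram distance (Theorem 1 below).

```python
import numpy as np

def search_compressed(y_stored, seg_of_frame, y_query, x_stored, x_query, theta):
    """Two-stage scan (steps 3-5): filter on compressed features, then verify
    surviving candidates against the original histograms.

    y_stored[t]       : compressed feature of the stored histogram at frame t
    seg_of_frame[t]   : index j of the segment containing frame t
    y_query[j]        : query histogram compressed with the j-th segment's map
    x_stored, x_query : original histograms, used only for verification
    """
    hits, t = [], 0
    while t < len(y_stored):
        j = seg_of_frame[t]
        d_hat = np.linalg.norm(y_stored[t] - y_query[j])
        if d_hat < theta:
            # candidate survives the compressed-feature filter; verify it
            if np.linalg.norm(x_stored[t] - x_query) < theta:
                hits.append(t)
            t += 1
        else:
            # d_hat never exceeds the true histogram distance (Theorem 1),
            # so the skip rule of Eq. (1) applied to d_hat misses nothing
            t += int(np.floor((d_hat - theta) / np.sqrt(2))) + 1
    return hits
```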
B. Segment-based KL transform
As the first step towards obtaining a piecewise representation, the histogram sequence is divided into M segments. Dynamic segmentation is introduced here, which enhances feature-dimension reduction performance. This will be explained in detail in Section VI. Second, a KL transform is performed for every segment and a minimum number of eigenvectors are selected such that the sum of their contribution rates exceeds a predefined value σ, where the contribution rate of an eigenvector stands for its eigenvalue divided by the sum of all eigenvalues, and the predefined value σ is called the contribution threshold. The number of selected eigenvectors in the j-th segment is written as m j . Then, a function p j (·) : N n → R m j (j = 1, 2, · · · , M ) for dimensionality reduction is determined as a map to a subspace whose bases are the selected eigenvectors:
$$p_j(x) = P_j^{t}\,(x - \bar{x}_j), \qquad(2)$$
where x is a histogram, $\bar{x}_j$ is the centroid of the histograms contained in the j-th segment, and $P_j$ is an $(n \times m_j)$ matrix whose columns are the selected eigenvectors. Finally, each histogram is compressed by using the function $p_j(\cdot)$ of the segment to which the histogram belongs. Henceforth, we refer to $p_j(x)$ as a projected feature of a histogram x.
In the following, we omit the index j corresponding to a segment unless it is specifically needed, e.g. $p(x)$ and $\bar{x}$.
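A minimal sketch of the per-segment KL transform, assuming NumPy and using our own function names; X_seg is the (frames × n) array of histograms belonging to one segment, and the number of retained eigenvectors is the smallest one whose accumulated contribution rate reaches the threshold σ:

```python
import numpy as np

def segment_klt(X_seg, sigma=0.9):
    """KL transform (PCA) of one histogram segment X_seg (frames x n).

    Returns the centroid x_bar and the basis P (n x m_j) spanned by the
    eigenvectors whose accumulated contribution rate first reaches sigma.
    """
    x_bar = X_seg.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X_seg, rowvar=False))
    order = np.argsort(eigvals)[::-1]                  # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = np.cumsum(eigvals) / eigvals.sum()
    m = int(np.searchsorted(contrib, sigma)) + 1       # smallest m reaching sigma
    return x_bar, eigvecs[:, :m]

def project(P, x_bar, x):
    """Eq. (2): p_j(x) = P_j^t (x - x_bar_j)."""
    return P.T @ (x - x_bar)
```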
C. Distance bounding
From the nature of the KL transform, the distance between two projected features gives the lower bound of the distance between corresponding original histograms. However, this bound does not approximate the original distance well, and this results in many false detections.
To improve the distance bound, we introduce a new technique. Let us define a projection distance $\delta(p, x)$ as the distance between a histogram x and the reconstruction $q(z)$ of its projected feature $z = p(x)$:
$$\delta(p, x) \stackrel{\mathrm{def}}{=} \|x - q(z)\|, \qquad(3)$$
where q(·) : R m → R n is the generalized inverse map of p(·), defined as
$$q(z) \stackrel{\mathrm{def}}{=} P z + \bar{x}.$$
Here we create a compressed feature y, which is the projected feature z = (z 1 , z 2 , · · · , z m ) t along with the projection distance δ(p, x):
$$y = y(p, x) = (z_1, z_2, \cdots, z_m, \delta(p, x))^{t},$$
where y(p, x) means that y is determined by p and x. The Euclidean distance between compressed features is utilized as a new criterion for matching instead of the Euclidean distance between projected features. The distance is expressed as
$$\|y_S - y_Q\|^2 = \|z_S - z_Q\|^2 + \{\delta(p, x_S) - \delta(p, x_Q)\}^2, \qquad(4)$$
where $z_S = p(x_S)$ (resp. $z_Q = p(x_Q)$) is the projected feature derived from the original histogram $x_S$ (resp. $x_Q$) and $y_S = y_S(p, x_S)$ (resp. $y_Q = y_Q(p, x_Q)$) is the corresponding compressed feature. Eq. (4) implies that the distance between compressed features is no smaller than the distance between the corresponding projected features. In addition, from the above discussions, we have the following two properties, which indicate that the distance $\|y_S - y_Q\|$ between two compressed features is a better approximation of the distance $\|x_S - x_Q\|$ between the original histograms than the distance $\|z_S - z_Q\|$ between projected features (Theorem 1), and that the expected approximation error is much smaller (Theorem 2). Theorem 1:
$$\|z_S - z_Q\| \le \|y_S - y_Q\| = \min_{(\tilde{x}_S, \tilde{x}_Q) \in A(y_S, y_Q)} \|\tilde{x}_S - \tilde{x}_Q\| \le \|x_S - x_Q\|, \qquad(5)$$
where $A(y_S, y_Q)$ is the set of all possible pairs $(\tilde{x}_S, \tilde{x}_Q)$ of original histograms for the given compressed features $(y_S, y_Q)$.
Fig. 6. Intuitive illustration of the relationships between the projection distance, the distance between projected features and the distance between compressed features.
Theorem 2: Suppose that the random variables $(X_S^n, X_Q^n)$ corresponding to the original histograms $(x_S, x_Q)$ have a uniform distribution on the set $A(y_S, y_Q)$ defined in Theorem 1, and that $E[\delta(p, X_S^n)] \ll E[\delta(p, X_Q^n)]$. The expected approximation errors can be evaluated as
$$E\!\left[\|X_S^n - X_Q^n\|^2 - \|y_S - y_Q\|^2 \,\middle|\, y_S, y_Q\right] \ll E\!\left[\|X_S^n - X_Q^n\|^2 - \|z_S - z_Q\|^2 \,\middle|\, y_S, y_Q\right]. \qquad(6)$$
The proofs are shown in the appendix. Fig. 6 shows an intuitive illustration of the relationships between projection distances, distances between projected features and distances between compressed features, where the histograms are in a 3-dimensional space and the subspace dimensionality is 1. In this case, for given compressed features (y S , y Q ) and a fixed query histogram x Q , a stored histogram x S must be on a circle whose center is q(z Q ). This circle corresponds to the set A(y S , y Q ).
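For illustration, constructing the compressed feature of Eqs. (3)-(4) amounts to appending the projection-residual norm to the projected feature. A sketch under the same assumptions as the earlier code (our own names, NumPy):

```python
import numpy as np

def compress(P, x_bar, x):
    """Compressed feature y = (z_1, ..., z_m, delta(p, x)) of Eqs. (3)-(4):
    the projected feature z together with the distance between x and its
    reconstruction q(z) = P z + x_bar.  Matching then compares ||y_S - y_Q||
    instead of ||z_S - z_Q||, which tightens the lower bound on ||x_S - x_Q||.
    """
    z = P.T @ (x - x_bar)
    delta = np.linalg.norm(x - (P @ z + x_bar))
    return np.concatenate([z, [delta]])
```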
D. Feature sampling
In the TAS method, quantized base features are stored, because they need much less storage space than the histogram sequence and creating histograms on the spot takes little calculation. With the present method, however, compressed features must be computed and stored in advance so that the search results can be returned as quickly as possible, and therefore much more storage space is needed than with the TAS method. The increase in storage space may cause a reduction in search speed due to the increase in disk access.
Based on the above discussion, we incorporate feature sampling in the temporal domain. The following idea is inspired by the technique called Piecewise Aggregate Approximation (PAA) [22]. With the proposed feature sampling method, first a compressed feature sequence
$\{y_S(t_S)\}_{t_S=0}^{L_S-W-1}$
is divided into subsequences $\{y_S(ia), y_S(ia+1), \cdots, y_S(ia+a-1)\}_{i=0,1,\cdots}$ of length a. Then, the first compressed feature $y_S(ia)$ of every subsequence is selected as a representative feature. A lower bound of the distances between the query and stored compressed features contained in the subsequence can be expressed in terms of the representative feature $y_S(ia)$. This bound is obtained from the triangular inequality as follows:
$$\|y_S(ia + k) - y_Q\| \ge \|y_S(ia) - y_Q\| - d(i), \qquad d(i) \stackrel{\mathrm{def}}{=} \max_{0 \le k' \le a-1} \|y_S(ia + k') - y_S(ia)\|$$
$(\forall i = 0, 1, \cdots,\ \forall k = 0, \cdots, a-1)$ This implies that preserving the representative feature $y_S(ia)$ and the maximum distance $d(i)$ is sufficient to guarantee that there are no false dismissals. This feature sampling is feasible for histogram sequences because successive histograms cannot change rapidly. Furthermore, the technique mentioned in this section will also contribute to accelerating the search, especially when successive histograms change little.
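A sketch of this feature sampling, with our own names and the simplifying assumption that all compressed features in a subsequence share one dimensionality; it keeps one representative per length-a block together with the in-block maximum drift, and uses the triangle-inequality bound to discard whole blocks:

```python
import numpy as np

def sample_features(Y, a):
    """Keep one representative per block of length a, plus the maximum
    in-block drift d(i) = max_k ||y(ia + k) - y(ia)||."""
    reps, drift = [], []
    for i in range(0, len(Y), a):
        block = Y[i:i + a]
        reps.append(block[0])
        drift.append(max(np.linalg.norm(y - block[0]) for y in block))
    return reps, drift

def block_may_contain_match(y_rep, drift_i, y_query, theta):
    """Triangle inequality: every frame in the block is at distance at least
    ||y_rep - y_query|| - drift_i from the query, so the whole block can be
    discarded when this bound is already >= theta."""
    return np.linalg.norm(y_rep - y_query) - drift_i < theta
```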
VI. DYNAMIC SEGMENTATION A. Related work
The approach used for dividing histogram sequences into segments is critical for realizing efficient feature-dimension reduction since the KL transform is most effective when the constituent elements in the histogram segments are similar. To achieve this, we introduce a dynamic segmentation strategy.
Dynamic segmentation is a generic term that refers to techniques for dividing sequences into segments of various lengths. Dynamic segmentation methods for time-series signals have already been applied to various kinds of applications such as speech coding (e.g. [24]), the temporal compression of waveform signals [25], the automatic segmentation of speech signals into phonic units [26], sinusoidal modeling of audio signals [27], [28], [29] and motion segmentation in video signals [30]. We employ dynamic segmentation to minimize the average dimensionality of high-dimensional feature trajectories.
Dynamic segmentation can improve dimension reduction performance. However, finding the optimal boundaries still requires a substantial calculation. With this in mind, several studies have adopted suboptimal approaches, such as longest line fitting [23], wavelet decomposition [23], [21] and the bottom-up merging of segments [31]. The first two approaches still incur a substantial calculation cost for long time-series signals. The last approach is promising as regards obtaining a rough global approximation at a practical calculation cost. This method is compatible with ours; however, we mainly focus on a more precise local optimization.
B. Framework
Fig. 7 shows an outline of our dynamic segmentation method. The objective of the dynamic segmentation method is to divide the stored histogram sequence so that its piecewise linear representation is well characterized by a set of lower-dimensional subspaces. To this end, we formulate the dynamic segmentation as a way to find a set $T^* = \{t_j^*\}_{j=0}^{M}$ of segment boundaries that minimizes the average dimensionality of these segment-approximating subspaces on condition that the boundary $t_j^*$ between the j-th and the (j+1)-th segments is in a shiftable range $S_j$, which is defined as a section with a width $\Delta$ in the vicinity of the initial position $t_j^0$ of the boundary between the j-th and the (j+1)-th segments. Namely, the set $T^*$ of the optimal segment boundaries is given by the following formula:
$$T^* = \{t_j^*\}_{j=0}^{M} \stackrel{\mathrm{def}}{=} \arg\min_{\{t_j\}_{j=0}^{M}:\, t_j \in S_j\ \forall j}\ \frac{1}{L_S} \sum_{j=1}^{M} (t_j - t_{j-1})\cdot c(t_{j-1}, t_j, \sigma) \qquad(7)$$
$$S_j \stackrel{\mathrm{def}}{=} \{t_j : t_j^0 - \Delta \le t_j \le t_j^0 + \Delta\} \qquad(8)$$
where $c(t_i, t_j, \sigma)$ represents the subspace dimensionality on the segment between the $t_i$-th and the $t_j$-th frames for a given contribution threshold $\sigma$, $t_0^* = 0$ and $t_M^* = L_S$. The initial positions of the segment boundaries are set beforehand by equi-partitioning.
The above optimization problem defined by Eq. (7) would normally be solved with dynamic programming (DP) (e.g. [32]). However, DP is not practical in this case. Deriving c(t j−1 , t j , σ) included in Eq. (7) incurs a substantial calculation cost since it is equivalent to executing a KL transform calculation for the segment [t j−1 , t j ). This implies that the DP-based approach requires a significant amount of calculation, although less than a naive approach. The above discussion implies that we should reduce the number of KL transform calculations to reduce the total calculation cost required for the optimization. When we adopt the total number of KL transform calculations as a measure for assessing the calculation cost, the cost is evaluated as O(M ∆ 2 ), where M is the number of segments and ∆ is the width of the shiftable range.
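The cost term $c(t_{j-1}, t_j, \sigma)$ in Eq. (7) is simply the number of KL bases needed for the segment $[t_{j-1}, t_j)$ at contribution threshold σ. A sketch of this term and of the objective of Eq. (7) follows (our own names, NumPy); evaluating it over all candidate boundary placements is exactly what makes the exact DP optimization expensive:

```python
import numpy as np

def segment_dim(X, t_start, t_end, sigma):
    """c(t_start, t_end, sigma): number of KL bases whose accumulated
    contribution rate first reaches sigma on the segment [t_start, t_end)."""
    eigvals = np.linalg.eigvalsh(np.cov(X[t_start:t_end], rowvar=False))[::-1]
    contrib = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(contrib, sigma)) + 1

def average_dim(X, boundaries, sigma):
    """Objective of Eq. (7): length-weighted average subspace dimensionality
    of the segmentation given by the boundary list (t_0 = 0, ..., t_M = L_S)."""
    L_S = boundaries[-1]
    return sum((t1 - t0) * segment_dim(X, t0, t1, sigma)
               for t0, t1 in zip(boundaries[:-1], boundaries[1:])) / L_S
```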
To reduce the calculation cost, we instead adopt a suboptimal approach. Two techniques are incorporated: local optimization and the coarse-to-fine detection of segment boundaries. We explain these two techniques in the following sections.
C. Local optimization
The local optimization technique modifies the formulation (Eq. (7)) of dynamic segmentation so that it minimizes the average dimensionality of the subspaces of adjoining segments. The basic idea is similar to the "forward segmentation" technique introduced by Goodwin [27], [28] for deriving accurate sinusoidal models of audio signals. The position t * j of the boundary is determined by using the following forward recursion as a substitute for Eq. (7):
$$t_j^* = \arg\min_{t_j \in S_j} \frac{(t_j - t_{j-1}^*)\, c_j^* + (t_{j+1}^0 - t_j)\, c_{j+1}^0}{t_{j+1}^0 - t_{j-1}^*}, \qquad(9)$$
where
$$c_j^* = c(t_{j-1}^*, t_j, \sigma), \qquad c_{j+1}^0 = c(t_j, t_{j+1}^0, \sigma)$$
, and S j is defined in Eq. (8). As can be seen in Eq. (9), we can determine each segment boundary independently, unlike the formulation of Eq. (7). Therefore, the local optimization technique can reduce the amount of calculation needed for extracting an appropriate representation, which is evaluated as O(M ∆), where M is the number of segments and ∆ is the width of the shiftable range.
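A sketch of the forward recursion of Eq. (9), reusing the segment_dim helper from the previous sketch; the names and the brute-force scan over the shiftable range are our own choices, and the coarse-to-fine refinement of the next subsection is not included here:

```python
import numpy as np

def local_optimize_boundaries(X, init_bounds, delta, sigma):
    """Forward recursion of Eq. (9): shift each boundary t_j within its
    shiftable range so that the length-weighted dimensionality of the two
    adjoining segments is minimised, the left neighbour t*_{j-1} being fixed.
    Uses segment_dim() from the previous sketch."""
    bounds = list(init_bounds)                 # bounds[0] = 0, bounds[-1] = L_S
    for j in range(1, len(bounds) - 1):
        t_prev, t0_next = bounds[j - 1], init_bounds[j + 1]
        lo = max(t_prev + 2, init_bounds[j] - delta)     # keep segments non-empty
        hi = min(t0_next - 2, init_bounds[j] + delta)
        best_t, best_cost = bounds[j], np.inf
        for t in range(lo, hi + 1):
            cost = ((t - t_prev) * segment_dim(X, t_prev, t, sigma)        # c*_j
                    + (t0_next - t) * segment_dim(X, t, t0_next, sigma)    # c0_{j+1}
                    ) / (t0_next - t_prev)
            if cost < best_cost:
                best_t, best_cost = t, cost
        bounds[j] = best_t
    return bounds
```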
D. Coarse-to-fine detection
The coarse-to-fine detection technique selects suboptimal boundaries in the sense of Eq. (9) with less computational cost. We note that small boundary shifts do not contribute greatly to changes in segment dimensionality because successive histograms cannot change rapidly. With this in mind, we proceed in the following three steps: 1) The dimensions of the j-th and (j+1)-th segments are calculated when the segment boundary $t_j$ is at the initial position $t_j^0$ and at the edges ($t_j^0 - \Delta$ and $t_j^0 + \Delta$) of its shiftable range.
2) The dimensions of the j-th and (j+1)-th segments are calculated when the segment boundary $t_j$ is at the positions $t_j^0 - \Delta + \frac{2\Delta}{u_j+1}\, i$ $(i = 1, 2, \cdots, u_j)$, where $u_j$ determines the number of calculations in this step.
3) The dimensions of the j-th and (j + 1)-th segments are calculated in detail when the segment boundary t j is in the positions where dimension changes are detected in the previous step. We determine the number u j of dimension calculations in step 2 so that the number of calculations in all the above steps, f j (u j ), is minimized. Then, f j (u j ) is given as follows:
$$f_j(u_j) = 2\,(3 + u_j) + \frac{4 K_j \Delta}{u_j + 2},$$
where K j is the estimated number of positions where the dimensionalities change, which is experimentally determined as
$$K_j = \begin{cases} c_{LR} - c_{LL}, & \text{if } c_{LR} \le c_{RR},\ c_{LL} < c_{RL} \\ (c_{LC} - c_{LL}) + \min(c_{RC}, c_{LR}) - \min(c_{LC}, c_{RR}), & \text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} \le c_{RC} \\ (c_{RC} - c_{RR}) + \min(c_{LC}, c_{RL}) - \min(c_{RC}, c_{LL}), & \text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} > c_{RC} \\ c_{RL} - c_{RR}, & \text{otherwise} \end{cases}$$
and
$$c_{LL} = c(t_{j-1}^*, t_j^0 - \Delta, \sigma), \quad c_{RL} = c(t_j^0 - \Delta, t_{j+1}^0, \sigma), \quad c_{LC} = c(t_{j-1}^*, t_j^0, \sigma),$$
$$c_{RC} = c(t_j^0, t_{j+1}^0, \sigma), \quad c_{LR} = c(t_{j-1}^*, t_j^0 + \Delta, \sigma), \quad c_{RR} = c(t_j^0 + \Delta, t_{j+1}^0, \sigma).$$
The first term of $f_j(u_j)$ refers to the number of calculations in steps 1 and 2, and the second term corresponds to that in step 3. $f_j(u_j)$ takes the minimum value $4\sqrt{2K_j\Delta} + 2$ when $u_j = \sqrt{2K_j\Delta} - 2$. The calculation cost when incorporating the local optimization and coarse-to-fine detection techniques is evaluated as follows:
$$E\!\left[M\left(4\sqrt{2K_j\Delta} + 2\right)\right] \le M\left(4\sqrt{2K\Delta} + 2\right) = O\!\left(M\sqrt{K\Delta}\right), \quad \text{where } K = E[K_j]$$
, M is the number of segments and ∆ is the width of the shiftable range. The first inequality is derived from Jensen's inequality (e.g. [33, Theorem 2.6.2]). The coarse-to-fine detection technique can additionally reduce the calculation cost because K is usually much smaller than ∆.
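A simplified sketch of the coarse-to-fine idea for a single boundary, again reusing segment_dim from the earlier sketch; it probes the shiftable range coarsely and refines only where the segment dimensionalities change. The adaptive choice of $u_j$ via the $K_j$ estimate above is omitted, so this illustrates the principle rather than the exact procedure:

```python
import numpy as np

def coarse_to_fine_boundary(X, t_prev, t0_j, t0_next, delta, sigma, u):
    """Probe the shiftable range at u + 2 coarse positions, then refine
    exhaustively only inside the coarse intervals where the two segment
    dimensionalities change."""
    def dims(t):
        return (segment_dim(X, t_prev, t, sigma), segment_dim(X, t, t0_next, sigma))
    def cost(t):
        c_l, c_r = dims(t)
        return ((t - t_prev) * c_l + (t0_next - t) * c_r) / (t0_next - t_prev)

    coarse = np.linspace(t0_j - delta, t0_j + delta, u + 2).astype(int)
    coarse_dims = [dims(int(t)) for t in coarse]
    candidates = {int(t) for t in coarse}
    for k in range(len(coarse) - 1):
        if coarse_dims[k] != coarse_dims[k + 1]:   # a dimension change lies in between
            candidates.update(range(int(coarse[k]) + 1, int(coarse[k + 1])))
    return min(candidates, key=cost)
```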
VII. EXPERIMENTS A. Conditions
We tested the proposed method in terms of calculation cost in relation to search speed. We again note that the proposed search method guarantees the same search results as the TAS method in principle, and therefore we need to evaluate the search speed. The search accuracy for the TAS method was reported in a previous paper [11]. In summary, for audio identification tasks, there were no false detections or false dismissals down to an S/N ratio of 20 dB if the query duration was longer than 10 seconds.
In the experiments, we used a recording of a real TV broadcast. An audio signal broadcast from a particular TV station was recorded and encoded in MPEG-1 Layer 3 (MP3) format. We recorded a 200-hour audio signal as the stored signal, and recorded 200 15-second spots from another TV broadcast as queries. Thus, the task was to detect and locate specific commercial spots from 200 consecutive hours of TV recording. Each spot occurred 2-30 times in the stored signal. Each signal was first digitized at a 32 kHz sampling frequency and 16-bit quantization accuracy. The bit rate for the MP3 encoding was 56 kbps. We extracted base features from each audio signal using a 7-channel second-order IIR band-pass filter bank with Q = 10. The center frequencies of the filters were equally spaced on a log frequency scale. The base features were calculated every 10 milliseconds from a 60-millisecond window. The base feature vectors were quantized by using the VQ codebook with 128 codewords, and histograms were created based on the scheme of the TAS method. Therefore, the histogram dimension was 128. We implemented the feature sampling described in Section V-D and the sampling duration was a = 50. The tests were carried out on a PC (Pentium 4, 2.0 GHz).
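For orientation, the base-feature analysis described above might be sketched as follows. The filter design (SciPy's iirpeak resonators as a stand-in for the band-pass filter bank), the frequency range, and the per-band RMS energy measure are our own assumptions; only the channel count, Q, frame period and window length follow the text.

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

def base_features(signal, fs=32000, n_ch=7, Q=10, frame_ms=10, win_ms=60,
                  f_lo=100.0, f_hi=8000.0):
    """Rough sketch of the base-feature analysis: a bank of second-order
    resonant (band-pass) filters with log-spaced centre frequencies, whose
    per-band RMS energies are measured every frame_ms over a win_ms window.
    The frequency range f_lo..f_hi and the RMS measure are assumptions."""
    centres = np.geomspace(f_lo, f_hi, n_ch)
    hop, win = int(fs * frame_ms / 1000), int(fs * win_ms / 1000)
    band_outputs = [lfilter(*iirpeak(fc, Q, fs=fs), signal) for fc in centres]
    n_frames = max(0, (len(signal) - win) // hop + 1)
    feats = np.empty((n_frames, n_ch))
    for t in range(n_frames):
        s = slice(t * hop, t * hop + win)
        feats[t] = [np.sqrt(np.mean(y[s] ** 2)) for y in band_outputs]
    return feats
```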
B. Search speed
We first measured the CPU time and the number of matches in the search. The search time we measured in this test comprised only the CPU time in the search stage shown in Section IV. This means that the search time did not include the CPU time for any procedures in the preparation stage such as base feature extraction, histogram creation, or histogram dimension reduction for the stored signal. The search threshold was adjusted to θ = 85 so that there were no false detections or false dismissals. We compared the following methods:
(i) The TAS method (baseline). (ii) The proposed search method without the projection distance being embedded in the compressed features. (iii) The proposed search method.
We first examined the relationships between the average segment duration (equivalent to the number of segments), the search time, and the number of matches. The following parameters were set for feature-dimension reduction: The contribution threshold was σ = 0.9. The width of the shiftable range for dynamic segmentation was 500. Fig. 10 shows the relationship between the average segment duration and the search time, where the ratio of the search speed of the proposed method to that of the TAS method (conventional method in the figure) is called the speed-up factor. Also, Fig. 11 shows the relationship between the average segment duration and the number of matches. Although the proposed method only slightly increased the number of matches, it greatly reduced the search time. This is because it greatly reduced the calculation cost per match owing to feature-dimension reduction. For example, the proposed method reduced the search time to almost 1/12 of that of the TAS method when the segment duration was 1.2 minutes (i.e. the number of segments was 10000). As mentioned in Section V-D, the feature sampling technique also contributed to the acceleration of the search, and its effect is similar to histogram skipping. Comparing these results with the dimension reduction performance described later, we found that these effects contributed more to the speed-up than dimension reduction itself when the segment duration was large (i.e. the number of segments was small). This is examined in detail in the next section. We also found that the proposed method reduced the search time and the number of matches when the distance bounding technique was incorporated, especially when there were a large number of segments.
VIII. DISCUSSION The previous section described the experimental results solely in terms of search speed and the advantages of the proposed method compared with the previous method. This section provides further discussion of the advantages and shortcomings of the proposed method as well as additional experimental results.
We first deal with the dimension reduction performance derived from the segment-based KL transform. We employed equi-partitioning to obtain segments, which means that we did not incorporate the dynamic segmentation technique. Fig. 12 shows the experimental result. The proposed method monotonically reduced the dimensions as the number of segments increased if the segment duration was shorter than 10 hours (the number of segments M ≥ 20). We can see that the proposed method reduced the dimensions, for example, to 1/25 of the original histograms when the contribution threshold was 0.90 and the segment duration was 1.2 minutes (the number of segments was 10000). The average dimensions did not decrease as the number of segments increased if the number of segments was relatively small. This is because we decided the number of subspace bases based on the contribution rates. Next, we deal with the dimension reduction performance derived from the dynamic segmentation technique. The initial positions of the segment boundaries were set by equi-partitioning. The duration of segments obtained by equi-partitioning was 12 minutes (i.e. there were 1000 segments). Fig. 13 shows the result. The proposed method further reduced the feature dimensionality to 87.5% of its initial value, which is almost the same level of performance as when only the local search was utilized. We were unable to calculate the average dimensionality when using DP because of the substantial amount of calculation, as described later. When the shiftable range was relatively narrow, the dynamic segmentation performance was almost the same as that of DP.
Here, we review the search speed performance shown in Fig. 10. It should be noted that three techniques in our proposed method contributed to speeding up the search, namely feature-dimension reduction, distance bounding and feature sampling. When the number of segments was relatively small, the speed-up factor was much larger than the ratio of the dimension of the compressed features to that of the original histograms, which can be seen in Figs. 10, 12 and 13. This implies that the feature sampling technique dominated the search performance in this case. On the other hand, when the number of segments was relatively large, the proposed search method did not greatly improve the search speed compared with the dimension reduction performance. This implies that the feature sampling technique degraded the search performance. In this case, the distance bounding technique mainly contributed to the improvement of the search performance as seen in Fig. 10.
Lastly, we discuss the amount of calculation necessary for dynamic segmentation. We again note that although dynamic segmentation can be executed prior to providing a query signal, the computational time must be at worst smaller than the duration of the stored signal from the viewpoint of practical applicability. We adopted the total number of dimension calculations needed to obtain the dimensions of the segments as a measure for comparing the calculation cost in the same way as in Section VI. Fig. 14 shows the estimated calculation cost for each dynamic segmentation method. We compared our method incorporating local optimization and coarse-to-fine detection with the DP-based method and a case where only the local optimization technique was incorporated. The horizontal line along with "Real-time processing" indicates that the computational time is almost the same as the duration of the signal. The proposed method required much less computation than with DP or local optimization. For example, when the width of the shiftable range was 500, the calculation cost of the proposed method was 1/5000 that of DP and 1/10 that with local optimization. We note that in this experiment, the calculation cost of the proposed method is less than the duration of the stored signal, while those of the other two methods are much longer.
IX. CONCLUDING REMARKS This paper proposed a method for undertaking quick similarity-based searches of an audio signal to detect and locate similar segments to a given audio clip. The proposed method was built on the TAS method, where audio segments are modeled by using histograms. With the proposed method, the histograms are compressed based on a piecewise linear representation of histogram sequences. We introduce dynamic segmentation, which divides histogram sequences into segments of variable lengths. We also addressed the quick suboptimal partitioning of the histogram sequences along with local optimization and coarse-to-fine detection techniques. Experiments revealed significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12, and detected the query in about 0.3 seconds from a 200-hour audio database. Although this paper focused on audio signal retrieval, the proposed method can be easily applied to video signal retrieval [34], [35]. Although the method proposed in this paper is founded on the TAS method, we expect that some of the techniques we have described could be used in conjunction with other similarity-based search methods (e.g. [36], [37], [38], [39]) or a speech/music discriminator [40]. Future work includes the implementation of indexing methods suitable for piecewise linear representation, and the dynamic determination of the initial segmentation, both of which have the potential to improve the search performance further. APPENDIX A PROOF OF THEOREM 1 First, let us define
$z_Q \stackrel{\mathrm{def}}{=} p(x_Q)$, $z_S \stackrel{\mathrm{def}}{=} p(x_S)$, $\hat{x}_Q \stackrel{\mathrm{def}}{=} q(z_Q) = q(p(x_Q))$, $\hat{x}_S \stackrel{\mathrm{def}}{=} q(z_S) = q(p(x_S))$, $\delta_Q \stackrel{\mathrm{def}}{=} \delta(p, x_Q)$, $\delta_S \stackrel{\mathrm{def}}{=} \delta(p, x_S)$.
We note that for any histogram $x \in N^n$, $\hat{x} = q(p(x))$ is the projection of $x$ into the subspace defined by the map $p(\cdot)$, and therefore $x - \hat{x}$ is a normal vector of the subspace of $p(\cdot)$. Also, we note that $\|x - \hat{x}\| = \delta(p, x)$ and that $\hat{x}$ is on the subspace of $p(\cdot)$. For two vectors $x_1$ and $x_2$, their inner product is denoted as $x_1 \cdot x_2$. Then, we obtain
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|(x_Q - \hat{x}_Q) - (x_S - \hat{x}_S) + (\hat{x}_Q - \hat{x}_S)\|^2 \\ &= \|x_Q - \hat{x}_Q\|^2 + \|x_S - \hat{x}_S\|^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \\ &\qquad + 2(x_Q - \hat{x}_Q)\cdot(\hat{x}_Q - \hat{x}_S) - 2(x_S - \hat{x}_S)\cdot(\hat{x}_Q - \hat{x}_S) \\ &= \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \qquad(10) \\ &\ge \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2\,\delta(p, x_Q)\,\delta(p, x_S) \qquad(11) \\ &= \{\delta(p, x_Q) - \delta(p, x_S)\}^2 + \|z_Q - z_S\|^2 \\ &= \|y_Q - y_S\|^2, \end{aligned}$$
where Eq. (10) comes from the fact that any vector on a subspace and the normal vector of the subspace are mutually orthogonal, and Eq. (11) from the definition of inner product. This concludes the proof of Theorem 1.
APPENDIX B PROOF OF THEOREM 2
The notations used in the previous section are also employed here. When the projected features z Q , z S and the projection distances
$\delta_Q \stackrel{\mathrm{def}}{=} \delta(p, x_Q)$, $\delta_S \stackrel{\mathrm{def}}{=} \delta(p, x_S)$
are given, we can obtain the distance between the original features as follows:
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,(x_Q - q(z_Q))\cdot(x_S - q(z_S)) \qquad(12) \\ &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,\delta_Q \delta_S \cos\phi, \end{aligned}$$
where Eq. (12) is derived from Eq. (10) and $\phi$ is the angle between $x_Q - q(z_Q)$ and $x_S - q(z_S)$. From the assumption that the random variables $X_S$ and $X_Q$ corresponding to the original histograms $x_S$ and $x_Q$ are distributed independently and uniformly in the set A, the following equation is obtained:
$$E\!\left[\|X_Q - X_S\|^2\right] - \|z_Q - z_S\|^2 = \int_0^{\pi} \left(\delta_Q^2 + \delta_S^2 - 2\,\delta_Q \delta_S \cos\phi\right) \frac{S_{n-m-1}(\delta_S \sin\phi)}{S_{n-m}(\delta_S)}\, \big|d(\delta_S \cos\phi)\big|, \qquad(13)$$
where S k (R) represents the surface area of a k-dimensional hypersphere with radius R, and can be calculated as follows:
$$S_k(R) = \frac{k\,\pi^{k/2}}{(k/2)!}\, R^{k-1} \qquad(14)$$
Substituting Eq. (14) into Eq. (13), we obtain
$$E\!\left[\|X_Q - X_S\|^2\right] - \|z_Q - z_S\|^2 = \frac{n-m-1}{n-m}\left(\delta_Q^2 + \delta_S^2\right) \approx \frac{n-m-1}{n-m}\,\delta_Q^2,$$
where the last approximation comes from the fact that $\delta_Q \gg \delta_S$. Also, from Eq. (4) we have
$$\|x_Q - x_S\|^2 - \|y_Q - y_S\|^2 = 2\,\delta_Q \delta_S (1 - \cos\phi).$$
Therefore, we derive the following equation in the same way:
| 7,588 |
0710.4180
|
2103921041
|
This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loeve (KL) transform. The proposed search method guarantees the same search results as the search method without the proposed feature-dimension reduction method in principle. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1 12 that of previous methods and detected queries in approximately 0.3 s from a 200-h audio database.
|
Dynamic segmentation is a generic term that refers to techniques for dividing sequences into segments of various lengths. Dynamic segmentation methods for time-series signals have already been applied to various kinds of applications such as speech coding (e.g. @cite_33 ), the temporal compression of waveform signals @cite_1 , the automatic segmentation of speech signals into phonic units @cite_24 , sinusoidal modeling of audio signals @cite_4 @cite_21 @cite_6 and motion segmentation in video signals @cite_25 . We employ dynamic segmentation to minimize the average dimensionality of high-dimensional feature trajectories.
|
{
"abstract": [
"List of Figures. Foreword M. Vetterli. Preface. 1. Signal Models and Analysis-Synthesis. 2. Sinusoidal Modeling. 3. Multiresolution Sinusoidal Modeling. 4. Residual Modeling. 5. Pitch-Synchronous Models. 6. Matching Pursuit and Atomic Models. 7. Conclusions. Appendix A: Two-Channel Filter Banks. Appendix B: Fourier Series Representations. References. References for Poetry Excerpts. Index.",
"A low-bit-rate linear predictive coder (LPC) that is based on variable-length segment quantization is presented. In this vocoder, the speech spectral-parameter sequence is represented as the concatenation of variable-length spectral segments generated by linearly time-warping fixed-length code segments. Both the sequence of code segments and the segment lengths are efficiently determined using a dynamic programming procedure. This procedure minimizes the spectral distance measured between the original and the coded spectral sequence in a given interval. An iterative algorithm is developed for designing fixed-length code segments for the training spectral sequence. It updates the segment boundaries of the training spectral sequence using an a priori codebook and updates the codebook using these segment sequences. The convergence of this algorithm is discussed theoretically and experimentally. In experiments, the performance of variable-length segment quantization for voice coding is compared to that of fixed-length segment quantization and vector quantization. >",
"The sinusoidal model has proven useful for representation and modification of speech and audio. One drawback, however, is that a sinusoidal signal model is typically derived using a fixed frame size, which corresponds to a rigid signal segmentation. For nonstationary signals, the resolution limitations that result from this rigidity lead to reconstruction artifacts. It is shown in this paper that such artifacts can be significantly reduced by using a signal-adaptive segmentation derived by a dynamic program. An atomic interpretation of the sinusoidal model is given; this perspective suggests that algorithms for adaptive segmentation can be viewed as methods for adapting the time scales of the constituent atoms so as to improve the model by employing appropriate time-frequency tradeoffs.",
"",
"In this paper, we propose an efficient sinusoidal model of polyphonic audio signals especially good for the application of timescale modification. One of the critical problem of sinusoidal modeling is that the signal is smeared during the synthesis frame, which is a very undesirable effect for transient parts. We solve this problem by introducing multiresolution analysis-synthesis and dynamic segmentation methods. A signal is modeled with a sinusoidal component and a noise component. A multiresolution filter bank is applied to an input signal which splits it into octave-spaced subbands without causing aliasing and then sinusoidal analysis is applied to each subband signal. To alleviate smearing of transients during synthesis, a dynamic segmentation method is applied to the subband signals that determines the optimal analysis-synthesis frame size adaptively to fit its time-frequency characteristics. To extract sinusoidal components and calculate respective parameters, a matching pursuit algorithm is applied to each analysis frame of the subband signal. A psychoacoustic model implementing frequency masking is incorporated with matching pursuit to provide a reasonable stop condition of iteration and reduce the number of sinusoids. The noise component obtained by subtracting the synthesized signal with sinusoidal components from the original signal is modeled by a line-segment model of short time spectrum envelope. For various polyphonic audio signals, the results of simulation shows the proposed sinusoidal modeling can synthesize original signals without loss of perceptual quality and do more robust and high-quality timescale modification for large scale factors.",
"For large vocabulary and continuous speech recognition, the sub-word-unit-based approach is a viable alternative to the whole-word-unit-based approach. For preparing a large inventory of subword units, an automatic segmentation is preferrable to manual segmentation as it substantially reduces the work associated with the generation of templates and gives more consistent results. In this paper we discuss some methods for automatically segmenting speech into phonetic units. Three different approaches are described, one based on template matching, one based on detecting the spectral changes that occur at the boundaries between phonetic units and one based on a constrained-clustering vector quantization approach. An evaluation of the performance of the automatic segmentation methods is given.",
"We consider the segmentation of a trajectory into piecewise polynomial parts, or possibly other forms. Segmentation is typically formulated as an optimization problem which trades off model fitting error versus the cost of introducing new segments. Heuristics such as split-and-merge are used to find the best segmentation. We show that for ordered data (e.g., single curves or trajectories) the global optimum segmentation can be found by dynamic programming. The approach is easily extended to handle different segment types and top down information about segment boundaries, when available. We show segmentation results for video sequences of a basketball undergoing gravitational and nongravitational motion."
],
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_25"
],
"mid": [
"1861683848",
"1988378063",
"2164275279",
"",
"2149410414",
"1950396994",
"2129247217"
]
}
|
A quick search method for audio signals based on a piecewise linear representation of feature trajectories
|
This paper presents a method for searching quickly through unlabeled audio signal archives (termed stored signals) to detect and locate given audio clips (termed query signals) based on signal similarities.
Many studies related to audio retrieval have dealt with content-based approaches such as audio content classification [1], [2], speech recognition [3], and music transcription [3], [4]. Therefore, these studies mainly focused on associating audio signals with their meanings. In contrast, this study aims at achieving a similarity-based search or more specifically fingerprint identification, which constitutes a search of and retrieval from unlabeled audio archives based only on a signal similarity measure. That is, our objective is signal matching, not the association of signals with their semantics. Although the range of applications for a similarity-based search may seem narrow compared with content-based approaches, this is not actually the case. The applications include the detection and statistical analysis of broadcast music and commercial spots, and the content identification, detection and copyright management of pirated copies of music clips. Fig. 1 represents one of the most representative examples of such applications, which has already been put to practical use. This system automatically checks and identifies broadcast music clips or commercial spots to provide copyright information or other detailed information about the music or the spots.
In audio fingerprinting applications, the query and stored signals cannot be assumed to be exactly the same even in the corresponding sections of the same sound, owing to, for example, compression, transmission and irrelevant noises. Meanwhile, for the applications to be practically viable, the features should be compact and the feature analysis should be computationally efficient. For this purpose, several feature extraction methods have been developed to attain the above objectives. Cano et al. [5] modeled music segments as sequences of sound classes estimated via unsupervised clustering and hidden Markov models (HMMs). Burges et al. [6] employed several layers of Karhunen-Loève (KL) transforms, which reduced the local statistical redundancy of features with respect to time, and took account of robustness to shifting and pitching. Oostveen et al. [7] represented each frame of a video clip as a binary map and used the binary map sequence as a feature. This feature is robust to global changes in luminance and contrast variations. Haitsma et al. [8] and Kurozumi et al. [9] each employed a similar approach in the context of audio fingerprinting. Wang [10] developed a feature-point-based approach to improve the robustness. Our previous approach called the Time-series Active Search (TAS) method [11] introduced a histogram as a compact and noise-robust fingerprint, which models the empirical distribution of feature vectors in a segment. Histograms are sufficiently robust for monitoring broadcast music or detecting pirated copies. Another novelty of this approach is its effectiveness in accelerating the search. Adjacent histograms extracted from sliding audio segments are strongly correlated with each other. Therefore, unnecessary matching calculations are avoided by exploiting the algebraic properties of histograms.
Another important research issue regarding similarity-based approaches involves finding a way to speed up the search. Multi-dimensional indexing methods [12], [13] have frequently been used for accelerating searches. However, when feature vectors are high-dimensional, as they are typically with multimedia signals, the efficiency of the existing indexing methods deteriorates significantly [14], [15]. This is why search methods based on linear scans such as the TAS method are often employed for searches with high-dimensional features. However, methods based solely on linear scans may not be appropriate for managing large-scale signal archives, and therefore dimension reduction should be introduced to mitigate this effect.
To this end, this paper presents a quick and accurate audio search method that uses dimensionality reduction of histogram features. The method involves a piecewise linear representation of histogram sequences by utilizing the continuity and local correlation of the histogram sequences. A piecewise linear representation would be feasible for the TAS framework since the histogram sequences form trajectories in multi-dimensional spaces. By incorporating our method into the TAS framework, we significantly increase the search speed while guaranteeing the same search results as the TAS method. We introduce the following two techniques to obtain a piecewise representation: the dynamic segmentation of the feature trajectories and the segment-based KL transform.
The segment-based KL transform involves the dimensionality reduction of divided histogram sequences (called segments) by KL transform. We take advantage of the continuity and local correlation of feature sequences extracted from audio signals. Therefore, we expect to obtain a linear representation with few approximation errors and low computational cost. The segment-based KL transform consists of the following three components: The basic component of this technique reduces the dimensionality of histogram features. The second component that utilizes residuals between original histogram features and features after dimension reduction greatly reduces the required number of histogram comparisons. Feature sampling is introduced as the third component. This not only saves the storage space but also contributes to accelerating the search.
Dynamic segmentation refers to the division of histogram sequences into segments of various lengths to achieve the greatest possible reduction in the average dimensionality of the histogram features. One of the biggest problems in dynamic segmentation is that finding the optimal set of partitions that minimizes the average dimensionality requires a substantial calculation. The computational time must be no more than that needed for capturing audio signals from the viewpoint of practical applicability. To reduce the calculation cost, our technique addresses the quick suboptimal partitioning of the histogram trajectories, which consists of local optimization to avoid recursive calculations and the coarse-to-fine detection of segment boundaries. This paper is organized as follows: Section II introduces the notations and definitions necessary for the subsequent explanations. Section III explains the TAS method upon which our method is founded. Section IV outlines the proposed search method. Section V discusses a dimensionality reduction technique with the segment-based KL transform. Section VI details dynamic segmentation. Section VII presents experimental results related to the search speed and shows the advantages of the proposed method. Section VIII further discusses the advantages and shortcomings of the proposed method as well as providing additional experimental results. Section IX concludes the paper.
II. PRELIMINARIES
Let N be the set of all non-negative numbers, R be the set of all real numbers, and $N^n$ be an n-ary Cartesian product of N. Vectors are denoted by boldface lower-case letters, e.g. x, and matrices are denoted by boldface upper-case letters, e.g. A. The superscript t stands for the transposition of a vector or a matrix, e.g. $x^t$ or $A^t$. The Euclidean norm of an n-dimensional vector $x \in R^n$ is denoted as $\|x\|$:
$$\|x\| \stackrel{\text{def}}{=} \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2},$$
where |x| is the magnitude of x. For any function f (·) and a random variable X, E[f (X)] stands for the expectation of f (X). Similarly, for a given value y ∈ Y, some function g(·, ·) and a random variable X, E[g(X, y)|y] stands for the conditional expectation of g(X, y) given y. Fig. 2 outlines the Time-series Active Search (TAS) method, which is the basis of our proposed method. We provide a summary of the algorithm here. Details can be found in [11].
III. TIME-SERIES ACTIVE SEARCH
[Preparation stage] 1) Base features are extracted from the stored signal. Our preliminary experiments showed that the short-time frequency spectrum provides sufficient accuracy for our similarity-based search task. Base features are extracted at every sampled time step, for example, every 10 msec. Henceforth, we call the sampled points frames (the term was inspired by video frames). Base features are denoted as f S (t S ) (0 ≤ t S < L S ), where t S represents the position in the stored signal and L S is the length of the stored signal (i.e. the number of frames in the stored signal). 2) Every base feature is quantized by vector quantization (VQ). A codebook {f i } n i=1 is created beforehand, where n is the codebook size (i.e. the number of codewords in the codebook). We utilize the Linde-Buzo-Gray (LBG) algorithm [16] for codebook creation. A quantized base feature q S (t S ) is expressed as a VQ codeword assigned to the corresponding base feature f S (t S ), which is determined as
$$q_S(t_S) = \arg\min_{1 \le i \le n} \| f_S(t_S) - f_i \|^2 .$$
[Search stage] 1) Base features f Q (t Q ) (0 ≤ t Q < L Q ) of the query signal are extracted in the same way as the stored signal and quantized with the codebook {f i } n i=1 created in the preparation stage, where t Q represents the position in the query signal and L Q is its length. We do not have to take into account the calculation time for feature quantization since it takes less than 1% of the length of the signal. A quantized base feature for the query signal is denoted as q Q (t Q ).
2) Histograms are created; one for the stored signal denoted as x S (t S ) and the other for the query signal denoted as x Q . First, windows are applied to the sequences of quantized base features extracted from the query and stored signals. The window length W (i.e. the number of frames in the window) is set at W = L Q , namely the length of the query signal. A histogram is created by counting the instances of each VQ codeword over the window. Therefore, each index of a histogram bin corresponds to a VQ codeword. We note that a histogram does not take the codeword order into account. 3) Histogram matching is executed based on the distance between histograms, computed as
$$d(t_S) \stackrel{\text{def}}{=} \| x_S(t_S) - x_Q \| .$$
When the distance d(t S ) falls below a given value (search threshold) θ, the query signal is considered to be detected at the position t S of the stored signal. 4) A window on the stored signal is shifted forward in time and the procedure returns to Step 2). As the window for the stored signal shifts forward in time, VQ codewords included in the window cannot change so rapidly, which means that histograms cannot also change so rapidly. This implies that for a given positive integer w the lower bound on the distance d(t S + w) is obtained from the triangular inequality as follows:
$$d(t_S + w) \ge \max\{0,\ d(t_S) - \sqrt{2}\,w\},$$
where $\sqrt{2}\,w$ is the maximum distance between $x_S(t_S)$ and $x_S(t_S + w)$. Therefore, the skip width $w(t_S)$ of the window at the $t_S$-th frame is obtained as
$$w(t_S) = \begin{cases} \operatorname{floor}\!\left( \dfrac{d(t_S) - \theta}{\sqrt{2}} \right) + 1, & \text{if } d(t_S) > \theta \\ 1, & \text{otherwise} \end{cases} \qquad (1)$$
where floor(a) indicates the largest integer less than a. We note that no sections will ever be missed that have distance values smaller than the search threshold θ, even if we skip the width w(t S ) given by Eq. (1).
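To make the skip rule of Eq. (1) concrete, the following is a minimal sketch of the TAS search loop, assuming the VQ codeword sequences have already been prepared; the function and variable names (e.g. tas_search, n_codewords) are illustrative and not taken from the original implementation.

```python
import numpy as np

def make_histogram(codewords, start, length, n_codewords):
    """Count the VQ codewords that fall inside a window of `length` frames."""
    hist = np.zeros(n_codewords)
    for c in codewords[start:start + length]:
        hist[c] += 1
    return hist

def tas_search(stored_codewords, query_codewords, n_codewords, theta):
    """Linear scan with the distance-based skip of Eq. (1)."""
    W = len(query_codewords)                       # window length = query length
    x_q = make_histogram(query_codewords, 0, W, n_codewords)
    matches, t = [], 0
    while t + W <= len(stored_codewords):
        x_s = make_histogram(stored_codewords, t, W, n_codewords)
        d = np.linalg.norm(x_s - x_q)
        if d < theta:
            matches.append(t)                      # query detected at position t
            t += 1
        else:
            # shifting the window by w frames changes the histogram distance by
            # at most sqrt(2)*w, so nearer positions cannot fall below theta
            t += int(np.floor((d - theta) / np.sqrt(2))) + 1
    return matches
```

Recomputing the histogram at every position is kept only for clarity; an incremental update (decrementing the bin of the codeword that leaves the window and incrementing the bin of the one that enters) is the natural optimization.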
IV. FRAMEWORK OF PROPOSED SEARCH METHOD
The proposed method improves the TAS method so that the search is accelerated without false dismissals (incorrectly missing segments that should be detected) or false detections (identifying incorrect matches). To accomplish this, we introduce feature-dimension reduction as explained in Sections V and VI, which reduces the calculation costs required for matching. Fig. 3 shows an overview of the proposed search method, and Fig. 4 outlines the procedure for feature-dimension reduction. The procedure consists of a preparation stage and a search stage.
[Preparation stage] 1) Base features f S (t S ) are extracted from the stored signal and quantized, to create quantized base features q S (t S ). The procedure is the same as that of the TAS method. 2) Histograms x S (t S ) are created in advance from the quantized base features of the stored signal by shifting a window of a predefined length W . We note that with the TAS method the window length W varies from one search to another, while with the present method the window length W is fixed. This is because histograms x S (t S ) for the stored signal are created prior to the search. We should also note that the TAS method does not create histograms prior to the search because sequences of VQ codewords need much less storage space than histogram sequences. 3) A piecewise linear representation of the extracted histogram sequence is obtained (Fig. 4 block (A)). This representation is characterized by a set $T = \{t_j\}_{j=0}^{M}$ of segment boundaries expressed by their frame numbers and a set $\{p_j(\cdot)\}_{j=1}^{M}$ of M functions, where M is the number of segments, $t_0 = 0$ and $t_M = L_S$. The j-th segment is expressed as a half-open interval $[t_{j-1}, t_j)$ since it starts from $x_S(t_{j-1})$ and ends at $x_S(t_j - 1)$. Section VI shows how to obtain such segment boundaries. Each function $p_j(\cdot) : N^n \to R^{m_j}$ that corresponds to the j-th segment reduces the dimensionality n of the histogram to the dimensionality $m_j$. Section V-B shows how to determine these functions. 4) The histograms x S (t S ) are compressed by using the functions $\{p_j(\cdot)\}_{j=1}^{M}$ obtained in the previous step, and then compressed features y S (t S ) are created.
[Search stage] 1) Base features f Q (t Q ) are extracted and a histogram x Q is created from the query signal in the same way as the TAS method.
2) The histogram x Q is compressed based on the functions {p j (·)} M j=1 obtained in the preparation stage, to create M compressed features y Q [j] (j = 1, · · · , M ). Each compressed feature y Q [j] corresponds to the j-th function p j (·). The procedure used to create compressed features is the same as that for the stored signal.
3) Compressed features created from the stored and query signals are matched, that is, the distance
$d(t_S) = \| y_S(t_S) - y_Q[j_{t_S}] \|$ between two compressed features $y_S(t_S)$ and $y_Q[j_{t_S}]$ is calculated,
where $j_{t_S}$ represents the index of the segment that contains $x_S(t_S)$, namely $t_{j_{t_S}-1} \le t_S < t_{j_{t_S}}$. 4) If the distance falls below the search threshold θ, the original histograms x S (t S ) corresponding to the surviving compressed features y S (t S ) are verified. Namely, the distance $d(t_S) = \| x_S(t_S) - x_Q \|$ is calculated and compared with the search threshold θ. 5) A window on the stored signal is shifted forward in time and the procedure goes back to Step 3).
The skip width of the window is calculated from the distance d(t S ) between compressed features.
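A rough sketch of this two-stage matching (screening with compressed features, then verifying surviving candidates with the original histograms) is given below. It assumes the per-segment compressed features and the segment index have been prepared as described above; the names are illustrative, and skips that cross a segment boundary are not treated specially here.

```python
import numpy as np

def search_with_compressed_features(y_stored, seg_index, x_stored, x_query,
                                    y_query_per_segment, theta):
    """Two-stage matching: screen with compressed features, verify with histograms.

    y_stored[t]           : compressed feature of the stored histogram at frame t
    seg_index[t]          : index j of the segment that contains frame t
    x_stored[t], x_query  : original histograms
    y_query_per_segment[j]: query compressed with the j-th segment's map p_j
    """
    matches, t, L = [], 0, len(y_stored)
    while t < L:
        j = seg_index[t]
        d_comp = np.linalg.norm(y_stored[t] - y_query_per_segment[j])
        if d_comp < theta:
            # candidate: verify with the original (uncompressed) histograms
            if np.linalg.norm(x_stored[t] - x_query) < theta:
                matches.append(t)
            t += 1
        else:
            # d_comp lower-bounds the original histogram distance (Theorem 1),
            # so the TAS skip rule can be applied to it; segment-boundary
            # crossings are not handled specially in this sketch
            t += int(np.floor((d_comp - theta) / np.sqrt(2))) + 1
    return matches
```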
B. Segment-based KL transform
As the first step towards obtaining a piecewise representation, the histogram sequence is divided into M segments. Dynamic segmentation is introduced here, which enhances feature-dimension reduction performance. This will be explained in detail in Section VI. Second, a KL transform is performed for every segment and a minimum number of eigenvectors are selected such that the sum of their contribution rates exceeds a predefined value σ, where the contribution rate of an eigenvector stands for its eigenvalue divided by the sum of all eigenvalues, and the predefined value σ is called the contribution threshold. The number of selected eigenvectors in the j-th segment is written as m j . Then, a function p j (·) : N n → R m j (j = 1, 2, · · · , M ) for dimensionality reduction is determined as a map to a subspace whose bases are the selected eigenvectors:
$$p_j(x) = P_j^t (x - \bar{x}_j), \qquad (2)$$
where x is a histogram, $\bar{x}_j$ is the centroid of the histograms contained in the j-th segment, and $P_j$ is an $(n \times m_j)$ matrix whose columns are the selected eigenvectors. Finally, each histogram is compressed by using the function p j (·) of the segment to which the histogram belongs. Henceforth, we refer to p j (x) as a projected feature of a histogram x.
In the following, we omit the index j corresponding to a segment unless it is specifically needed, e.g. p(x) and $\bar{x}$.
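As an illustration, the per-segment KL transform with a contribution threshold σ could be sketched as follows using NumPy; this is an assumed implementation, not the authors' code, and it operates on one segment of the histogram sequence at a time.

```python
import numpy as np

def fit_segment_klt(segment_histograms, sigma):
    """Fit p_j(x) = P_j^t (x - mean) for one segment (Eq. (2)).

    segment_histograms: array of shape (segment_length, n) with the histograms x_S(t)
    sigma             : contribution threshold, e.g. 0.9
    Returns (mean, P) where the columns of P are the m_j selected eigenvectors.
    """
    mean = segment_histograms.mean(axis=0)
    cov = np.cov(segment_histograms - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    eigvals = np.clip(eigvals[::-1], 0.0, None)         # descending, non-negative
    eigvecs = eigvecs[:, ::-1]
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    m_j = int(np.searchsorted(ratios, sigma)) + 1       # smallest m whose cumulative rate reaches sigma
    return mean, eigvecs[:, :m_j]

def project(x, mean, P):
    """Projected feature p_j(x) of a histogram x."""
    return P.T @ (x - mean)
```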
C. Distance bounding
From the nature of the KL transform, the distance between two projected features gives the lower bound of the distance between corresponding original histograms. However, this bound does not approximate the original distance well, and this results in many false detections.
To improve the distance bound, we introduce a new technique. Let us define a projection distance δ(p, x) as the distance between a histogram x and the corresponding projected feature z = p(x):
$$\delta(p, x) \stackrel{\text{def}}{=} \| x - q(z) \|, \qquad (3)$$
where q(·) : R m → R n is the generalized inverse map of p(·), defined as
$$q(z) \stackrel{\text{def}}{=} P z + \bar{x}.$$
Here we create a compressed feature y, which is the projected feature z = (z 1 , z 2 , · · · , z m ) t along with the projection distance δ(p, x):
$$y = y(p, x) = (z_1, z_2, \cdots, z_m, \delta(p, x))^t,$$
where y(p, x) means that y is determined by p and x. The Euclidean distance between compressed features is utilized as a new criterion for matching instead of the Euclidean distance between projected features. The distance is expressed as
$$\| y_S - y_Q \|^2 = \| z_S - z_Q \|^2 + \{ \delta(p, x_S) - \delta(p, x_Q) \}^2, \qquad (4)$$
where $z_S = p(x_S)$ (resp. $z_Q = p(x_Q)$) is the projected feature derived from the original histograms $x_S$ (resp. $x_Q$) and $y_S = y_S(p, x_S)$ (resp. $y_Q = y_Q(p, x_Q)$) is the corresponding compressed feature. Eq. (4) implies that the distance between compressed features is larger than the distance between corresponding projected features. In addition, from the above discussions, we have the following two properties, which indicate that the distance $\|y_S - y_Q\|$ between two compressed features is a better approximation of the distance $\|x_S - x_Q\|$ between the original histograms than the distance $\|z_S - z_Q\|$ between projected features (Theorem 1), and the expected approximation error is much smaller (Theorem 2). Theorem 1:
$$\| z_S - z_Q \| \le \| y_S - y_Q \| = \min_{(\tilde{x}_S, \tilde{x}_Q) \in A(y_S, y_Q)} \| \tilde{x}_S - \tilde{x}_Q \| \le \| x_S - x_Q \|, \qquad (5)$$
where $A(y_S, y_Q)$ is the set of all possible pairs $(\tilde{x}_S, \tilde{x}_Q)$ of original histograms for given compressed features $(y_S, y_Q)$. (Figure caption: Intuitive illustration of relationships between projection distance, distance between projected features and distance between compressed features.)
Theorem 2: Suppose that random variables $(X_S^n, X_Q^n)$ corresponding to the original histograms $(x_S, x_Q)$ have a uniform distribution on the set $A(y_S, y_Q)$ defined in Theorem 1, and $E[\delta(p, X_S^n)] \ll E[\delta(p, X_Q^n)]$. The expected approximation errors can be evaluated as
$$E\left[\, \| X_S^n - X_Q^n \|^2 - \| y_S - y_Q \|^2 \,\middle|\, y_S, y_Q \right] \ll E\left[\, \| X_S^n - X_Q^n \|^2 - \| z_S - z_Q \|^2 \,\middle|\, y_S, y_Q \right]. \qquad (6)$$
The proofs are shown in the appendix. Fig. 6 shows an intuitive illustration of the relationships between projection distances, distances between projected features and distances between compressed features, where the histograms are in a 3-dimensional space and the subspace dimensionality is 1. In this case, for given compressed features (y S , y Q ) and a fixed query histogram x Q , a stored histogram x S must be on a circle whose center is q(z Q ). This circle corresponds to the set A(y S , y Q ).
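The construction of a compressed feature and a numerical check of the bounds in Theorem 1 can be sketched as follows; the orthonormal subspace basis and the random histograms are synthetic stand-ins used only to exercise the inequalities, and the names are illustrative.

```python
import numpy as np

def compress(x, mean, P):
    """Compressed feature y = (z_1, ..., z_m, delta(p, x))^t (Section V-C)."""
    z = P.T @ (x - mean)                  # projected feature p(x)
    x_rec = P @ z + mean                  # q(z): back-projection onto the subspace
    delta = np.linalg.norm(x - x_rec)     # projection distance, Eq. (3)
    return np.concatenate([z, [delta]])

# numerical check of ||z_S - z_Q|| <= ||y_S - y_Q|| <= ||x_S - x_Q|| (Theorem 1)
rng = np.random.default_rng(0)
n, m = 16, 4
P, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal columns
mean = rng.random(n)
x_s, x_q = rng.random(n), rng.random(n)
y_s, y_q = compress(x_s, mean, P), compress(x_q, mean, P)
z_gap = np.linalg.norm(y_s[:m] - y_q[:m])
y_gap = np.linalg.norm(y_s - y_q)
x_gap = np.linalg.norm(x_s - x_q)
assert z_gap <= y_gap + 1e-9 and y_gap <= x_gap + 1e-9
```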
D. Feature sampling
In the TAS method, quantized base features are stored, because they need much less storage space than the histogram sequence and creating histograms on the spot takes little calculation. With the present method, however, compressed features must be computed and stored in advance so that the search results can be returned as quickly as possible, and therefore much more storage space is needed than with the TAS method. The increase in storage space may cause a reduction in search speed due to the increase in disk access.
Based on the above discussion, we incorporate feature sampling in the temporal domain. The following idea is inspired by the technique called Piecewise Aggregate Approximation (PAA) [22]. With the proposed feature sampling method, first a compressed feature sequence
$\{ y_S(t_S) \}_{t_S = 0}^{L_S - W - 1}$
is divided into subsequences {y S (ia), y S (ia + 1), · · · , y S (ia + a − 1)} i=0,1,··· of length a. Then, the first compressed feature y S (ia) of every subsequence is selected as a representative feature. A lower bound of the distances between the query and stored compressed features contained in the subsequence can be expressed in terms of the representative feature y S (ia). This bound is obtained from the triangular inequality as follows:
$$\| y_S(ia + k) - y_Q \| \ge \| y_S(ia) - y_Q \| - d(i), \qquad d(i) \stackrel{\text{def}}{=} \max_{0 \le k' \le a-1} \| y_S(ia + k') - y_S(ia) \|.$$
(∀i = 0, 1, · · · , ∀k = 0, · · · , a − 1) This implies that preserving the representative feature y S (ia) and the maximum distance d(i) is sufficient to guarantee that there are no false dismissals. This feature sampling is feasible for histogram sequences because successive histograms cannot change rapidly. Furthermore, the technique mentioned in this section will also contribute to accelerating the search, especially when successive histograms change little.
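A sketch of this feature sampling and the corresponding pruning test is given below; for simplicity it assumes that all compressed features share a common dimensionality (e.g. after padding), and the names are illustrative.

```python
import numpy as np

def build_sampled_index(y_stored, a):
    """Keep one representative per subsequence of length a, together with the
    maximum distance d(i) from the representative to any feature in the block."""
    reps, radii = [], []
    for i in range(0, len(y_stored), a):
        block = y_stored[i:i + a]
        rep = block[0]                                      # y_S(ia)
        reps.append(rep)
        radii.append(max(np.linalg.norm(b - rep) for b in block))
    return np.array(reps), np.array(radii)

def candidate_blocks(reps, radii, y_query, theta):
    """A block can contain a match only if ||y_S(ia) - y_Q|| - d(i) < theta."""
    lower_bounds = np.linalg.norm(reps - y_query, axis=1) - radii
    return np.nonzero(lower_bounds < theta)[0]
```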
VI. DYNAMIC SEGMENTATION
A. Related work
The approach used for dividing histogram sequences into segments is critical for realizing efficient feature-dimension reduction since the KL transform is most effective when the constituent elements in the histogram segments are similar. To achieve this, we introduce a dynamic segmentation strategy.
Dynamic segmentation is a generic term that refers to techniques for dividing sequences into segments of various lengths. Dynamic segmentation methods for time-series signals have already been applied to various kinds of applications such as speech coding (e.g. [24]), the temporal compression of waveform signals [25], the automatic segmentation of speech signals into phonic units [26], sinusoidal modeling of audio signals [27], [28], [29] and motion segmentation in video signals [30]. We employ dynamic segmentation to minimize the average dimensionality of high-dimensional feature trajectories.
Dynamic segmentation can improve dimension reduction performance. However, finding the optimal boundaries still requires a substantial calculation. With this in mind, several studies have adopted suboptimal approaches, such as longest line fitting [23], wavelet decomposition [23], [21] and the bottom-up merging of segments [31]. The first two approaches still incur a substantial calculation cost for long time-series signals. The last approach is promising as regards obtaining a rough global approximation at a practical calculation cost. This method is compatible with ours; however, we mainly focus on a more precise local optimization.
B. Framework
Fig. 7 shows an outline of our dynamic segmentation method. The objective of the dynamic segmentation method is to divide the stored histogram sequence so that its piecewise linear representation is well characterized by a set of lower dimensional subspaces. To this end, we formulate the dynamic segmentation as a way to find a set $T^* = \{t_j^*\}_{j=0}^{M}$ of segment boundaries that minimizes the average dimensionality of these segment-approximating subspaces, on condition that the boundary $t_j^*$ between the j-th and the (j+1)-th segments lies in a shiftable range $S_j$, which is defined as a section of width ∆ in the vicinity of the initial position $t_j^0$ of the boundary between the j-th and the (j+1)-th segments. Namely, the set $T^*$ of the optimal segment boundaries is given by the following formula:
$$T^* = \{ t_j^* \}_{j=0}^{M} \stackrel{\text{def}}{=} \arg\min_{\{t_j\}_{j=0}^{M} :\, t_j \in S_j\ \forall j}\ \frac{1}{L_S} \sum_{j=1}^{M} (t_j - t_{j-1}) \cdot c(t_{j-1}, t_j, \sigma) \qquad (7)$$
$$S_j \stackrel{\text{def}}{=} \{ t_j : t_j^0 - \Delta \le t_j \le t_j^0 + \Delta \} \qquad (8)$$
where c(t i , t j , σ) represents the subspace dimensionality on the segment between the t i -th and the t j -th frames for a given contribution threshold σ, t * 0 = 0 and t * M = L S . The initial positions of the segment boundaries are set beforehand by equi-partitioning.
The above optimization problem defined by Eq. (7) would normally be solved with dynamic programming (DP) (e.g. [32]). However, DP is not practical in this case. Deriving c(t j−1 , t j , σ) included in Eq. (7) incurs a substantial calculation cost since it is equivalent to executing a KL transform calculation for the segment [t j−1 , t j ). This implies that the DP-based approach requires a significant amount of calculation, although less than a naive approach. The above discussion implies that we should reduce the number of KL transform calculations to reduce the total calculation cost required for the optimization. When we adopt the total number of KL transform calculations as a measure for assessing the calculation cost, the cost is evaluated as O(M ∆ 2 ), where M is the number of segments and ∆ is the width of the shiftable range.
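For reference, the exact minimization of Eq. (7) by dynamic programming could be sketched as follows; subspace_dim(t_prev, t_next) stands for c(t_prev, t_next, σ) and is assumed to be supplied, and each call to it hides a full KL transform, which is precisely why this exact approach is too expensive in practice.

```python
def optimal_boundaries_dp(candidates, subspace_dim):
    """Exact minimization of Eq. (7) (up to the constant 1/L_S factor) by DP.

    candidates  : list of M+1 candidate lists; candidates[0] = [0],
                  candidates[M] = [L_S], and candidates[j] enumerates S_j
    subspace_dim: function (t_prev, t_next) -> c(t_prev, t_next, sigma)
    Returns the boundary list with the smallest weighted dimensionality sum.
    """
    # best[t] = (accumulated cost, boundary list ending at t)
    best = {candidates[0][0]: (0.0, [candidates[0][0]])}
    for stage in candidates[1:]:
        new_best = {}
        for t in stage:
            for t_prev, (cost_prev, path) in best.items():
                if t_prev >= t:
                    continue
                cost = cost_prev + (t - t_prev) * subspace_dim(t_prev, t)
                if t not in new_best or cost < new_best[t][0]:
                    new_best[t] = (cost, path + [t])
        best = new_best
    return min(best.values(), key=lambda v: v[0])[1]
```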
To reduce the calculation cost, we instead adopt a suboptimal approach. Two techniques are incorporated: local optimization and the coarse-to-fine detection of segment boundaries. We explain these two techniques in the following sections.
C. Local optimization
The local optimization technique modifies the formulation (Eq. (7)) of dynamic segmentation so that it minimizes the average dimensionality of the subspaces of adjoining segments. The basic idea is similar to the "forward segmentation" technique introduced by Goodwin [27], [28] for deriving accurate sinusoidal models of audio signals. The position t * j of the boundary is determined by using the following forward recursion as a substitute for Eq. (7):
$$t_j^* = \arg\min_{t_j \in S_j} \frac{(t_j - t_{j-1}^*)\, c_j^* + (t_{j+1}^0 - t_j)\, c_{j+1}^0}{t_{j+1}^0 - t_{j-1}^*}, \qquad (9)$$
where
$$c_j^* = c(t_{j-1}^*, t_j, \sigma), \qquad c_{j+1}^0 = c(t_j, t_{j+1}^0, \sigma)$$
, and S j is defined in Eq. (8). As can be seen in Eq. (9), we can determine each segment boundary independently, unlike the formulation of Eq. (7). Therefore, the local optimization technique can reduce the amount of calculation needed for extracting an appropriate representation, which is evaluated as O(M ∆), where M is the number of segments and ∆ is the width of the shiftable range.
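Under the same assumptions, the forward local optimization of Eq. (9) might look as follows; this sketch scans the shiftable range exhaustively, i.e. without the coarse-to-fine technique described next, and the names are illustrative.

```python
def local_optimize_boundaries(init_boundaries, delta, subspace_dim):
    """Forward boundary refinement following Eq. (9).

    init_boundaries: [t_0^0, t_1^0, ..., t_M^0] obtained by equi-partitioning
    delta          : half-width of the shiftable range S_j
    subspace_dim   : function (t_prev, t_next) -> c(t_prev, t_next, sigma)
    """
    refined = [init_boundaries[0]]
    for j in range(1, len(init_boundaries) - 1):
        t_prev = refined[-1]                  # t_{j-1}^*, already fixed
        t_next = init_boundaries[j + 1]       # t_{j+1}^0, still at its initial position
        t0 = init_boundaries[j]
        best_t, best_cost = t0, float("inf")
        for t in range(max(t_prev + 1, t0 - delta), min(t_next, t0 + delta + 1)):
            cost = ((t - t_prev) * subspace_dim(t_prev, t)
                    + (t_next - t) * subspace_dim(t, t_next)) / (t_next - t_prev)
            if cost < best_cost:
                best_t, best_cost = t, cost
        refined.append(best_t)
    refined.append(init_boundaries[-1])
    return refined
```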
D. Coarse-to-fine detection
The coarse-to-fine detection technique selects suboptimal boundaries in the sense of Eq. (9) with less computational cost. We note that small boundary shifts do not contribute greatly to changes in segment dimensionality because successive histograms cannot change rapidly. With this in mind, we adopt the following three-step procedure: 1) The dimensions of the j-th and (j + 1)-th segments are calculated when the segment boundary $t_j$ is at the initial position $t_j^0$ and at the edges ($t_j^0 - \Delta$ and $t_j^0 + \Delta$) of its shiftable range.
2) The dimensions of the j-th and (j + 1)-th segments are calculated when the segment boundary $t_j$ is at the position $t_j^0 - \Delta + \frac{2\Delta}{u_j + 1}\, i$ $(i = 1, 2, \cdots, u_j)$, where $u_j$ determines the number of calculations in this step.
3) The dimensions of the j-th and (j + 1)-th segments are calculated in detail when the segment boundary t j is in the positions where dimension changes are detected in the previous step. We determine the number u j of dimension calculations in step 2 so that the number of calculations in all the above steps, f j (u j ), is minimized. Then, f j (u j ) is given as follows:
$$f_j(u_j) = 2\left\{ (3 + u_j) + K_j \Delta\, \frac{1}{\frac{1}{2}u_j + 1} \right\},$$
where K j is the estimated number of positions where the dimensionalities change, which is experimentally determined as
$$K_j = \begin{cases} c_{LR} - c_{LL}, & \text{if } c_{LR} \le c_{RR},\ c_{LL} < c_{RL} \\ (c_{LC} - c_{LL}) + \min(c_{RC}, c_{LR}) - \min(c_{LC}, c_{RR}), & \text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} \le c_{RC} \\ (c_{RC} - c_{RR}) + \min(c_{LC}, c_{RL}) - \min(c_{RC}, c_{LL}), & \text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} > c_{RC} \\ c_{RL} - c_{RR}, & \text{otherwise} \end{cases}$$
and
$$\begin{aligned} c_{LL} &= c(t_{j-1}^*, t_j^0 - \Delta, \sigma), & c_{RL} &= c(t_j^0 - \Delta, t_{j+1}^0, \sigma), \\ c_{LC} &= c(t_{j-1}^*, t_j^0, \sigma), & c_{RC} &= c(t_j^0, t_{j+1}^0, \sigma), \\ c_{LR} &= c(t_{j-1}^*, t_j^0 + \Delta, \sigma), & c_{RR} &= c(t_j^0 + \Delta, t_{j+1}^0, \sigma). \end{aligned}$$
The first term of $f_j(u_j)$ refers to the number of calculations in steps 1 and 2, and the second term corresponds to that in step 3. $f_j(u_j)$ takes the minimum value $4\sqrt{2K_j\Delta} + 2$ when $u_j = \sqrt{2K_j\Delta} - 2$. The calculation cost when incorporating local optimization and coarse-to-fine detection techniques is evaluated as follows:
$$E\left[ M \left( 4\sqrt{2K_j\Delta} + 2 \right) \right] \le M \left( 4\sqrt{2K\Delta} + 2 \right) = O\left( M\sqrt{K\Delta} \right), \quad \text{where } K = E[K_j]$$
, M is the number of segments and ∆ is the width of the shiftable range. The first inequality is derived from Jensen's inequality (e.g. [33, Theorem 2.6.2]). The coarse-to-fine detection technique can additionally reduce the calculation cost because K is usually much smaller than ∆.
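The following is a simplified sketch of the coarse-to-fine idea: costs are evaluated on a coarse grid of u_j points and an exhaustive scan is run only inside the grid cell around the best coarse point. The paper's actual step 3 refines around every position where the segment dimensionalities are observed to change, and the K_j estimate is replaced here by a supplied value K_est, so this is an approximation of Section VI-D rather than a faithful reimplementation.

```python
import math

def coarse_to_fine_boundary(t_prev, t0, t_next, delta, K_est, subspace_dim):
    """Simplified coarse-to-fine boundary search over S_j = [t0 - delta, t0 + delta]."""
    def cost(t):
        return ((t - t_prev) * subspace_dim(t_prev, t)
                + (t_next - t) * subspace_dim(t, t_next)) / (t_next - t_prev)

    u = max(1, int(round(math.sqrt(2 * K_est * delta))) - 2)  # u_j minimising f_j(u_j)
    step = max(1, (2 * delta) // (u + 1))
    grid = list(range(t0 - delta, t0 + delta + 1, step)) + [t0 + delta]
    grid = sorted({min(max(t, t_prev + 1), t_next - 1) for t in grid})
    best = min(grid, key=cost)                                # steps 1 and 2 (coarse scan)
    i = grid.index(best)
    lo, hi = grid[max(i - 1, 0)], grid[min(i + 1, len(grid) - 1)]
    return min(range(lo, hi + 1), key=cost)                   # step 3 (local refinement)
```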
VII. EXPERIMENTS
A. Conditions
We tested the proposed method in terms of calculation cost in relation to search speed. We again note that the proposed search method guarantees the same search results as the TAS method in principle, and therefore we only need to evaluate the search speed. The search accuracy for the TAS method was reported in a previous paper [11]. In summary, for audio identification tasks, there were no false detections or false dismissals down to an S/N ratio of 20 dB if the query duration was longer than 10 seconds.
In the experiments, we used a recording of a real TV broadcast. An audio signal broadcast from a particular TV station was recorded and encoded in MPEG-1 Layer 3 (MP3) format. We recorded a 200-hour audio signal as a stored signal, and recorded 200 15-second spots from another TV broadcast as queries. Thus, the task was to detect and locate specific commercial spots from 200 consecutive hours of TV recording. Each spot occurred 2-30 times in the stored signal. Each signal was first digitized at a 32 kHz sampling frequency and 16 bit quantization accuracy. The bit rate for the MP3 encoding was 56 kbps. We extracted base features from each audio signal using a 7-channel second-order IIR band-pass filter with Q = 10. The center frequencies of the filter were equally spaced on a log frequency scale. The base features were calculated every 10 milliseconds from a 60 millisecond window. The base feature vectors were quantized by using the VQ codebook with 128 codewords, and histograms were created based on the scheme of the TAS method. Therefore, the histogram dimension was 128. We implemented the feature sampling described in Section V-D and the sampling duration was a = 50. The tests were carried out on a PC (Pentium 4 2.0 GHz).
B. Search speed
We first measured the CPU time and the number of matches in the search. The search time we measured in this test comprised only the CPU time in the search stage shown in Section IV. This means that the search time did not include the CPU time for any procedures in the preparation stage such as base feature extraction, histogram creation, or histogram dimension reduction for the stored signal. The search threshold was adjusted to θ = 85 so that there were no false detections or false dismissals. We compared the following methods:
(i) The TAS method (baseline). (ii) The proposed search method without the projection distance being embedded in the compressed features. (iii) The proposed search method.
We first examined the relationships between the average segment duration (equivalent to the number of segments), the search time, and the number of matches. The following parameters were set for feature-dimension reduction: The contribution threshold was σ = 0.9. The width of the shiftable range for dynamic segmentation was 500. Fig. 10 shows the relationship between the average segment duration and the search time, where the ratio of the search speed of the proposed method to that of the TAS method (conventional method in the figure) is called the speed-up factor. Also, Fig. 11 shows the relationship between the average segment duration and the number of matches. Although the proposed method only slightly increased the number of matches, it greatly reduced the search time. This is because it greatly reduced the calculation cost per match owing to feature-dimension reduction. For example, the proposed method reduced the search time to almost 1/12 when the segment duration was 1.2 minutes (i.e. the number of segments was 10000). As mentioned in Section V-D, the feature sampling technique also contributed to the acceleration of the search, and the effect is similar to histogram skipping. Considering the dimension reduction performance results described later, we found that those effects were greater than that caused by dimension reduction for large segment durations (i.e. a small number of segments). This is examined in detail in the next section. We also found that the proposed method reduced the search time and the number of matches when the distance bounding technique was incorporated, especially when there were a large number of segments.
VIII. DISCUSSION
The previous section described the experimental results solely in terms of search speed and the advantages of the proposed method compared with the previous method. This section provides further discussion of the advantages and shortcomings of the proposed method as well as additional experimental results.
We first deal with the dimension reduction performance derived from the segment-based KL transform. We employed equi-partitioning to obtain segments, which means that we did not incorporate the dynamic segmentation technique. Fig. 12 shows the experimental result. The proposed method monotonically reduced the dimensions as the number of segments increased if the segment duration was shorter than 10 hours (the number of segments M ≥ 20). We can see that the proposed method reduced the dimensions, for example, to 1/25 of the original histograms when the contribution threshold was 0.90 and the segment duration was 1.2 minutes (the number of segments was 10000). The average dimensions did not decrease as the number of segments increased if the number of segments was relatively small. This is because we decided the number of subspace bases based on the contribution rates. Next, we deal with the dimension reduction performance derived from the dynamic segmentation technique. The initial positions of the segment boundaries were set by equi-partitioning. The duration of segments obtained by equi-partitioning was 12 minutes (i.e. there were 1000 segments). Fig. 13 shows the result. The proposed method further reduced the feature dimensionality to 87.5% of its initial value, which is almost the same level of performance as when only the local search was utilized. We were unable to calculate the average dimensionality when using DP because of the substantial amount of calculation, as described later. When the shiftable range was relatively narrow, the dynamic segmentation performance was almost the same as that of DP.
Here, we review the search speed performance shown in Fig. 10. It should be noted that three techniques in our proposed method contributed to speeding up the search, namely feature-dimension reduction, distance bounding and feature sampling. When the number of segments was relatively small, the speed-up factor was much larger than the ratio of the dimension of the compressed features to that of the original histograms, which can be seen in Figs. 10, 12 and 13. This implies that the feature sampling technique dominated the search performance in this case. On the other hand, when the number of segments was relatively large, the proposed search method did not greatly improve the search speed compared with the dimension reduction performance. This implies that the feature sampling technique degraded the search performance. In this case, the distance bounding technique mainly contributed to the improvement of the search performance as seen in Fig. 10.
Lastly, we discuss the amount of calculation necessary for dynamic segmentation. We again note that although dynamic segmentation can be executed prior to providing a query signal, the computational time must be at worst smaller than the duration of the stored signal from the viewpoint of practical applicability. We adopted the total number of dimension calculations needed to obtain the dimensions of the segments as a measure for comparing the calculation cost in the same way as in Section VI. Fig. 14 shows the estimated calculation cost for each dynamic segmentation method. We compared our method incorporating local optimization and coarse-to-fine detection with the DP-based method and a case where only the local optimization technique was incorporated. The horizontal line along with "Real-time processing" indicates that the computational time is almost the same as the duration of the signal. The proposed method required much less computation than with DP or local optimization. For example, when the width of the shiftable range was 500, the calculation cost of the proposed method was 1/5000 that of DP and 1/10 that with local optimization. We note that in this experiment, the calculation cost of the proposed method is less than the duration of the stored signal, while those of the other two methods are much longer.
IX. CONCLUDING REMARKS
This paper proposed a method for undertaking quick similarity-based searches of an audio signal to detect and locate similar segments to a given audio clip. The proposed method was built on the TAS method, where audio segments are modeled by using histograms. With the proposed method, the histograms are compressed based on a piecewise linear representation of histogram sequences. We introduce dynamic segmentation, which divides histogram sequences into segments of variable lengths. We also addressed the quick suboptimal partitioning of the histogram sequences along with local optimization and coarse-to-fine detection techniques. Experiments revealed significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12, and detected the query in about 0.3 seconds from a 200-hour audio database. Although this paper focused on audio signal retrieval, the proposed method can be easily applied to video signal retrieval [34], [35]. Although the method proposed in this paper is founded on the TAS method, we expect that some of the techniques we have described could be used in conjunction with other similarity-based search methods (e.g. [36], [37], [38], [39]) or a speech/music discriminator [40]. Future work includes the implementation of indexing methods suitable for piecewise linear representation, and the dynamic determination of the initial segmentation, both of which have the potential to improve the search performance further.
APPENDIX A
PROOF OF THEOREM 1
First, let us define
$$z_Q \stackrel{\text{def}}{=} p(x_Q), \quad z_S \stackrel{\text{def}}{=} p(x_S), \quad \hat{x}_Q \stackrel{\text{def}}{=} q(z_Q) = q(p(x_Q)), \quad \hat{x}_S \stackrel{\text{def}}{=} q(z_S) = q(p(x_S)), \quad \delta_Q \stackrel{\text{def}}{=} \delta(p, x_Q), \quad \delta_S \stackrel{\text{def}}{=} \delta(p, x_S).$$
We note that for any histogram $x \in N^n$, $\hat{x} = q(p(x))$ is the projection of x into the subspace defined by the map p(·), and therefore $x - \hat{x}$ is a normal vector of the subspace of p(·). Also, we note that $\|x - \hat{x}\| = \delta(p, x)$ and $\hat{x}$ is on the subspace of p(·). For two vectors $x_1$ and $x_2$, their inner product is denoted as $x_1 \cdot x_2$. Then, we obtain
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|(x_Q - \hat{x}_Q) - (x_S - \hat{x}_S) + (\hat{x}_Q - \hat{x}_S)\|^2 \\ &= \|x_Q - \hat{x}_Q\|^2 + \|x_S - \hat{x}_S\|^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \\ &\qquad + 2(x_Q - \hat{x}_Q)\cdot(\hat{x}_Q - \hat{x}_S) - 2(x_S - \hat{x}_S)\cdot(\hat{x}_Q - \hat{x}_S) \\ &= \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \qquad (10) \\ &\ge \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2\,\delta(p, x_Q)\,\delta(p, x_S) \qquad (11) \\ &= \{\delta(p, x_Q) - \delta(p, x_S)\}^2 + \|z_Q - z_S\|^2 \\ &= \|y_Q - y_S\|^2, \end{aligned}$$
where Eq. (10) comes from the fact that any vector on a subspace and the normal vector of the subspace are mutually orthogonal, and Eq. (11) from the definition of inner product. This concludes the proof of Theorem 1.
APPENDIX B
PROOF OF THEOREM 2
The notations used in the previous section are also employed here. When the projected features z Q , z S and the projection distances
$$\delta_Q \stackrel{\text{def}}{=} \delta(p, x_Q), \qquad \delta_S \stackrel{\text{def}}{=} \delta(p, x_S)$$
are given, we can obtain the distance between the original features as follows:
$$\begin{aligned} \|x_Q - x_S\|^2 &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,(x_Q - q(z_Q)) \cdot (x_S - q(z_S)) \qquad (12) \\ &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,\delta_Q\,\delta_S \cos\phi, \end{aligned}$$
where Eq. (12) is derived from Eq. (10) and φ is the angle between $x_Q - q(z_Q)$ and $x_S - q(z_S)$. From the assumption that random variables $X_S$ and $X_Q$ corresponding to original histograms $x_S$ and $x_Q$ are distributed independently and uniformly in the set A, the following equation is obtained:
$$E\left[ \|X_Q - X_S\|^2 - \|z_Q - z_S\|^2 \right] = \int_0^{\pi} \left( \delta_Q^2 + \delta_S^2 - 2\,\delta_Q\,\delta_S \cos\phi \right) \frac{S_{n-m-1}(\delta_S \sin\phi)}{S_{n-m}(\delta_S)}\, |d(\delta_S \cos\phi)|, \qquad (13)$$
where S k (R) represents the surface area of a k-dimensional hypersphere with radius R, and can be calculated as follows:
$$S_k(R) = \frac{k\, \pi^{k/2}}{(k/2)!}\, R^{k-1} \qquad (14)$$
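Eq. (14) can be checked numerically with the standard gamma-function form of the factorial; the small script below is an illustrative check, not part of the paper, and verifies the familiar special cases k = 2 and k = 3.

```python
import math

def hypersphere_surface(k, R):
    """Surface area S_k(R) of a hypersphere in k-dimensional space, Eq. (14);
    the factorial (k/2)! is evaluated as Gamma(k/2 + 1)."""
    return k * math.pi ** (k / 2) / math.gamma(k / 2 + 1) * R ** (k - 1)

# sanity checks: circle circumference (k = 2) and ordinary sphere surface (k = 3)
assert abs(hypersphere_surface(2, 1.0) - 2 * math.pi) < 1e-12
assert abs(hypersphere_surface(3, 1.0) - 4 * math.pi) < 1e-12
```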
Substituting Eq. (14) into Eq. (13), we obtain
$$E\left[ \|X_Q - X_S\|^2 - \|z_Q - z_S\|^2 \right] = \frac{n-m-1}{n-m}\, (\delta_Q^2 + \delta_S^2) \approx \frac{n-m-1}{n-m}\, \delta_Q^2,$$
where the last approximation comes from the fact that $\delta_Q \gg \delta_S$. Also, from Eq. (4) we have
$$\|x_Q - x_S\|^2 - \|y_Q - y_S\|^2 = 2\,\delta_Q\,\delta_S\,(1 - \cos\phi).$$
Therefore, we derive the following equation in the same way:
| 7,588 |
0710.4180
|
2103921041
|
This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loeve (KL) transform. The proposed search method guarantees the same search results as the search method without the proposed feature-dimension reduction method in principle. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12 that of previous methods and detected queries in approximately 0.3 s from a 200-h audio database.
|
Dynamic segmentation can improve dimension reduction performance. However, finding the optimal boundaries still requires a substantial calculation. With this in mind, several studies have adopted suboptimal approaches, such as longest line fitting @cite_27 , wavelet decomposition @cite_27 @cite_3 and the bottom-up merging of segments @cite_10 . The first two approaches still incur a substantial calculation cost for long time-series signals. The last approach is promising as regards obtaining a rough global approximation at a practical calculation cost. This method is compatible with ours; however, we mainly focus on a more precise local optimization.
|
{
"abstract": [
"Fast retrieval of time series in terms of their contents is important in many application domains. This paper studies database techniques supporting fast searches for time series whose contents are similar to what users specify. The content types studied include shapes, trends, cyclic components, autocorrelation functions and partial autocorrelation functions. Due to the complex nature of the similarity searches involving such contents, traditional database techniques usually cannot provide a fast response when the involved data volume is high. This paper proposes to answer such content-based queries using appropriate approximation techniques. The paper then introduces two specific approximation methods, one is wavelet based and the other line-fitting based. Finally, the paper reports some experiments conducted on a stock price data set as well as a synthesized random walk data set, and shows that both approximation methods significantly reduce the query processing time without introducing intolerable errors.",
"The problem of efficiently and accurately locating patterns of interest in massive time series data sets is an important and non-trivial problem in a wide variety of applications, including diagnosis and monitoring of complex systems, biomedical data analysis, and exploratory data analysis in scientific and business time series. In this paper a probabilistic approach is taken to this problem. Using piecewise linear segmentations as the underlying representation, local features (such as peaks, troughs, and plateaus) are defined using a prior distribution on expected deformations from a basic template. Global shape information is represented using another prior on the relative locations of the individual features. An appropriately defined probabilistic model integrates the local and global information and directly leads to an overall distance measure between sequence patterns based on prior knowledge. A search algorithm using this distance measure is shown to efficiently and accurately find matches for a variety of patterns on a number of data sets, including engineering sensor data from space Shuttle mission archives. The proposed approach provides a natural framework to support user-customizable \"query by content\" on time series data, taking prior domain information into account in a principled manner.",
"Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searching, and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority."
],
"cite_N": [
"@cite_27",
"@cite_10",
"@cite_3"
],
"mid": [
"2167035411",
"2097983034",
"2163336863"
]
}
|
A quick search method for audio signals based on a piecewise linear representation of feature trajectories
|
This paper presents a method for searching quickly through unlabeled audio signal archives (termed stored signals) to detect and locate given audio clips (termed query signals) based on signal similarities.
Many studies related to audio retrieval have dealt with content-based approaches such as audio content classification [1], [2], speech recognition [3], and music transcription [3], [4]. Therefore, these studies mainly focused on associating audio signals with their meanings. In contrast, this study aims at achieving a similarity-based search or more specifically fingerprint identification, which constitutes a search of and retrieval from unlabeled audio archives based only on a signal similarity measure. That is, our objective is signal matching, not the association of signals with their semantics. Although the range of applications for a similarity-based search may seem narrow compared with content-based approaches, this is not actually the case. The applications include the detection and statistical analysis of broadcast music and commercial spots, and the content identification, detection and copyright management of pirated copies of music clips. Fig. 1 represents one of the most representative examples of such applications, which has already been put to practical use. This system automatically checks and identifies broadcast music clips or commercial spots to provide copyright information or other detailed information about the music or the spots.
In audio fingerprinting applications, the query and stored signals cannot be assumed to be exactly the same even in the corresponding sections of the same sound, owing to, for example, compression, transmission and irrelevant noises. Meanwhile, for the applications to be practically viable, the features should be compact and the feature analysis should be computationally efficient. For this purpose, several feature extraction methods have been developed to attain the above objectives. Cano et al. [5] modeled music segments as sequences of sound classes estimated via unsupervised clustering and hidden Markov models (HMMs). Burges et al. [6] employed several layers of Karhunen-Loève (KL) transforms, which reduced the local statistical redundancy of features with respect to time, and took account of robustness to shifting and pitching. Oostveen et al. [7] represented each frame of a video clip as a binary map and used the binary map sequence as a feature. This feature is robust to global changes in luminance and contrast variations. Haitsma et al. [8] and Kurozumi et al. [9] each employed a similar approach in the context of audio fingerprinting. Wang [10] developed a feature-point-based approach to improve the robustness. Our previous approach called the Time-series Active Search (TAS) method [11] introduced a histogram as a compact and noise-robust fingerprint, which models the empirical distribution of feature vectors in a segment. Histograms are sufficiently robust for monitoring broadcast music or detecting pirated copies. Another novelty of this approach is its effectiveness in accelerating the search. Adjacent histograms extracted from sliding audio segments are strongly correlated with each other. Therefore, unnecessary matching calculations are avoided by exploiting the algebraic properties of histograms.
Another important research issue regarding similarity-based approaches involves finding a way to speed up the search. Multi-dimensional indexing methods [12], [13] have frequently been used for accelerating searches. However, when feature vectors are high-dimensional, as they are typically with multimedia signals, the efficiency of the existing indexing methods deteriorates significantly [14], [15]. This is why search methods based on linear scans such as the TAS method are often employed for searches with high-dimensional features. However, methods based solely on linear scans may not be appropriate for managing large-scale signal archives, and therefore dimension reduction should be introduced to mitigate this effect.
To this end, this paper presents a quick and accurate audio search method that uses dimensionality reduction of histogram features. The method involves a piecewise linear representation of histogram sequences by utilizing the continuity and local correlation of the histogram sequences. A piecewise linear representation would be feasible for the TAS framework since the histogram sequences form trajectories in multi-dimensional spaces. By incorporating our method into the TAS framework, we significantly increase the search speed while guaranteeing the same search results as the TAS method. We introduce the following two techniques to obtain a piecewise representation: the dynamic segmentation of the feature trajectories and the segment-based KL transform.
The segment-based KL transform involves the dimensionality reduction of divided histogram sequences (called segments) by KL transform. We take advantage of the continuity and local correlation of feature sequences extracted from audio signals. Therefore, we expect to obtain a linear representation with few approximation errors and low computational cost. The segment-based KL transform consists of the following three components: The basic component of this technique reduces the dimensionality of histogram features. The second component that utilizes residuals between original histogram features and features after dimension reduction greatly reduces the required number of histogram comparisons. Feature sampling is introduced as the third component. This not only saves the storage space but also contributes to accelerating the search.
Dynamic segmentation refers to the division of histogram sequences into segments of various lengths to achieve the greatest possible reduction in the average dimensionality of the histogram features. One of the biggest problems in dynamic segmentation is that finding the optimal set of partitions that minimizes the average dimensionality requires a substantial calculation. The computational time must be no more than that needed for capturing audio signals from the viewpoint of practical applicability. To reduce the calculation cost, our technique addresses the quick suboptimal partitioning of the histogram trajectories, which consists of local optimization to avoid recursive calculations and the coarse-to-fine detection of segment boundaries. This paper is organized as follows: Section II introduces the notations and definitions necessary for the subsequent explanations. Section III explains the TAS method upon which our method is founded. Section IV outlines the proposed search method. Section V discusses a dimensionality reduction technique with the segment-based KL transform. Section VI details dynamic segmentation. Section VII presents experimental results related to the search speed and shows the advantages of the proposed method. Section VIII further discusses the advantages and shortcomings of the proposed method as well as providing additional experimental results. Section IX concludes the paper.
II. PRELIMINARIES
Let $\mathbb{N}$ be the set of all non-negative integers, $\mathbb{R}$ the set of all real numbers, and $\mathbb{N}^n$ the $n$-ary Cartesian product of $\mathbb{N}$. Vectors are denoted by boldface lower-case letters, e.g. x, and matrices are denoted by boldface upper-case letters, e.g. A. The superscript t stands for the transposition of a vector or a matrix, e.g. $x^t$ or $A^t$. The Euclidean norm of an $n$-dimensional vector $x \in \mathbb{R}^n$ is denoted as $\|x\|$:
$$\|x\| \stackrel{\text{def.}}{=} \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2},$$
where |x| is the magnitude of x. For any function f(·) and a random variable X, E[f(X)] stands for the expectation of f(X). Similarly, for a given value y ∈ Y, a function g(·, ·) and a random variable X, E[g(X, y)|y] stands for the conditional expectation of g(X, y) given y.
III. TIME-SERIES ACTIVE SEARCH
Fig. 2 outlines the Time-series Active Search (TAS) method, which is the basis of our proposed method. We provide a summary of the algorithm here; details can be found in [11].
[Preparation stage]
1) Base features are extracted from the stored signal. Our preliminary experiments showed that the short-time frequency spectrum provides sufficient accuracy for our similarity-based search task. Base features are extracted at every sampled time step, for example, every 10 msec. Henceforth, we call the sampled points frames (the term was inspired by video frames). Base features are denoted as $f_S(t_S)$ $(0 \le t_S < L_S)$, where $t_S$ represents the position in the stored signal and $L_S$ is the length of the stored signal (i.e. the number of frames in the stored signal).
2) Every base feature is quantized by vector quantization (VQ). A codebook $\{f_i\}_{i=1}^{n}$ is created beforehand, where n is the codebook size (i.e. the number of codewords in the codebook). We utilize the Linde-Buzo-Gray (LBG) algorithm [16] for codebook creation. A quantized base feature $q_S(t_S)$ is expressed as the VQ codeword assigned to the corresponding base feature $f_S(t_S)$, which is determined as
$$q_S(t_S) = \arg\min_{1 \le i \le n} \| f_S(t_S) - f_i \|^2.$$
[Search stage] 1) Base features f Q (t Q ) (0 ≤ t Q < L Q ) of the query signal are extracted in the same way as the stored signal and quantized with the codebook {f i } n i=1 created in the preparation stage, where t Q represents the position in the query signal and L Q is its length. We do not have to take into account the calculation time for feature quantization since it takes less than 1% of the length of the signal. A quantized base feature for the query signal is denoted as q Q (t Q ).
2) Histograms are created: one for the stored signal, denoted $x_S(t_S)$, and one for the query signal, denoted $x_Q$. First, windows are applied to the sequences of quantized base features extracted from the query and stored signals. The window length W (i.e. the number of frames in the window) is set to $W = L_Q$, namely the length of the query signal. A histogram is created by counting the instances of each VQ codeword over the window. Therefore, each index of a histogram bin corresponds to a VQ codeword. We note that a histogram does not take the codeword order into account.
3) Histogram matching is executed based on the distance between histograms, computed as
$$d(t_S) \stackrel{\text{def.}}{=} \| x_S(t_S) - x_Q \|.$$
When the distance $d(t_S)$ falls below a given value (the search threshold) θ, the query signal is considered to be detected at position $t_S$ of the stored signal.
4) The window on the stored signal is shifted forward in time and the procedure returns to Step 2). As the window shifts forward, the set of VQ codewords it contains changes only gradually, and hence the histogram also changes only gradually. This implies that for a given positive integer w the lower bound on the distance $d(t_S + w)$ is obtained from the triangular inequality as follows:
$$d(t_S + w) \ge \max\{0,\ d(t_S) - \sqrt{2}\,w\},$$
where $\sqrt{2}\,w$ is the maximum possible distance between $x_S(t_S)$ and $x_S(t_S + w)$. Therefore, the skip width $w(t_S)$ of the window at the $t_S$-th frame is obtained as
$$w(t_S) = \begin{cases} \left\lfloor \dfrac{d(t_S) - \theta}{\sqrt{2}} \right\rfloor + 1 & (\text{if } d(t_S) > \theta) \\ 1 & (\text{otherwise}) \end{cases} \qquad (1)$$
where floor(a) indicates the largest integer less than a. We note that no sections will ever be missed that have distance values smaller than the search threshold θ, even if we skip the width w(t S ) given by Eq. (1).
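To make the search loop concrete, the following Python sketch (a toy illustration under our own naming, not the authors' implementation) computes window histograms from quantized codewords, matches them against the query histogram, and skips ahead according to Eq. (1); in the actual TAS method the histogram is updated incrementally rather than recomputed for every window position.

import numpy as np

def tas_search(stored_codes, query_codes, codebook_size, theta):
    # Toy Time-series Active Search: returns positions t_S where the window
    # histogram is within distance theta of the query histogram.
    W = len(query_codes)                                  # window length = query length
    x_Q = np.bincount(query_codes, minlength=codebook_size).astype(float)
    hits, t = [], 0
    while t + W <= len(stored_codes):
        x_S = np.bincount(stored_codes[t:t + W], minlength=codebook_size).astype(float)
        d = np.linalg.norm(x_S - x_Q)                     # histogram distance d(t_S)
        if d <= theta:
            hits.append(t)
            skip = 1
        else:
            skip = int(np.floor((d - theta) / np.sqrt(2))) + 1   # Eq. (1)
        t += skip
    return hits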
IV. FRAMEWORK OF PROPOSED SEARCH METHOD
The proposed method improves the TAS method so that the search is accelerated without false dismissals (incorrectly missing segments that should be detected) or false detections (identifying incorrect matches). To accomplish this, we introduce feature-dimension reduction as explained in Sections V and VI, which reduces the calculation cost required for matching. Fig. 3 shows an overview of the proposed search method, and Fig. 4 outlines the procedure for feature-dimension reduction. The procedure consists of a preparation stage and a search stage.
[Preparation stage]
1) Base features $f_S(t_S)$ are extracted from the stored signal and quantized, to create quantized base features $q_S(t_S)$. The procedure is the same as that of the TAS method.
2) Histograms $x_S(t_S)$ are created in advance from the quantized base features of the stored signal by shifting a window of a predefined length W. We note that with the TAS method the window length W varies from one search to another, while with the present method the window length W is fixed. This is because the histograms $x_S(t_S)$ for the stored signal are created prior to the search. We should also note that the TAS method does not create histograms prior to the search because sequences of VQ codewords need much less storage space than histogram sequences.
3) A piecewise linear representation of the extracted histogram sequence is obtained (Fig. 4 block (A)). This representation is characterized by a set $T = \{t_j\}_{j=0}^{M}$ of segment boundaries expressed by their frame numbers and a set $\{p_j(\cdot)\}_{j=1}^{M}$ of M functions, where M is the number of segments, $t_0 = 0$ and $t_M = L_S$. The j-th segment is expressed as a half-open interval $[t_{j-1}, t_j)$ since it starts from $x_S(t_{j-1})$ and ends at $x_S(t_j - 1)$. Section VI shows how to obtain such segment boundaries. Each function $p_j(\cdot) : \mathbb{N}^n \to \mathbb{R}^{m_j}$ that corresponds to the j-th segment reduces the dimensionality n of the histogram to the dimensionality $m_j$. Section V-B shows how to determine these functions.
4) The histograms $x_S(t_S)$ are compressed by using the functions $\{p_j(\cdot)\}_{j=1}^{M}$ obtained in the previous step, and then compressed features $y_S(t_S)$ are created.
[Search stage]
1) Base features $f_Q(t_Q)$ are extracted and a histogram $x_Q$ is created from the query signal in the same way as in the TAS method.
2) The histogram x Q is compressed based on the functions {p j (·)} M j=1 obtained in the preparation stage, to create M compressed features y Q [j] (j = 1, · · · , M ). Each compressed feature y Q [j] corresponds to the j-th function p j (·). The procedure used to create compressed features is the same as that for the stored signal.
3) Compressed features created from the stored and query signals are matched; that is, the distance
$$d(t_S) = \| y_S(t_S) - y_Q[j_{t_S}] \|$$
between the two compressed features $y_S(t_S)$ and $y_Q[j_{t_S}]$ is calculated, where $j_{t_S}$ represents the index of the segment that contains $x_S(t_S)$, namely $t_{j_{t_S}-1} \le t_S < t_{j_{t_S}}$.
4) If the distance falls below the search threshold θ, the original histograms $x_S(t_S)$ corresponding to the surviving compressed features $y_S(t_S)$ are verified. Namely, the distance $d(t_S) = \| x_S(t_S) - x_Q \|$ is calculated and compared with the search threshold θ.
5) A window on the stored signal is shifted forward in time and the procedure goes back to Step 3).
The skip width of the window is calculated from the distance d(t S ) between compressed features.
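The filter-then-verify structure of the search stage can be sketched as follows; this is a simplified illustration in which compress(j, x) is a hypothetical helper standing for the per-segment mapping p_j plus the projection-distance augmentation of Section V-C, seg_of_frame[t] gives the segment index of frame t, and the precomputed compressed features yS are assumed to fit in memory.

import numpy as np

def proposed_search(xS, yS, seg_of_frame, compress, x_Q, theta):
    # Two-stage search: cheap matching on compressed features, followed by
    # verification on the original histograms for the surviving candidates.
    M = max(seg_of_frame) + 1
    y_Q = [compress(j, x_Q) for j in range(M)]    # one compressed query feature per segment
    hits, t = [], 0
    while t < len(xS):
        j = seg_of_frame[t]
        d_low = np.linalg.norm(yS[t] - y_Q[j])    # lower bound of the true histogram distance
        if d_low <= theta:                        # candidate: verify with the original histogram
            if np.linalg.norm(xS[t] - x_Q) <= theta:
                hits.append(t)
            t += 1
        else:                                     # safe skip, analogous to Eq. (1)
            t += int(np.floor((d_low - theta) / np.sqrt(2))) + 1
    return hits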
B. Segment-based KL transform
As the first step towards obtaining a piecewise representation, the histogram sequence is divided into M segments. Dynamic segmentation is introduced here, which enhances feature-dimension reduction performance. This will be explained in detail in Section VI. Second, a KL transform is performed for every segment and a minimum number of eigenvectors are selected such that the sum of their contribution rates exceeds a predefined value σ, where the contribution rate of an eigenvector stands for its eigenvalue divided by the sum of all eigenvalues, and the predefined value σ is called the contribution threshold. The number of selected eigenvectors in the j-th segment is written as m j . Then, a function p j (·) : N n → R m j (j = 1, 2, · · · , M ) for dimensionality reduction is determined as a map to a subspace whose bases are the selected eigenvectors:
$$p_j(x) = P_j^{\,t}\,(x - \bar{x}_j), \qquad (2)$$
where x is a histogram, $\bar{x}_j$ is the centroid of the histograms contained in the j-th segment, and $P_j$ is an $(n \times m_j)$ matrix whose columns are the selected eigenvectors. Finally, each histogram is compressed by using the function $p_j(\cdot)$ of the segment to which the histogram belongs. Henceforth, we refer to $p_j(x)$ as the projected feature of a histogram x.
In the following, we omit the index j corresponding to a segment unless it is specifically needed, e.g. p(x) and $\bar{x}$.
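As a concrete illustration, the per-segment KL transform can be realized with an eigendecomposition of each segment's covariance matrix. The sketch below is ours and makes simplifying assumptions (each segment's histograms fit in one array); sigma plays the role of the contribution threshold described above.

import numpy as np

def segment_klt(X_seg, sigma=0.9):
    # X_seg: (T, n) array of histograms belonging to one segment.
    # Returns the centroid and the (n, m) matrix P whose columns are the leading
    # eigenvectors, with m chosen so that the cumulative contribution rate exceeds sigma.
    centroid = X_seg.mean(axis=0)
    C = np.cov(X_seg, rowvar=False)                    # (n, n) covariance matrix
    eigval, eigvec = np.linalg.eigh(C)                 # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]     # sort in descending order
    rates = np.cumsum(eigval) / eigval.sum()           # cumulative contribution rates
    m = int(np.searchsorted(rates, sigma)) + 1
    return centroid, eigvec[:, :m]

def project(P, centroid, x):
    # Eq. (2): dimensionality reduction of a histogram x.
    return P.T @ (x - centroid)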
C. Distance bounding
From the nature of the KL transform, the distance between two projected features gives the lower bound of the distance between corresponding original histograms. However, this bound does not approximate the original distance well, and this results in many false detections.
To improve the distance bound, we introduce a new technique. Let us define the projection distance δ(p, x) as the distance between a histogram x and the back-projection q(z) of its projected feature z = p(x):
$$\delta(p, x) \stackrel{\text{def.}}{=} \| x - q(z) \|, \qquad (3)$$
where $q(\cdot) : \mathbb{R}^m \to \mathbb{R}^n$ is the generalized inverse map of $p(\cdot)$, defined as
$$q(z) \stackrel{\text{def.}}{=} P z + \bar{x}.$$
Here we create a compressed feature y, which consists of the projected feature $z = (z_1, z_2, \cdots, z_m)^t$ augmented with the projection distance δ(p, x):
$$y = y(p, x) = (z_1, z_2, \cdots, z_m, \delta(p, x))^t,$$
where y(p, x) means that y is determined by p and x. The Euclidean distance between compressed features is utilized as a new criterion for matching instead of the Euclidean distance between projected features. The distance is expressed as
$$\| y_S - y_Q \|^2 = \| z_S - z_Q \|^2 + \{ \delta(p, x_S) - \delta(p, x_Q) \}^2, \qquad (4)$$
where $z_S = p(x_S)$ (resp. $z_Q = p(x_Q)$) is the projected feature derived from the original histogram $x_S$ (resp. $x_Q$) and $y_S = y(p, x_S)$ (resp. $y_Q = y(p, x_Q)$) is the corresponding compressed feature. Eq. (4) implies that the distance between compressed features is at least as large as the distance between the corresponding projected features. In addition, from the above discussions, we have the following two properties, which indicate that the distance $\|y_S - y_Q\|$ between two compressed features is a better approximation of the distance $\|x_S - x_Q\|$ between the original histograms than the distance $\|z_S - z_Q\|$ between projected features (Theorem 1), and that the expected approximation error is much smaller (Theorem 2). Theorem 1:
$$\| z_S - z_Q \| \le \| y_S - y_Q \| = \min_{(\tilde{x}_S, \tilde{x}_Q) \in A(y_S, y_Q)} \| \tilde{x}_S - \tilde{x}_Q \| \le \| x_S - x_Q \|, \qquad (5)$$
where $A(y_S, y_Q)$ is the set of all possible pairs $(\tilde{x}_S, \tilde{x}_Q)$ of original histograms for the given compressed features $(y_S, y_Q)$.
Theorem 2: Suppose that the random variables $(X_S^n, X_Q^n)$ corresponding to the original histograms $(x_S, x_Q)$ have a uniform distribution on the set $A(y_S, y_Q)$ defined in Theorem 1, and that $E[\delta(p, X_S^n)] \ll E[\delta(p, X_Q^n)]$. The expected approximation errors can then be evaluated as
$$E\left[ \| X_S^n - X_Q^n \|^2 - \| y_S - y_Q \|^2 \,\middle|\, y_S, y_Q \right] \ll E\left[ \| X_S^n - X_Q^n \|^2 - \| z_S - z_Q \|^2 \,\middle|\, y_S, y_Q \right]. \qquad (6)$$
The proofs are shown in the appendix. Fig. 6 shows an intuitive illustration of the relationships between projection distances, distances between projected features and distances between compressed features, where the histograms are in a 3-dimensional space and the subspace dimensionality is 1. In this case, for given compressed features (y S , y Q ) and a fixed query histogram x Q , a stored histogram x S must be on a circle whose center is q(z Q ). This circle corresponds to the set A(y S , y Q ).
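The bounding property of Theorem 1 is easy to check numerically. The sketch below is purely illustrative (random Poisson counts stand in for real histograms): it builds compressed features as described above and verifies that the compressed-feature distance lies between the projected-feature distance and the original distance.

import numpy as np

rng = np.random.default_rng(0)
n, m, T = 128, 8, 500                                   # histogram dim, subspace dim, sample count
X = rng.poisson(3.0, size=(T, n)).astype(float)         # toy "histograms"
centroid = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - centroid, full_matrices=False)
P = Vt[:m].T                                            # (n, m) leading eigenvectors

def compress(x):
    z = P.T @ (x - centroid)                            # projected feature, Eq. (2)
    delta = np.linalg.norm(x - (P @ z + centroid))      # projection distance, Eq. (3)
    return z, np.append(z, delta)                       # projected and compressed features

x_S, x_Q = X[0], rng.poisson(3.0, size=n).astype(float)
z_S, y_S = compress(x_S)
z_Q, y_Q = compress(x_Q)
dz = np.linalg.norm(z_S - z_Q)
dy = np.linalg.norm(y_S - y_Q)
dx = np.linalg.norm(x_S - x_Q)
assert dz <= dy <= dx + 1e-9                            # Theorem 1
print(dz, dy, dx)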
D. Feature sampling
In the TAS method, quantized base features are stored, because they need much less storage space than the histogram sequence and creating histograms on the spot takes little calculation. With the present method, however, compressed features must be computed and stored in advance so that the search results can be returned as quickly as possible, and therefore much more storage space is needed than with the TAS method. The increase in storage space may cause a reduction in search speed due to the increase in disk access.
Based on the above discussion, we incorporate feature sampling in the temporal domain. The following idea is inspired by the technique called Piecewise Aggregate Approximation (PAA) [22]. With the proposed feature sampling method, first the compressed feature sequence $\{y_S(t_S)\}_{t_S=0}^{L_S - W - 1}$ is divided into subsequences $\{y_S(ia), y_S(ia+1), \cdots, y_S(ia+a-1)\}_{i=0,1,\cdots}$ of length a. Then, the first compressed feature $y_S(ia)$ of every subsequence is selected as a representative feature. A lower bound of the distances between the query compressed feature and the stored compressed features contained in the subsequence can be expressed in terms of the representative feature $y_S(ia)$. This bound is obtained from the triangular inequality as follows:
$$\| y_S(ia+k) - y_Q \| \ge \| y_S(ia) - y_Q \| - d(i), \qquad d(i) \stackrel{\text{def.}}{=} \max_{0 \le k' \le a-1} \| y_S(ia+k') - y_S(ia) \|$$
$$(\forall i = 0, 1, \cdots,\ \forall k = 0, \cdots, a-1).$$
This implies that preserving the representative feature $y_S(ia)$ and the maximum distance $d(i)$ is sufficient to guarantee that there are no false dismissals. This feature sampling is feasible for histogram sequences because successive histograms cannot change rapidly. Furthermore, the technique mentioned in this section also contributes to accelerating the search, especially when successive histograms change little.
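A minimal sketch of this sampling step is given below (assuming, for simplicity, that all compressed features within a subsequence have the same dimensionality). It stores one representative per subsequence together with the maximum intra-subsequence distance d(i), which is all that is needed to preserve the lower bound.

import numpy as np

def sample_features(yS, a):
    # yS: (T, m) array of compressed features; a: sampling duration.
    # Returns representatives yS[i*a] and radii d[i] = max_k ||yS[i*a+k] - yS[i*a]||.
    reps, radii = [], []
    for start in range(0, len(yS), a):
        block = yS[start:start + a]
        reps.append(block[0])
        radii.append(np.max(np.linalg.norm(block - block[0], axis=1)))
    return np.array(reps), np.array(radii)

def block_lower_bound(rep, radius, y_Q):
    # Lower bound on ||yS(ia+k) - y_Q|| for every frame of the subsequence.
    return max(0.0, np.linalg.norm(rep - y_Q) - radius)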
VI. DYNAMIC SEGMENTATION
A. Related work
The approach used for dividing histogram sequences into segments is critical for realizing efficient feature-dimension reduction since the KL transform is most effective when the constituent elements in the histogram segments are similar. To achieve this, we introduce a dynamic segmentation strategy.
Dynamic segmentation is a generic term that refers to techniques for dividing sequences into segments of various lengths. Dynamic segmentation methods for time-series signals have already been applied to various kinds of applications such as speech coding (e.g. [24]), the temporal compression of waveform signals [25], the automatic segmentation of speech signals into phonic units [26], sinusoidal modeling of audio signals [27], [28], [29] and motion segmentation in video signals [30]. We employ dynamic segmentation to minimize the average dimensionality of high-dimensional feature trajectories.
Dynamic segmentation can improve dimension reduction performance. However, finding the optimal boundaries still requires a substantial calculation. With this in mind, several studies have adopted suboptimal approaches, such as longest line fitting [23], wavelet decomposition [23], [21] and the bottom-up merging of segments [31]. The first two approaches still incur a substantial calculation cost for long time-series signals. The last approach is promising as regards obtaining a rough global approximation at a practical calculation cost. This method is compatible with ours; however, we mainly focus on a more precise local optimization.
B. Framework
Fig. 7 shows an outline of our dynamic segmentation method. The objective of the dynamic segmentation method is to divide the stored histogram sequence so that its piecewise linear representation is well characterized by a set of lower dimensional subspaces. To this end, we formulate the dynamic segmentation as a way to find a set $T^* = \{t_j^*\}_{j=0}^{M}$ of segment boundaries that minimizes the average dimensionality of these segment-approximating subspaces, on condition that the boundary $t_j$ between the j-th and the (j+1)-th segments lies in a shiftable range $S_j$, defined as a section with a width ∆ in the vicinity of the initial position $t_j^0$ of the boundary between the j-th and the (j+1)-th segments. Namely, the set $T^*$ of the optimal segment boundaries is given by the following formula:
$$T^* = \{t_j^*\}_{j=0}^{M} \stackrel{\text{def.}}{=} \arg\min_{\{t_j\}_{j=0}^{M}:\, t_j \in S_j\ \forall j}\ \frac{1}{L_S} \sum_{j=1}^{M} (t_j - t_{j-1})\, c(t_{j-1}, t_j, \sigma) \qquad (7)$$
$$S_j \stackrel{\text{def.}}{=} \{ t_j : t_j^0 - \Delta \le t_j \le t_j^0 + \Delta \} \qquad (8)$$
where c(t i , t j , σ) represents the subspace dimensionality on the segment between the t i -th and the t j -th frames for a given contribution threshold σ, t * 0 = 0 and t * M = L S . The initial positions of the segment boundaries are set beforehand by equi-partitioning.
The above optimization problem defined by Eq. (7) would normally be solved with dynamic programming (DP) (e.g. [32]). However, DP is not practical in this case. Deriving c(t j−1 , t j , σ) included in Eq. (7) incurs a substantial calculation cost since it is equivalent to executing a KL transform calculation for the segment [t j−1 , t j ). This implies that the DP-based approach requires a significant amount of calculation, although less than a naive approach. The above discussion implies that we should reduce the number of KL transform calculations to reduce the total calculation cost required for the optimization. When we adopt the total number of KL transform calculations as a measure for assessing the calculation cost, the cost is evaluated as O(M ∆ 2 ), where M is the number of segments and ∆ is the width of the shiftable range.
To reduce the calculation cost, we instead adopt a suboptimal approach. Two techniques are incorporated: local optimization and the coarse-to-fine detection of segment boundaries. We explain these two techniques in the following sections.
C. Local optimization
The local optimization technique modifies the formulation (Eq. (7)) of dynamic segmentation so that it minimizes the average dimensionality of the subspaces of adjoining segments. The basic idea is similar to the "forward segmentation" technique introduced by Goodwin [27], [28] for deriving accurate sinusoidal models of audio signals. The position t * j of the boundary is determined by using the following forward recursion as a substitute for Eq. (7):
$$t_j^* = \arg\min_{t_j \in S_j} \frac{(t_j - t_{j-1}^*)\, c_j^* + (t_{j+1}^0 - t_j)\, c_{j+1}^0}{t_{j+1}^0 - t_{j-1}^*}, \qquad (9)$$
where
$$c_j^* = c(t_{j-1}^*, t_j, \sigma), \qquad c_{j+1}^0 = c(t_j, t_{j+1}^0, \sigma),$$
and $S_j$ is defined in Eq. (8). As can be seen in Eq. (9), we can determine each segment boundary independently, unlike in the formulation of Eq. (7). Therefore, the local optimization technique reduces the amount of calculation needed for extracting an appropriate representation, which is evaluated as $O(M\Delta)$, where M is the number of segments and ∆ is the width of the shiftable range.
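The forward recursion of Eq. (9) can be sketched as follows; subspace_dim(lo, hi) is a hypothetical helper returning c(lo, hi, σ), i.e. the number of eigenvectors needed to reach the contribution threshold on the histograms of frames [lo, hi).

def optimize_boundary(t_prev_star, t0_j, t0_next, delta, subspace_dim):
    # Local optimization of one boundary t_j (Eq. (9)): scan the shiftable range
    # and pick the position minimizing the length-weighted average dimensionality
    # of the two adjoining segments.
    best_t, best_cost = None, float("inf")
    for t_j in range(t0_j - delta, t0_j + delta + 1):
        c_left = subspace_dim(t_prev_star, t_j)       # c*_j
        c_right = subspace_dim(t_j, t0_next)          # c0_{j+1}
        cost = ((t_j - t_prev_star) * c_left +
                (t0_next - t_j) * c_right) / (t0_next - t_prev_star)
        if cost < best_cost:
            best_t, best_cost = t_j, cost
    return best_t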
D. Coarse-to-fine detection
The coarse-to-fine detection technique selects suboptimal boundaries in the sense of Eq. (9) at a lower computational cost. We note that small boundary shifts do not contribute greatly to changes in segment dimensionality because successive histograms cannot change rapidly. With this in mind, the boundary is determined in the following three steps: 1) The dimensions of the j-th and (j+1)-th segments are calculated when the segment boundary $t_j$ is at the initial position $t_j^0$ and at the edges ($t_j^0 - \Delta$ and $t_j^0 + \Delta$) of its shiftable range.
2) The dimensions of the j-th and (j+1)-th segments are calculated when the segment boundary $t_j$ is at the positions $t_j^0 - \Delta + \frac{2\Delta}{u_j + 1}\, i$ $(i = 1, 2, \cdots, u_j)$, where $u_j$ determines the number of calculations in this step.
3) The dimensions of the j-th and (j + 1)-th segments are calculated in detail when the segment boundary t j is in the positions where dimension changes are detected in the previous step. We determine the number u j of dimension calculations in step 2 so that the number of calculations in all the above steps, f j (u j ), is minimized. Then, f j (u j ) is given as follows:
$$f_j(u_j) = 2\,(3 + u_j) + K_j \Delta\, \frac{1}{\frac{1}{2} u_j + 1},$$
where K j is the estimated number of positions where the dimensionalities change, which is experimentally determined as
$$K_j = \begin{cases} c_{LR} - c_{LL}, & (\text{if } c_{LR} \le c_{RR},\ c_{LL} < c_{RL}) \\ (c_{LC} - c_{LL}) + \min(c_{RC}, c_{LR}) - \min(c_{LC}, c_{RR}), & (\text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} \le c_{RC}) \\ (c_{RC} - c_{RR}) + \min(c_{LC}, c_{RL}) - \min(c_{RC}, c_{LL}), & (\text{if } c_{LR} > c_{RR},\ c_{LL} < c_{RL},\ c_{LC} > c_{RC}) \\ c_{RL} - c_{RR}, & (\text{otherwise}) \end{cases}$$
and
$$\begin{aligned} c_{LL} &= c(t_{j-1}^*, t_j^0 - \Delta, \sigma), & c_{RL} &= c(t_j^0 - \Delta, t_{j+1}^0, \sigma), \\ c_{LC} &= c(t_{j-1}^*, t_j^0, \sigma), & c_{RC} &= c(t_j^0, t_{j+1}^0, \sigma), \\ c_{LR} &= c(t_{j-1}^*, t_j^0 + \Delta, \sigma), & c_{RR} &= c(t_j^0 + \Delta, t_{j+1}^0, \sigma). \end{aligned}$$
The first term of $f_j(u_j)$ refers to the number of calculations in steps 1 and 2, and the second term corresponds to that in step 3. $f_j(u_j)$ takes the minimum value $4\sqrt{2K_j\Delta} + 2$ when $u_j = \sqrt{2K_j\Delta} - 2$. The calculation cost when incorporating the local optimization and coarse-to-fine detection techniques is evaluated as follows:
$$E\left[ M\left(4\sqrt{2K_j\Delta} + 2\right) \right] \le M\left(4\sqrt{2K\Delta} + 2\right) = O\!\left(M\sqrt{K\Delta}\right),$$
where $K = E[K_j]$
, M is the number of segments and ∆ is the width of the shiftable range. The first inequality is derived from Jensen's inequality (e.g. [33, Theorem 2.6.2]). The coarse-to-fine detection technique can additionally reduce the calculation cost because K is usually much smaller than ∆.
VII. EXPERIMENTS
A. Conditions
We tested the proposed method in terms of calculation cost, i.e. search speed. We again note that the proposed search method guarantees, in principle, the same search results as the TAS method, and therefore we only need to evaluate the search speed. The search accuracy of the TAS method was reported in a previous paper [11]: in summary, for audio identification tasks, there were no false detections or false dismissals down to an S/N ratio of 20 dB, provided the query duration was longer than 10 seconds.
In the experiments, we used a recording of a real TV broadcast. An audio signal broadcast from a particular TV station was recorded and encoded in MPEG-1 Layer 3 (MP3) format. We recorded a 200-hour audio signal as a stored signal, and recorded 200 15-second spots from another TV broadcast as queries. Thus, the task was to detect and locate specific commercial spots from 200 consecutive hours of TV recording. Each spot occurred 2-30 times in the stored signal. Each signal was first digitized at a 32 kHz sampling frequency and 16 bit quantization accuracy. The bit rate for the MP3 encoding was 56 kbps. We extracted base features from each audio signal using a 7-channel second-order IIR band-pass filter with Q = 10. The center frequencies of the filter were equally spaced on a log frequency scale. The base features were calculated every 10 milliseconds from a 60 millisecond window. The base feature vectors were quantized by using the VQ codebook with 128 codewords, and histograms were created based on the scheme of the TAS method. Therefore, the histogram dimension was 128. We implemented the feature sampling described in Section V-D and the sampling duration was a = 50. The tests were carried out on a PC (Pentium 4 2.0 GHz).
B. Search speed
We first measured the CPU time and the number of matches in the search. The search time we measured in this test comprised only the CPU time in the search stage shown in Section IV. This means that the search time did not include the CPU time for any procedures in the preparation stage such as base feature extraction, histogram creation, or histogram dimension reduction for the stored signal. The search threshold was adjusted to θ = 85 so that there were no false detections or false dismissals. We compared the following methods:
(i) The TAS method (baseline). (ii) The proposed search method without the projection distance being embedded in the compressed features. (iii) The proposed search method.
We first examined the relationships between the average segment duration (equivalent to the number of segments), the search time, and the number of matches. The following parameters were set for feature-dimension reduction: The contribution threshold was σ = 0.9. The width of the shiftable range for dynamic segmentation was 500. Fig. 10 shows the relationship between the average segment duration and the search time, where the ratio of the search speed of the proposed method to that of the TAS method (conventional method in the figure) is called the speed-up factor. Also, Fig. 11 shows the relationship between the average segment duration and the number of matches. Although the proposed method only slightly increased the number of matches, it greatly reduced the search time. This is because it greatly reduced the calculation cost per match owing to feature-dimension reduction. For example, the proposed method reduced the search time to almost 1/12 when the segment duration was 1.2 minutes (i.e. the number of segments was 10000). As mentioned in Section V-D, the feature sampling technique also contributed to the acceleration of the search, and the effect is similar to histogram skipping. Considering the dimension reduction performance results described later, we found that those effects were greater than that caused by dimension reduction for large segment durations (i.e. a small number of segments). This is examined in detail in the next section. We also found that the proposed method reduced the search time and the number of matches when the distance bounding technique was incorporated, especially when there were a large number of segments.
VIII. DISCUSSION
The previous section described the experimental results solely in terms of search speed and the advantages of the proposed method compared with the previous method. This section provides further discussion of the advantages and shortcomings of the proposed method as well as additional experimental results.
We first deal with the dimension reduction performance derived from the segment-based KL transform. We employed equi-partitioning to obtain segments, which means that we did not incorporate the dynamic segmentation technique. Fig. 12 shows the experimental result. The proposed method monotonically reduced the dimensions as the number of segments increased if the segment duration was shorter than 10 hours (the number of segments M ≥ 20). We can see that the proposed method reduced the dimensions, for example, to 1/25 of the original histograms when the contribution threshold was 0.90 and the segment duration was 1.2 minutes (the number of segments was 10000). The average dimensions did not decrease as the number of segments increased if the number of segments was relatively small. This is because we decided the number of subspace bases based on the contribution rates. Next, we deal with the dimension reduction performance derived from the dynamic segmentation technique. The initial positions of the segment boundaries were set by equi-partitioning. The duration of segments obtained by equi-partitioning was 12 minutes (i.e. there were 1000 segments). Fig. 13 shows the result. The proposed method further reduced the feature dimensionality to 87.5% of its initial value, which is almost the same level of performance as when only the local search was utilized. We were unable to calculate the average dimensionality when using DP because of the substantial amount of calculation, as described later. When the shiftable range was relatively narrow, the dynamic segmentation performance was almost the same as that of DP.
Here, we review the search speed performance shown in Fig. 10. It should be noted that three techniques in our proposed method contributed to speeding up the search, namely feature-dimension reduction, distance bounding and feature sampling. When the number of segments was relatively small, the speed-up factor was much larger than the ratio of the dimension of the compressed features to that of the original histograms, which can be seen in Figs. 10, 12 and 13. This implies that the feature sampling technique dominated the search performance in this case. On the other hand, when the number of segments was relatively large, the proposed search method did not greatly improve the search speed compared with the dimension reduction performance. This implies that the feature sampling technique degraded the search performance. In this case, the distance bounding technique mainly contributed to the improvement of the search performance as seen in Fig. 10.
Lastly, we discuss the amount of calculation necessary for dynamic segmentation. We again note that although dynamic segmentation can be executed prior to providing a query signal, the computational time must be at worst smaller than the duration of the stored signal from the viewpoint of practical applicability. We adopted the total number of dimension calculations needed to obtain the dimensions of the segments as a measure for comparing the calculation cost in the same way as in Section VI. Fig. 14 shows the estimated calculation cost for each dynamic segmentation method. We compared our method incorporating local optimization and coarse-to-fine detection with the DP-based method and a case where only the local optimization technique was incorporated. The horizontal line along with "Real-time processing" indicates that the computational time is almost the same as the duration of the signal. The proposed method required much less computation than with DP or local optimization. For example, when the width of the shiftable range was 500, the calculation cost of the proposed method was 1/5000 that of DP and 1/10 that with local optimization. We note that in this experiment, the calculation cost of the proposed method is less than the duration of the stored signal, while those of the other two methods are much longer.
IX. CONCLUDING REMARKS
This paper proposed a method for undertaking quick similarity-based searches of an audio signal to detect and locate segments similar to a given audio clip. The proposed method is built on the TAS method, where audio segments are modeled by histograms. With the proposed method, the histograms are compressed based on a piecewise linear representation of histogram sequences. We introduced dynamic segmentation, which divides histogram sequences into segments of variable lengths. We also addressed the quick suboptimal partitioning of the histogram sequences by means of local optimization and coarse-to-fine detection techniques. Experiments revealed significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12, and detected the query in about 0.3 seconds from a 200-hour audio database. Although this paper focused on audio signal retrieval, the proposed method can easily be applied to video signal retrieval [34], [35]. Although the method proposed in this paper is founded on the TAS method, we expect that some of the techniques we have described could be used in conjunction with other similarity-based search methods (e.g. [36], [37], [38], [39]) or a speech/music discriminator [40]. Future work includes the implementation of indexing methods suitable for the piecewise linear representation, and the dynamic determination of the initial segmentation, both of which have the potential to improve the search performance further.
APPENDIX A PROOF OF THEOREM 1
First, let us define
$$z_Q \stackrel{\text{def.}}{=} p(x_Q), \quad z_S \stackrel{\text{def.}}{=} p(x_S), \quad \hat{x}_Q \stackrel{\text{def.}}{=} q(z_Q) = q(p(x_Q)), \quad \hat{x}_S \stackrel{\text{def.}}{=} q(z_S) = q(p(x_S)), \quad \delta_Q \stackrel{\text{def.}}{=} \delta(p, x_Q), \quad \delta_S \stackrel{\text{def.}}{=} \delta(p, x_S).$$
We note that for any histogram $x \in \mathbb{N}^n$, $\hat{x} = q(p(x))$ is the projection of x onto the subspace defined by the map p(·), and therefore $x - \hat{x}$ is a normal vector of the subspace of p(·). Also, we note that $\|x - \hat{x}\| = \delta(p, x)$ and $\hat{x}$ lies on the subspace of p(·). For two vectors $x_1$ and $x_2$, their inner product is denoted as $x_1 \cdot x_2$. Then, we obtain
$$\begin{aligned}
\|x_Q - x_S\|^2 &= \|(x_Q - \hat{x}_Q) - (x_S - \hat{x}_S) + (\hat{x}_Q - \hat{x}_S)\|^2 \\
&= \|x_Q - \hat{x}_Q\|^2 + \|x_S - \hat{x}_S\|^2 + \|\hat{x}_Q - \hat{x}_S\|^2 \\
&\quad - 2\,(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) + 2\,(x_Q - \hat{x}_Q)\cdot(\hat{x}_Q - \hat{x}_S) - 2\,(x_S - \hat{x}_S)\cdot(\hat{x}_Q - \hat{x}_S) \\
&= \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2\,(x_Q - \hat{x}_Q)\cdot(x_S - \hat{x}_S) \qquad (10) \\
&\ge \delta(p, x_Q)^2 + \delta(p, x_S)^2 + \|\hat{x}_Q - \hat{x}_S\|^2 - 2\,\delta(p, x_Q)\,\delta(p, x_S) \qquad (11) \\
&= \{\delta(p, x_Q) - \delta(p, x_S)\}^2 + \|z_Q - z_S\|^2 \\
&= \|y_Q - y_S\|^2,
\end{aligned}$$
where Eq. (10) comes from the fact that any vector on a subspace and the normal vector of the subspace are mutually orthogonal, and Eq. (11) from the definition of inner product. This concludes the proof of Theorem 1.
APPENDIX B PROOF OF THEOREM 2
The notations used in the previous section are also employed here. When the projected features z Q , z S and the projection distances
δ Q def. = δ(p, x Q ), δ S def. = δ(p, x S )
are given, we can obtain the distance between the original features as follows:
$$\begin{aligned}
\|x_Q - x_S\|^2 &= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,(x_Q - q(z_Q)) \cdot (x_S - q(z_S)) \qquad (12) \\
&= \|z_Q - z_S\|^2 + \delta_Q^2 + \delta_S^2 - 2\,\delta_Q\,\delta_S \cos\phi,
\end{aligned}$$
where Eq. (12) is derived from Eq. (10) and φ is the angle between $x_Q - q(z_Q)$ and $x_S - q(z_S)$. From the assumption that the random variables $X_S$ and $X_Q$ corresponding to the original histograms $x_S$ and $x_Q$ are distributed independently and uniformly in the set A, the following equation is obtained:
$$E\left[ \|X_Q - X_S\|^2 - \|z_Q - z_S\|^2 \right] = \int_0^{\pi} \left( \delta_Q^2 + \delta_S^2 - 2\,\delta_Q\,\delta_S \cos\phi \right) \frac{S_{n-m-1}(\delta_S \sin\phi)}{S_{n-m}(\delta_S)}\, \left| d(\delta_S \cos\phi) \right|, \qquad (13)$$
where S k (R) represents the surface area of a k-dimensional hypersphere with radius R, and can be calculated as follows:
$$S_k(R) = \frac{k\, \pi^{k/2}}{(k/2)!}\, R^{\,k-1}. \qquad (14)$$
Substituting Eq. (14) into Eq. (13), we obtain
$$E\left[ \|X_Q - X_S\|^2 - \|z_Q - z_S\|^2 \right] = \frac{n-m-1}{n-m} \left( \delta_Q^2 + \delta_S^2 \right) \approx \frac{n-m-1}{n-m}\, \delta_Q^2,$$
where the last approximation comes from the fact that $\delta_Q \gg \delta_S$. Also, from Eq. (4) we have
$$\|x_Q - x_S\|^2 - \|y_Q - y_S\|^2 = 2\,\delta_Q\,\delta_S\,(1 - \cos\phi).$$
Therefore, we derive the following equation in the same way:
| 7,588 |
0710.4278
|
1991617913
|
We present a generalized method for reconstructing the shape of an object from measured gradient data. A certain class of optical sensors does not measure the shape of an object but rather its local slope. These sensors display several advantages, including high information efficiency, sensitivity, and robustness. For many applications, however, it is necessary to acquire the shape, which must be calculated from the slopes by numerical integration. Existing integration techniques show drawbacks that render them unusable in many cases. Our method is based on an approximation employing radial basis functions. It can be applied to irregularly sampled, noisy, and incomplete data, and it reconstructs surfaces both locally and globally with high accuracy.
|
In general, it is crucial to note that the reconstruction method depends on the slope-measuring sensor and the properties of the acquired data. For example, slope data acquired by Shape from Shading is rather noisy, exhibits curl, and is usually located on a full grid of millions of points. Here, a fast subspace approximation method like the one proposed by Frankot and Chellappa @cite_13 is appropriate. On the other hand, wavefront reconstruction deals with much smaller data sets, and the surface is known to be rather smooth and flat. In this case, a direct finite-difference solver can be applied @cite_11 . Deflectometric sensors deliver a third type of data: It consists of very large data sets with rather small noise and curl, but the data may not be complete, depending on the local reflectance of the measured surface. Furthermore, the measuring field may have an unknown, irregularly shaped boundary. These properties render most of the aforementioned methods unusable for deflectometric data. In the following sections, we will describe a surface reconstruction method which is especially able to deal with slope data acquired by sensors such as Phase-measuring Deflectometry.
|
{
"abstract": [
"An approach for enforcing integrability, a particular implementation of the approach, an example of its application to extending an existing shape-from-shading algorithm, and experimental results showing the improvement that results from enforcing integrability are presented. A possibly nonintegrable estimate of surface slopes is represented by a finite set of basis functions, and integrability is enforced by calculating the orthogonal projection onto a vector subspace spanning the set of integrable slopes. The integrability projection constraint was applied to extending an iterative shape-from-shading algorithm of M.J. Brooks and B.K.P. Horn (1985). Experimental results show that the extended algorithm converges faster and with less error than the original version. Good surface reconstructions were obtained with and without known boundary conditions and for fairly complicated surfaces. >",
"The problem of wave-front estimation from wave-front slope measurements has been examined from a least-squares curve fitting model point of view. It is shown that the slope measurement sampling geometry influences the model selection for the phase estimation. Successive over-relaxation (SOR) is employed to numerically solve the exact zonal phase estimation problem. A new zonal phase gradient model is introduced and its error propagator, which relates the mean-square wave-front error to the noisy slope measurements, has been compared with two previously used models. A technique for the rapid extraction of phase aperture functions is presented. Error propagation properties for modal estimation are evaluated and compared with zonal estimation results."
],
"cite_N": [
"@cite_13",
"@cite_11"
],
"mid": [
"2138127543",
"2003292041"
]
}
|
Shape reconstruction from gradient data
|
In industrial inspection, there is an ever-growing demand for highly accurate, non-destructive measurements of three-dimensional object geometries. A variety of optical sensors have been developed to meet these demands [1]. These sensors satisfy the requirements at least partially. Numerous applications, however, still wait for a capable metrology. The limitations of those sensors emerge from physics and technology: the physical limits are determined by the wave equation and by coherent noise, while the technological limits are mainly due to the space-time-bandwidth product of electronic cameras.
Closer consideration reveals that the technological limits are basically of an information-theoretical nature. The majority of the available optical 3D sensors need large amounts of raw data in order to obtain the shape. A lot of redundant information is acquired and the expensive channel capacity of the sensors is used inefficiently [2]. A major source of redundancy is the shape of the object itself: If the object surface z(x, y) is almost planar, there is similar height information at each pixel. In terms of information theory the surface points of such objects are "correlated"; their power spectral density $\Phi_z$ decreases rapidly. In order to remove redundancy, one can apply spatial differentiation to whiten the power spectral density (see Fig. 1). Fortunately, there are optical systems that perform such spatial differentiation. Indeed, sensors that acquire just the local slope instead of absolute height values are much more efficient in terms of exploiting the available channel capacity. Further, reconstructing the object height from slope data reduces the high-frequency noise since integration acts as a low-pass filter.
There are several sensor principles that acquire the local slope: For rough surfaces, it is mainly the principle of Shape from Shading [3]. For specular surfaces, there are differentiating sensor principles like the differential interference contrast microscopy or deflectometry [4]. Deflectometric scanning methods allow an extremely precise characterization of optical surfaces by measuring slope variations as small as 0.02 arcsec [5]. Full-field deflectometric sensors acquire the two-dimensional local gradient of a (specular) surface. Using "Phase-measuring Deflectometry" (PMD) [6][7][8][9], for example, one can measure the local gradient of an object at one million sample points within a few seconds. The repeatability of the sensor described in [9] is below 10 arcsec with an absolute error less than 100 arcsec, on a measurement field of 80 mm × 80 mm and a sampling distance of 0.1 mm.
In several cases it is sufficient to know the local gradient or the local curvature; however, most applications demand the height information as well. As an example we consider eyeglass lenses. In order to calculate the local surface power of an eyeglass lens by numerical differentiation, we only need the surface slope and the lateral sampling distance. But for quality assurance in an industrial setup, it is necessary to adjust the production machines according to the measured shape deviation. This requires height information of the surface. Another application is the measurement of precision optics. For the optimization of these systems sensors are used to measure the local gradient of wavefronts [10]. To obtain the shape of these wavefronts, a numerical shape reconstruction method is needed.
1.A. Why is integration of 2D gradient data difficult?
In the previous section we stated that measuring the gradient instead of the object height is more efficient from an information-theoretical point of view, since redundant information is largely reduced. Using numerical integration techniques, the shape of the object can be reconstructed locally with high accuracy. For example, a full-field deflectometric sensor allows the detection of local height variations as small as a few nanometers.
However, if we want to reconstruct the global shape of the object, low-frequency information is essential. Acquiring solely the slope of the object reduces the low-frequency information substantially (see Fig. 1). In other words, we have a lot of local information while lacking global information, because we reduced the latter by optical differentiation. As a consequence, small measuring errors in the low-frequency range will have a strong effect on the overall reconstructed surface shape. This makes the reconstruction of the global shape a difficult task.
Furthermore, one-dimensional integration techniques cannot be easily extended to the two-dimensional case. In this case, one has to choose a path of integration. Unfortunately, noisy data leads to different integration results depending on the path [11]. Therefore, requiring the integration to be path independent becomes an important condition ("integrability condition") for developing an optimal reconstruction algorithm (see Sections 2 and 4.C).
Problem formulation
We consider an object surface to be a twice continuously differentiable function z : Ω → R on some compact, simply connected region Ω ⊂ R 2 . The integrability condition implies that the gradient field ∇z = (z x , z y ) T is curl free, i. e. every path integral between two points yields the same value. This is equivalent to the requirement that there exists a potential z to the gradient field ∇z which is unique up to a constant. Most object surfaces measurable by deflectometric sensors fulfill these requirements, or at least they can be decomposed into simple surface patches showing these properties.
Measuring the gradient ∇z at each sample point x i = (x i , y i ) T yields a discrete vector field (p(x i ), q(x i )) T , i = 1 . . . N . These measured gradient values usually are contaminated by noise-the vector field is not necessarily curl free. Hence, there might not exist a potential z such that ∇z(x i ) = (p(x i ), q(x i )) T for all i. In that case, we seek a least-squares approximation, i. e. a surface representation z such that the following error functional is minimized [3,12,13]:
$$J(z) := \sum_{i=1}^{N} \left[ z_x(x_i) - p(x_i) \right]^2 + \left[ z_y(x_i) - q(x_i) \right]^2. \qquad (1)$$
Shape reconstruction
4.A. Challenges
The desired surface reconstruction method should have the properties of both local and global integration methods: It needs to preserve local details without propagating the error along a certain path. It also needs to minimize the error functional of Eq. (1), hence yielding a globally optimal solution in a least-squares sense. Further, the method should be able to deal with irregularly shaped boundaries, missing data points, and it has to be able to reconstruct surfaces of a large variety of objects with steep slopes and high curvature values. It should also be able to handle large data sets which may consist of some million sample points. We now show how to meet these challenges using an analytic interpolation approach.
4.B. Analytic reconstruction
A low noise level allows interpolation of the slope values instead of approximation. Interpolation is a special case which has the great advantage that we can ensure that small height variations are preserved. In this paper we will only focus on the interpolation approach as analytic reconstruction method. For other measurement principles like Shape from Shading, an approximation approach might be more appropriate.
The basic idea of the integration method is as follows: We seek an analytic interpolation function such that its gradient interpolates the measured gradient data. Once this interpolation is determined, it uniquely defines the surface reconstruction up to an integration constant. To obtain the analytic interpolant, we choose a generalized Hermite interpolation approach employing radial basis functions (RBFs) [18,19]. This method has the advantage that it can be applied to scattered data. It allows us to integrate data sets with holes, irregular sampling grids, or irregularly shaped boundaries. Furthermore, this method allows for an optimal surface recovery in the sense of Eq. (1) (see Section 4.C below).
In more detail: Assuming that the object surface fulfills the requirements described in Section 2, the data is given as pairs (p(x j ), q(x j )) T , where p(x j ) and q(x j ) are the measured slopes of the object at x j in x-and y-direction, respectively, for 1 ≤ j ≤ N . We define the interpolant to be
$$s(x) = \sum_{i=1}^{N} \alpha_i\, \Phi_x(x - x_i) + \sum_{i=1}^{N} \beta_i\, \Phi_y(x - x_i), \qquad (3)$$
where α i and β i , for 1 ≤ i ≤ N , are coefficients and Φ : R 2 → R is a radial basis function. Hereby, Φ x and Φ y denote the analytic derivative of Φ with respect to x and y, respectively. This interpolant is specifically tailored for gradient data [9]. To obtain the coefficients in Eq. (3) we match the analytic derivatives of the interpolant with the measured derivatives:
$$s_x(x_j) \stackrel{!}{=} p(x_j), \qquad s_y(x_j) \stackrel{!}{=} q(x_j), \qquad \text{for } 1 \le j \le N. \qquad (4)$$
This leads to solving the following system of linear equations [20]:
$$\underbrace{\begin{pmatrix} \Phi_{xx}(x_i - x_j) & \Phi_{xy}(x_i - x_j) \\ \Phi_{xy}(x_i - x_j) & \Phi_{yy}(x_i - x_j) \end{pmatrix}}_{A\, \in\, M^{2N \times 2N}} \underbrace{\begin{pmatrix} \alpha_i \\ \beta_i \end{pmatrix}}_{\alpha\, \in\, M^{2N \times 1}} = \underbrace{\begin{pmatrix} p(x_j) \\ q(x_j) \end{pmatrix}}_{d\, \in\, M^{2N \times 1}}. \qquad (5)$$
Using the resulting coefficients α i , β i we then can apply the interpolant in Eq. (3) to reconstruct the object surface. For higher noise levels an approximation approach is recommended.
In this case, we simply reduce the number of basis functions so that they do not match the number of data points any more. The system A α = d in Eq. (5) then becomes overdetermined and can be solved in a least-squares sense.
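A minimal numerical sketch of this interpolation step is given below. It uses a Gaussian RBF as a stand-in for the compactly supported Wendland function discussed in the next subsection (the Gaussian keeps the second derivatives short), and the function names are ours, not the authors'.

import numpy as np

def gaussian_second_derivs(dx, dy, c):
    # Second derivatives of Phi(x, y) = exp(-r^2 / (2 c^2)) needed in Eq. (5).
    g = np.exp(-(dx**2 + dy**2) / (2 * c**2))
    pxx = (dx**2 / c**4 - 1 / c**2) * g
    pyy = (dy**2 / c**4 - 1 / c**2) * g
    pxy = (dx * dy / c**4) * g
    return pxx, pxy, pyy

def fit_gradient_rbf(pts, p, q, c=1.0):
    # pts: (N, 2) sample points; p, q: measured slopes. Solves Eq. (5).
    dx = pts[:, 0][:, None] - pts[:, 0][None, :]
    dy = pts[:, 1][:, None] - pts[:, 1][None, :]
    pxx, pxy, pyy = gaussian_second_derivs(dx, dy, c)
    A = np.block([[pxx, pxy], [pxy, pyy]])            # (2N, 2N) interpolation matrix
    coeffs = np.linalg.solve(A, np.concatenate([p, q]))
    N = len(pts)
    return coeffs[:N], coeffs[N:]                     # alpha_i, beta_i

def evaluate_surface(pts, alpha, beta, c, x):
    # Eq. (3): surface value at x (the overall height offset remains arbitrary).
    dx, dy = x[0] - pts[:, 0], x[1] - pts[:, 1]
    g = np.exp(-(dx**2 + dy**2) / (2 * c**2))
    return np.sum(alpha * (-dx / c**2) * g + beta * (-dy / c**2) * g)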
4.C. Optimal recovery
The interpolation approach employing radial basis functions has the advantage that it yields a unique solution to the surface recovery problem: Within this setup, the interpolation matrix in Eq. (5) is always symmetric and positive definite. Further, the solution satisfies a minimization principle in the sense that the resulting analytic surface function has minimal energy [21]. We choose Φ to be a Wendland's function [22], Φ(x) =: φ(r), with
$$\phi(r) = \tfrac{1}{3}\,(1 - r)_+^6\,\left(35 r^2 + 18 r + 3\right) \in C^4(\mathbb{R}^+) \quad \text{and} \quad r := \sqrt{x^2 + y^2}.$$
This has two reasons: First, Wendland's functions allow to choose their continuity according to the smoothness of the given data. The above Wendland's function leads to an interpolant which is three-times continuously differentiable, hence guaranteeing the integrability condition. Second, the compact support of the function allows to adjust the support size in such a way that the solution of Eq. (5) is stable in the presence of noise. It turns out that the support size has to be chosen rather large in order to guarantee a good surface reconstruction [9].
4.D. Handling large data sets
The amount of data commonly acquired with a PMD sensor in a single measurement is rather large: it consists of about one million sample points. This amount of data, which results from a measurement with high lateral resolution, would require the inversion of a matrix with (2 × 10 6 ) 2 entries (Eq. (5)). Since we choose a large support size for our basis functions to obtain good numerical stability the corresponding matrix is not sparse. It is obvious that this large amount of data cannot be handled directly by inexpensive computing equipment in reasonable time.
To cope with such large data sets we developed a method which first splits the data field into a set of overlapping rectangular patches. We interpolate the data on each patch separately. If the given data were height information only, this approach would yield the complete surface reconstruction. For slope data, we interpolate the data and obtain a surface reconstruction up to a constant of integration (see Fig. 3(a)) on each patch. In order to determine the missing information we apply the following fitting scheme: Let us denote two adjacent patches as Ω 1 and Ω 2 and the resulting interpolants as s 1 and s 2 , respectively. Since the constant of integration is still unknown the two interpolants might be on different height levels. Generally, we seek a correcting function f 2 : Ω 2 → R by minimizing
$$K(f_2) := \sum_{x \in \Omega_1 \cap \Omega_2} \left| s_1(x) - s_2(x) - f_2(x) \right|^2. \qquad (7)$$
This fitting scheme is then propagated from the center toward the borders of the data field to obtain the reconstructed surface on the entire field (see Fig. 3(b)).
In the simplest case, the functions f i are chosen to be constant on each patch, representing the missing constant of integration. If the systematic error of the measured data is small, the constant fit method is appropriate since it basically yields no error propagation. For very noisy data sets it might be better to use a planar fit, i. e. f i (x) = a i x + b i y + c i , to avoid discontinuities at the patch boundaries. This modification, however, introduces a propagation of the error along the patches. The correction angle required on each patch to minimize Eq. (7) depends on the noise of the data. Numerical experiments have shown that in most cases the correction angle is at least ten times smaller than the noise level of the measured data.
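The constant-offset variant of this stitching step can be sketched as follows (a simplified illustration assuming both patches are sampled on a common grid, with NaN outside their support; the center-outward propagation order is not shown).

import numpy as np

def align_constant(ref_patch, new_patch):
    # Shift new_patch by the constant that minimizes Eq. (7) over the overlap,
    # i.e. the mean height difference on the overlapping pixels.
    overlap = ~np.isnan(ref_patch) & ~np.isnan(new_patch)
    offset = np.mean(ref_patch[overlap] - new_patch[overlap])
    return new_patch + offset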
Using this information we can estimate the global height error which, by error propagation, may sum up toward the borders of the measuring field [23]:
$$\Delta z_{\text{global}} \approx \tan(\sigma_\alpha)\, \Delta x\, \sqrt{M}, \qquad (8)$$
where σ α is the standard deviation of the correction angles, ∆x is the patch size, and M is the number of patches. Suppose we want to integrate over a field of 80 mm (which corresponds to a typical eyeglass diameter), assuming a realistic noise level of 8 arcsec and a patch size (not including its overlaps) of 3 mm. With our setup, this results in 27 × 27 patches, with a maximal tipping of σ α ≈ 0.6 arcsec per patch. According to Eq. (8), the resulting global error caused by propagation of the correction angles is only 45 nm.
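The quoted 45 nm figure can be reproduced in a few lines (assuming M counts the patches traversed along one direction of the field, which is how the stated numbers work out):

import numpy as np

sigma_alpha = np.deg2rad(0.6 / 3600)     # 0.6 arcsec in radians
dx, M = 3.0e-3, 27                       # 3 mm patch size, 27 patches across the field
dz_global = np.tan(sigma_alpha) * dx * np.sqrt(M)
print(dz_global)                         # ~4.5e-8 m, i.e. about 45 nm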
We choose the size of the patches as big as possible, provided that a single patch can still be handled efficiently. For the patch size in the example, 23 × 23 points (including 25% overlap) correspond to a 1058 × 1058 interpolation matrix that can be inverted quickly using standard numerical methods like Cholesky decomposition.
A final remark concerning the runtime complexity of the method described above: The complexity can be further reduced if the sampling grid is regular. Since the patches all have the same size and the matrix entries in Eq. (5) only depend on the distances between sample points, the matrix can be inverted once for all patches and then applied to varying data on different patches, as long as the particular data subset is complete. Note that Eq. (3) can be written as s = Bα, where B is the evaluation matrix. Then, by applying Eq. (5) we obtain s = BA^{-1} d, where the matrix BA^{-1} needs to be calculated only once for all complete patches. If samples are missing, however, the interpolation yields different coefficients α and hence forces us to recompute BA^{-1} for this particular patch. Using these techniques, the reconstruction of 1000 × 1000 surface values from their gradients takes about 5 minutes on a current personal computer.
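For a regular grid, the reuse of the inverted matrix can be sketched as follows, continuing the hypothetical helper names from the sketch above; the operator R = B A^{-1} is built once per patch geometry and then applied to each complete slope-data vector d.

import numpy as np

def precompute_reconstruction_operator(pts, eval_pts, c=1.0):
    # Build R = B A^{-1} once; afterwards the surface on the patch is simply
    # R @ d for every new (complete) slope-data vector d.
    dx = pts[:, 0][:, None] - pts[:, 0][None, :]
    dy = pts[:, 1][:, None] - pts[:, 1][None, :]
    pxx, pxy, pyy = gaussian_second_derivs(dx, dy, c)
    A = np.block([[pxx, pxy], [pxy, pyy]])
    ex = eval_pts[:, 0][:, None] - pts[:, 0][None, :]
    ey = eval_pts[:, 1][:, None] - pts[:, 1][None, :]
    g = np.exp(-(ex**2 + ey**2) / (2 * c**2))
    B = np.hstack([-ex / c**2 * g, -ey / c**2 * g])   # evaluation matrix of Eq. (3)
    return B @ np.linalg.inv(A)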
Results
First, we investigated the stability of our method with respect to noise. We simulated realistic gradient data of a sphere (with 80 mm radius, 80 mm × 80 mm field with sampling distance 0.2 mm, see Figure 4(a)) and added uniformly distributed noise of different levels, ranging from 0.05 to 400 arcsec. We reconstructed the surface of the sphere using the interpolation method described in Section 4. Hereby, we aligned the patches by only adding a constant to each patch. The reconstruction was performed for 12 statistically independent slope data sets for each noise level.
Depicted in Figure 4(b) is a cross-section of the absolute error of the surface reconstruction from the ideal sphere, for a realistic noise level of 8 arcsec. The absolute error is less than ±15 nm on the entire measurement field. The local height error corresponding to this noise level is about ±5 nm. This demonstrates that the dynamic range of the global absolute error with respect to the height range (25 mm) of the considered sphere is about 1 : 10^6.
The graph in Figure 4(c) depicts the mean value and the standard deviation (black error bars) of the absolute error of the reconstruction, for 12 different data sets and for different noise levels. It demonstrates that for increasing noise level the absolute error grows only linearly (linear fit depicted in gray), and even for a noise level fifty times the typical sensor noise the global absolute error remains in the sub-micrometer regime. This result implies that the reconstruction error is smaller than most technical applications require.
Another common task in quality assurance is the detection and quantification of surface defects like scratches or grooves. We therefore tested our method for its ability to reconstruct such local defects that may be in a range of only a few nanometers. For this purpose, we considered data from a PMD sensor for small, specular objects. The sensor has a resolvable distance of 30 µm laterally and a local angular uncertainty of about 12 arcsec [8]. In order to quantify the deviation of the perfect shape, we again simulated a sphere (this time with 12 mm radius and 5.7 mm × 5.7 mm data field size). We added parallel, straight grooves of varying depths from 1 to 100 nm and of 180 µm width and reconstructed the surface from the modified gradients. The perfect sphere was then subtracted from the reconstructed surface. The resulting reconstructed grooves are depicted in Figure 5(a). The grooves ranging from 100 down to 5 nm depth are clearly distinguishable from the plane. Figure 5(b) shows that all reconstructed depths agree fairly well with the actual depths. Note that each groove is determined by only 5 inner sample points. The simulation results demonstrate that our method is almost free of error propagation while preserving small, local details of only some nanometers height.
So far, we used only simulated data to test the reconstruction. Now, we want to demonstrate the application of our method to a real measurement. The measurement was performed with a Phase-measuring Deflectometry sensor for very small, specular objects. It can laterally resolve object points with a distance of 75 nm, while having a local angular uncertainty of about 200 arcsec. The object under test is a part of a wafer with about 350 nm height range. The size of the measurement field was 100 µm × 80 µm. Depicted in Figure 2 is the reconstructed object surface from roughly three million data values. Both the global shape and local details could be reconstructed with high precision.
Conclusion
We motivated why the employment of optical slope-measuring sensors can be advantageous. We gave a brief overview of existing sensor principles. The question that arose next was how to reconstruct the surface from its slope data. We presented a method based on radial basis functions which enables us to reconstruct the object surface from noisy gradient data. The method can handle large data sets consisting of some million entries. Furthermore, the data does not need to be acquired on a regular grid-it can be arbitrarily scattered and it can contain data holes. We demonstrated that, while accurately reconstructing the object's global shape, which may have a height range of some millimeters, the method preserves local height variations on a nanometer scale.
A remaining challenge is to improve the runtime complexity of the algorithm in order to be able to employ it for inline quality assurance in a production process.
Figure 5. Reconstruction of grooves on a spherical surface from simulated slope data, for realistic noise. The nominal height of the grooves ranges from 100 nm down to 1 nm. After the reconstruction, the sphere was subtracted to make the grooves visible. The reconstructed grooves are depicted in (a) full-field and in (b) cross-section.
| 3,415 |
0708.3157
|
2166577116
|
This paper proves two main results. First, it is shown that if Σ is a smooth manifold homeomorphic to the standard n-torus T^n = R^n/Z^n and H is a real-analytically completely integrable convex hamiltonian on T^*Σ, then Σ is diffeomorphic to T^n. Second, it is proven that for some topological 7-manifolds, the cotangent bundle of each smooth structure admits a real-analytically completely integrable riemannian metric hamiltonian.
|
Taimanov @cite_27 has proven that if a compact manifold @math admits a real-analytically completely integrable geodesic flow, then @math is almost abelian of rank at most @math ; @math ; and there is an injection @math where @math . These constraints are ineffective for exotic tori.
|
{
"abstract": [
"In this paper, (Liouville) integrability of geodesic flows on non-simply-connected manifolds is studied. In particular, the following result is obtained: A geodesic flow on a real-analytic Riemannian manifold cannot be integrable in terms of analytic functions if either 1) the fundamental group of the manifold contains no commutative subgroup of finite index, or 2) the first Betti number of the manifold over the field of rational numbers is greater than the dimension (the manifold is assumed to be closed). Bibliography: 11 titles."
],
"cite_N": [
"@cite_27"
],
"mid": [
"2133347489"
]
}
|
THE MASLOV COCYCLE, SMOOTH STRUCTURES AND REAL-ANALYTIC COMPLETE INTEGRABILITY
| 0 |
|
0708.3157
|
2166577116
|
This paper proves two main results. First, it is shown that if Σ is a smooth manifold homeomorphic to the standard n-torus T^n = R^n/Z^n and H is a real-analytically completely integrable convex hamiltonian on T^*Σ, then Σ is diffeomorphic to T^n. Second, it is proven that for some topological 7-manifolds, the cotangent bundle of each smooth structure admits a real-analytically completely integrable riemannian metric hamiltonian.
|
@cite_1 , Rudnev and Ten assume that a geodesic flow is completely integrable with a non-degenerate first-integral map on an @math -dimensional compact manifold with first Betti number equal to @math . Non-degeneracy means, amongst other things, that the singular set is stratified by the rank of the first integral map and each stratum is a symplectic submanifold on which the system is completely integrable. From these hypotheses, they deduce that there is a lagrangian torus @math such that the natural map @math (figure ) is a homeomorphism . Theorem 2 of @cite_1 states that @math is a diffeomorphism, but this is mistaken. It is shown only that @math is a @math smooth map, hence by invariance of domain, a homeomorphism. To prove that @math is a diffeomorphism one must prove that the Maslov cocycle of @math vanishes, or something equivalent. This is the first difficulty in proving theorem .
|
{
"abstract": [
"We establish a generic sufficient condition for a compact n-dimensional manifold bearing an integrable geodesic flow to be the n-torus. As a complementary result, we show that in the case of domains of possible motions with boundary, the first Betti number of the domain of possible motions may be arbitrarily large."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2061464622"
]
}
|
THE MASLOV COCYCLE, SMOOTH STRUCTURES AND REAL-ANALYTIC COMPLETE INTEGRABILITY
| 0 |
|
0704.0967
|
2167988857
|
MIMO technology is one of the most significant advances in the past decade to increase channel capacity and has a great potential to improve network capacity for mesh networks. In a MIMO-based mesh network, the links outgoing from each node sharing the common communication spectrum can be modeled as a Gaussian vector broadcast channel. Recently, researchers showed that "dirty paper coding" (DPC) is the optimal transmission strategy for Gaussian vector broadcast channels. So far, there has been little study on how this fundamental result will impact the cross-layer design for MIMO-based mesh networks. To fill this gap, we consider the problem of jointly optimizing DPC power allocation in the link layer at each node and multihop multipath routing in a MIMO-based mesh network. It turns out that this optimization problem is a very challenging non-convex problem. To address this difficulty, we transform the original problem to an equivalent problem by exploiting the channel duality. For the transformed problem, we develop an efficient solution procedure that integrates the Lagrangian dual decomposition method, a conjugate gradient projection method based on matrix differential calculus, the cutting-plane method, and the subgradient method. In our numerical example, it is shown that we can achieve a network performance gain of 34.4% by using DPC.
|
Despite significant research progress in using MIMO for single-user communications, research on multi-user multi-hop MIMO networks is still in its inception stage. There are many open problems, and many areas are still poorly understood @cite_1 . Currently, the relatively well-studied area of multi-user MIMO systems is cellular systems, which are single-hop and infrastructure-based. For multi-hop MIMO-based mesh networks, research results remain limited. In @cite_17 , Hu and Zhang studied the problem of joint medium access control and routing, with a consideration of optimal hop distance to minimize end-to-end delay. In @cite_6 , Sundaresan and Sivakumar used simulations to study various characteristics and tradeoffs (multiplexing gain vs. diversity gain) of MIMO links that can be leveraged by routing-layer protocols in rich multipath environments to improve performance. In @cite_12 , the authors proposed a distributed algorithm for MIMO-based multi-hop ad hoc networks, in which diversity and multiplexing gains of each link are controlled to achieve the optimal rate-reliability tradeoff. The optimization problem assumes fixed SINRs and fixed routes between source and destination nodes. However, in these works, there is no explicit consideration of per-antenna power allocation and its impact on upper layers. Moreover, DPC has never been studied in cross-layer design either.
|
{
"abstract": [
"We provide an overview of the extensive results on the Shannon capacity of single-user and multiuser multiple-input multiple-output (MIMO) channels. Although enormous capacity gains have been predicted for such channels, these predictions are based on somewhat unrealistic assumptions about the underlying time-varying channel model and how well it can be tracked at the receiver, as well as at the transmitter. More realistic assumptions can dramatically impact the potential capacity gains of MIMO techniques. For time-varying MIMO channels there are multiple Shannon theoretic capacity definitions and, for each definition, different correlation models and channel information assumptions that we consider. We first provide a comprehensive summary of ergodic and capacity versus outage results for single-user MIMO channels. These results indicate that the capacity gain obtained from multiple antennas heavily depends on the available channel information at either the receiver or transmitter, the channel signal-to-noise ratio, and the correlation between the channel gains on each antenna element. We then focus attention on the capacity region of the multiple-access channels (MACs) and the largest known achievable rate region for the broadcast channel. In contrast to single-user MIMO channels, capacity results for these multiuser MIMO channels are quite difficult to obtain, even for constant channels. We summarize results for the MIMO broadcast and MAC for channels that are either constant or fading with perfect instantaneous knowledge of the antenna gains at both transmitter(s) and receiver(s). We show that the capacity region of the MIMO multiple access and the largest known achievable rate region (called the dirty-paper region) for the MIMO broadcast channel are intimately related via a duality transformation. This transformation facilitates finding the transmission strategies that achieve a point on the boundary of the MIMO MAC capacity region in terms of the transmission strategies of the MIMO broadcast dirty-paper region and vice-versa. Finally, we discuss capacity results for multicell MIMO channels with base station cooperation. The base stations then act as a spatially diverse antenna array and transmission strategies that exploit this structure exhibit significant capacity gains. This section also provides a brief discussion of system level issues associated with MIMO cellular. Open problems in this field abound and are discussed throughout the paper.",
"The current framework of network utility maximization for rate allocation and its price-based algorithms assumes that each link provides a fixed-size transmission \"pipe\" and each user's utility is a function of transmission rate only. These assumptions break down in many practical systems, where, by adapting the physical layer channel coding or transmission diversity, different tradeoffs between rate and reliability can be achieved. In network utility maximization problems formulated in this paper, the utility for each user depends on both transmission rate and signal quality, with an intrinsic tradeoff between the two. Each link may also provide a higher (or lower) rate on the transmission \"pipes\" by allowing a higher (or lower) decoding error probability. Despite nonseparability and nonconvexity of these optimization problems, we propose new price-based distributed algorithms and prove their convergence to the globally optimal rate-reliability tradeoff under readily-verifiable sufficient conditions. We first consider networks in which the rate-reliability tradeoff is controlled by adapting channel code rates in each link's physical-layer error correction codes, and propose two distributed algorithms based on pricing, which respectively implement the \"integrated\" and \"differentiated\" policies of dynamic rate-reliability adjustment. In contrast to the classical price-based rate control algorithms, in our algorithms, each user provides an offered price for its own reliability to the network, while the network provides congestion prices to users. The proposed algorithms converge to a tradeoff point between rate and reliability, which we prove to be a globally optimal one for channel codes with sufficiently large coding length and utilities whose curvatures are sufficiently negative. Under these conditions, the proposed algorithms can thus generate the Pareto optimal tradeoff curves between rate and reliability for all the users. In addition, the distributed algorithms and convergence proofs are extended for wireless multiple-inpit-multiple-output multihop networks, in which diversity and multiplexing gains of each link are controlled to achieve the optimal rate-reliability tradeoff. Numerical examples confirm that there can be significant enhancement of the network utility by distributively trading-off rate and reliability, even when only some of the links can implement dynamic reliability.",
"Smart antennas include a broad variety of antenna technologies ranging from the simple switched beams to the sophisticated digital adaptive arrays. While beam-forming antennas are good candidates for use in strong line of sight (LOS) environments, it is the multiple input multiple output (MIMO) technology that is best suited for multipath environments. In fact, the MIMO links exploit the multipath induced rich scattering to provide high spectral efficiencies. The focus of this work is to identify the various characteristics and tradeoffs of MIMO links that can be leveraged by routing layer protocols in rich multipath environments to improve their performance. To this end, we propose a routing protocol called MIR for ad-hoc networks with MIMO links, that leverages the various characteristics of MIMO links in its mechanisms to improve the network performance. We show the effectiveness of the proposed protocol by evaluating its performance through ns2 simulations for a variety of network conditions.",
"In this paper, we explore the utility of recently discovered multiple-antenna techniques (namely MIMO techniques) for medium access control (MAC) design and routing in mobile ad hoc networks. Specifically, we focus on ad hoc networks where the spatial diversity technique is used to combat fading and achieve robustness in the presence of user mobility. We first examine the impact of spatial diversity on the MAC design, and devise a MIMO MAC protocol accordingly. We then develop analytical methods to characterize the corresponding saturation throughput for MIMO multi-hop networks. Building on the throughout analysis, we study the impact of MIMO MAC on routing. We characterize the optimal hop distance that minimizes the end-to-end delay in a large network. For completeness, we also study MAC design using directional antennas for the case where the channel has a strong line of sight (LOS) component. Our results show that the spatial diversity technique and the directional antenna technique can enhance the performance of mobile ad hoc networks significantly."
],
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_6",
"@cite_17"
],
"mid": [
"2165943096",
"2064545653",
"2108658592",
"1976640996"
]
}
|
Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian Vector Broadcast Channels
| 0 |
|
cs0702032
|
1664024785
|
We consider two optimization problems related to finding dense subgraphs. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of highest average degree among all subgraphs with at least k vertices, and the densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems are related to the well-known densest k-subgraph problem (DkS), which is to find the densest subgraph on exactly k vertices. We show that DalkS can be approximated efficiently, while DamkS is nearly as hard to approximate as the densest k-subgraph problem.
|
We will briefly survey a few results on the complexity of the densest @math -subgraph problem. The best approximation algorithm known for the general problem (when @math is specified as part of the input) is the algorithm of Feige, Peleg, and Kortsarz @cite_7 , which has ratio @math for some @math . For any particular value of @math , the greedy algorithm of @cite_10 gives the ratio @math . Algorithms based on linear programming and semidefinite programming have produced approximation ratios better than @math for certain values of @math , but have not improved the approximation ratio of @math for the general case @cite_9 @cite_1 .
|
{
"abstract": [
"Given a graph G=(V,E), a weight function w: E?R+, and a parameter k, we consider the problem of finding a subset U?V of size k that maximizes:Max-Vertex Coverk: the weight of edges incident with vertices in U,Max-Dense Subgraphk: the weight of edges in the subgraph induced by U,Max-Cutk: the weight of edges cut by the partition (U,V ),Max-Uncutk: the weight of edges not cut by the partition (U,V ).For each of the above problems we present approximation algorithms based on semidefinite programming and obtain approximation ratios better than those previously published. In particular we show that if a graph has a vertex cover of size k, then one can select in polynomial time a set of k vertices that covers over 80 of the edges.",
"Given an n-vertex graph G and a parameter k, we are to find a k-vertex subgraph with the maximum number of edges. This problem is NP-hard. We show that the problem remains NP-hard even when the maximum degree in G is three. When G contains a k-clique, we give an algorithm that for any e sub sub<) e). We study the applicability of semidefinite programming for approximating the dense k-subgraph problem. Our main result in this respect is negative, showing that for k @ n1 3, semidefinite programs fail to distinguish between graphs that contain k-cliques and graphs in which the densest k-vertex subgraph has average degree below logn.",
"Given an n-vertex graph with nonnegative edge weights and a positive integer k?n, our goal is to find a k-vertex subgraph with the maximum weight. We study the following greedy algorithm for this problem: repeatedly remove a vertex with the minimum weighted-degree in the currently remaining graph, until exactly k vertices are left. We derive tight bounds on the worst case approximation ratio R of this greedy algorithm: (1 2+n 2k)2?O(n?1 3)?R?(1 2+n 2k)2+O(1 n) for k in the range n 3?k?n and 2(n k?1)?O(1 k)?R?2(n k?1)+O(n k2) for k",
"This paper considers the problem of computing the dense k -vertex subgraph of a given graph, namely, the subgraph with the most edges. An approximation algorithm is developed for the problem, with approximation ratio O(n δ ) , for some δ < 1 3 ."
],
"cite_N": [
"@cite_1",
"@cite_9",
"@cite_10",
"@cite_7"
],
"mid": [
"2034543148",
"2010787744",
"2032279394",
"2036836182"
]
}
|
Finding large and small dense subgraphs
|
The density of an induced subgraph is the total weight of its edges divided by the size of its vertex set, or half its average degree. The problem of finding the densest subgraph of a given graph, and various related problems, have been studied extensively. In the past decade, identifying subgraphs with high density has become an important task in the analysis of large networks [14,10].
There are a variety of efficient algorithms for finding the densest subgraph of a given graph. The densest subgraph can be identified in polynomial time by solving a maximum flow problem [11,9]. Charikar [5] gave a greedy algorithm that produces a 2-approximation of the densest subgraph in linear time. Kannan and Vinay [12] gave a spectral approximation algorithm for a related notion of density. Both of these approximation algorithms are fast enough to run on extremely large graphs.
In contrast, no practical algorithms are known for finding the densest subgraph on exactly k vertices. If k is specified as part of the input, and is allowed to vary with the graph size n, the best polynomial time algorithm known has approximation ratio n^δ, where δ is slightly less than 1/3. This algorithm is due to Feige, Peleg, and Kortsarz [7]. The densest k-subgraph problem is known to be NP-complete, but there is a large gap between this approximation ratio and the strongest known hardness result.
In many of the graphs we would like to analyze (for example, graphs arising from sponsored search auctions, or from links between blogs), the densest subgraph is extremely small relative to the size of the graph. When this is the case, we would like to find a subgraph that is both large and dense, without solving the seemingly intractable densest k-subgraph problem. To address this concern, we introduce the densest at-least-k-subgraph problem, which is to find the densest subgraph on at least k vertices.
In this paper, we show that the densest at-least-k-subgraph problem can be solved nearly as efficiently as the densest subgraph problem. In fact, we show it can be solved by a careful application of the same techniques. We give a greedy 3-approximation algorithm for DalkS that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. This algorithm is an extension of Charikar's algorithm for the densest subgraph problem. We also give a 2-approximation algorithm for DalkS that runs in polynomial time, and can be computed by solving a single parametric flow problem. This is an extension of the algorithm of Gallo, Grigoriadis, and Tarjan [9] for the densest subgraph problem.
We also show that finding a dense subgraph with at most k vertices is nearly as hard as finding the densest subgraph with exactly k vertices. In particular, we prove that a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ 2 + γ)-approximation algorithm for the densest k-subgraph problem. More generally, if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there is a polynomial time approximation algorithm for DkS with ratio 4(γ 2 + γβ).
Our algorithms for DalkS can find subgraphs with nearly optimal density in extremely large graphs, while providing considerable control over the sizes of those subgraphs. Our reduction of DkS to DamkS gives additional insight into when DkS is hard, and suggests a possible approach for improving the approximation ratio for DkS.
The paper is organized as follows. We first consider the DalkS problem, presenting the greedy 3-approximation in Section 3, and the polynomial time 2-approximation in Section 4. We consider the DamkS problem in Section 5. In Section 6, we discuss the possibility of finding a good approximation algorithm for DamkS.
Definitions
Let G = (V, E) be an undirected graph with a weight function w : E → R+ which assigns a positive weight to each edge. The weighted degree w(v, G) is the sum of the weights of the edges incident with v. The total weight W(G) is the sum of the weights of the edges in G. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph on at least k vertices achieving density dal(G, k). Similarly, the densest at-most-k-subgraph problem (DamkS) is to find an induced subgraph on at most k vertices achieving density dam(G, k). The densest k-subgraph problem (DkS) is to find an induced subgraph on exactly k vertices achieving dex(G, k), and the densest subgraph problem is to find an induced subgraph of any size achieving dmax(G).
We now define formally what it means to be an approximation algorithm for DalkS. Approximation algorithms for DamkS, DkS, and the densest subgraph problem are defined similarly.
The densest at-least-k-subgraph problem
In this section, we give a 3-approximation algorithm for the densest at-least-k-subgraph problem that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. The algorithm is a simple extension of Charikar's greedy algorithm for the densest subgraph problem. To analyze the algorithm, we relate the density of a graph to the size of its w-cores, which are subgraphs with minimum weighted degree at least w.
ChALK(G, k):
Input: a graph G with n vertices, and an integer k.
Output: an induced subgraph of G with at least k vertices.
1. Let H_n = G and repeat the following step for i = n, . . . , 1:
(a) Let r_i be the minimum weighted degree of any vertex in H_i.
(b) Let v_i be a vertex where w(v_i, H_i) = r_i.
(c) Remove v_i from H_i to form the induced subgraph H_{i−1}.
2. Compute the density d(H_i) for each i ∈ [1, n].
3. Output the induced subgraph H_i maximizing max_{i≥k} d(H_i).
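The listing above is compact enough to implement directly. Below is a minimal Python sketch of ChALK for unweighted graphs; it uses a plain dictionary of adjacency sets and a linear scan for the minimum-degree vertex, so it does not attain the O(m + n log n) or O(m) running times quoted above (a heap or bucket queue would be needed for that). The function name chalk and the adjacency-dict interface are our own choices, not from the paper.

```python
def chalk(adj, k):
    """Greedy peeling sketch of ChALK for an unweighted, undirected graph.

    adj maps each vertex to the set of its neighbours; assumes len(adj) >= k >= 1.
    Returns (vertex set, density) of the densest intermediate subgraph H_i with
    at least k vertices.  This version favours clarity over speed.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    edges = sum(deg.values()) / 2.0          # W(H_n) for unit edge weights
    alive = set(adj)

    best_set, best_density = set(alive), edges / len(alive)
    while len(alive) > 1:
        v = min(alive, key=deg.__getitem__)  # minimum-degree vertex of H_i
        alive.remove(v)
        edges -= deg[v]                      # its incident edges disappear
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
        if len(alive) >= k:                  # only H_i with at least k vertices qualify
            d = edges / len(alive)
            if d > best_density:
                best_set, best_density = set(alive), d
    return best_set, best_density


# Example: a 4-clique with a pendant path attached; with k = 4 the clique
# (density 6/4 = 1.5) is returned.
g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4, 6}, 6: {5}}
print(chalk(g, 4))
```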
Theorem 1. ChALK(G, k) is a 3-approximation algorithm for the densest at-least-k-subgraph problem.
We will prove Theorem 1 in the following subsection. The implementation of step 1 described by Charikar (see [5]) gives us the following bound on the running time of ChALK.
Analysis of ChALK
The ChALK algorithm is easy to understand if we consider the relationship between induced subgraphs of G with high average degree (dense subgraphs) and induced subgraphs of G with high minimum degree (w-cores).
Definition 4. Given a graph G and a weight w ∈ R, the w-core C w (G) is the unique largest induced subgraph of G with minimum weighted degree at least w.
Here is an outline of how we will proceed. We first prove that the ChALK algorithm computes all the w-cores of G (Lemma 1). We then prove that for any induced subgraph H of G with density d, the (2d/3)-core of G has total weight at least W (H)/3 (Lemma 2). We will prove Theorem 1 using these two lemmas. Lemma 1. Let {H 1 , . . . , H n }, {v 1 , . . . , v n }, and {r 1 , . . . , r n } be the induced subgraphs, vertices, and weighted degrees determined by ChALK on the input graph G. For any w ∈ R, if I(w) is the largest index such that r(v I(w) ) ≥ w, then H I(w) = C w (G).
Proof. Fix a value of w. It is easy to prove by induction that none of the vertices v_n, . . . , v_{I(w)+1} that were removed before v_{I(w)} is contained in any induced subgraph with minimum degree at least w. That implies C_w(G) ⊆ H_{I(w)}. On the other hand, the minimum degree of H_{I(w)} is at least w, so H_{I(w)} ⊆ C_w(G). Therefore, H_{I(w)} = C_w(G).
Lemma 2.
For any graph G with total weight W and density d = W/|G|, the d-core of G is nonempty. Furthermore, for any α ∈ [0, 1], the total weight of the (αd)-core of G is strictly greater than (1 − α)W .
Proof. Let {H 1 , . . . , H n } be the induced subgraphs determined by ChALK on the input graph G. Fix a value of w, let I(w) be the largest index such that r(v I(w) ) ≥ w, and recall that H I(w) = C w (G) by Lemma 1. Since each edge in G is removed exactly once during the course of the algorithm,
W = Σ_{i=1}^{|G|} r(i) = Σ_{i=1}^{I(w)} r(i) + Σ_{i=I(w)+1}^{|G|} r(i) < W(H_{I(w)}) + w · (|G| − I(w)) ≤ W(C_w(G)) + w·|G|.
Therefore,
W (C w (G)) > W − w|G|.
Taking w = d = W/|G| in the equation above, we learn that W (C d (G)) > 0. Taking w = αd = αW/|G|, we learn that W (C αd (G)) > (1 − α)W .
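As a quick sanity check of Lemma 2, the following sketch computes w-cores by repeatedly deleting low-degree vertices (the standard core decomposition) and verifies the bound W(C_{αd}(G)) > (1 − α)W on a random unweighted graph, i.e. the unit-weight special case of the lemma. The helper names and the random test graph are illustrative only.

```python
import itertools
import random

def w_core(adj, w):
    """Largest induced subgraph whose minimum degree is at least w (unit weights)."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) < w:
                alive.remove(v)
                changed = True
    return alive

def weight(adj, S):
    """Number of edges of the subgraph induced by S, i.e. W(S) for unit weights."""
    return sum(1 for u, v in itertools.combinations(S, 2) if v in adj[u])

# A random Erdos-Renyi test graph; any graph would do.
random.seed(0)
verts = range(30)
adj = {v: set() for v in verts}
for u, v in itertools.combinations(verts, 2):
    if random.random() < 0.2:
        adj[u].add(v)
        adj[v].add(u)

W = weight(adj, set(adj))
d = W / len(adj)
assert w_core(adj, d), "first claim of Lemma 2: the d-core is nonempty"
for alpha in (0.25, 0.5, 2 / 3):
    core = w_core(adj, alpha * d)
    bound = (1 - alpha) * W
    assert weight(adj, core) > bound - 1e-9          # the Lemma 2 bound
    print(f"alpha={alpha:.2f}: |core|={len(core):2d}, "
          f"W(core)={weight(adj, core)} > {bound:.1f}")
```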
Proof of Theorem 1. Let {H 1 , . . . , H n } be the induced subgraphs determined by the ChALK algorithm on the input graph G. It suffices to show that for any k, there is an integer I ∈ [k, n] satisfying d(H I ) ≥ dal(G, k)/3.
Let H * be an induced subgraph of G with at least k vertices and with density d * = W (H * )/|H * | = dal(G, k). We may apply Lemma 2 to H * with α = 2/3 to show that C (2d * /3) (H * ) has total weight at least W (H * )/3. This implies that C (2d * /3) (G) has total weight at least W (H * )/3.
The core C (2d * /3) (G) has density at least d * /3, because its minimum degree is at least 2d * /3. Lemma 1 shows that C (2d * /3) (G) = H I , for I = |C (2d * /3) (G)|. If I ≥ k, then H I satisfies the requirements of the theorem. If I < k, then C (2d * /3) (G) = H I is contained in H k , and the following calculation shows that H k satisfies the requirements of the theorem.
d(H_k) = W(H_k)/k ≥ W(C_{2d*/3}(G))/k ≥ (W(H*)/3)/k ≥ d*/3.
Remark 1. Charikar proved that ChALK(G, 1) is a 2-approximation algorithm for the densest subgraph problem. This can be derived from the fact that if w = dmax(G), the w-core of G is nonempty.
A 2-approximation algorithm for the densest at-least-k-subgraph problem
In this section, we will give a polynomial time 2-approximation algorithm for the densest at-least-k subgraph problem. The algorithm is based on the parametric flow algorithm of Gallo, Grigoriadis, and Tarjan [9]. It is well-known that the densest subgraph problem can be solved using similar techniques; Goldberg [11] showed that the densest subgraph can be found in polynomial time by solving a sequence of maximum flow problems, and Gallo, Grigoriadis, and Tarjan described how to find the densest subgraph using their parametric flow algorithm.
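For the flow-based approach mentioned above, the key subroutine is a maximum-flow test of the form "does G contain a subgraph of density greater than g?". The sketch below follows one standard form of Goldberg's construction for unit edge weights (source-to-vertex arcs of capacity m, vertex-to-sink arcs of capacity m + 2g − deg(v), and unit arcs in both directions for each edge), so that a minimum s–t cut strictly below m·n certifies such a subgraph; a full algorithm would wrap this test in a search over candidate densities, and the parametric flow method of Gallo, Grigoriadis, and Tarjan performs that search within a single parametric computation. This is our illustrative reading of the cited constructions, not code from the paper, and it assumes the networkx library.

```python
import networkx as nx

def denser_than(edges, g):
    """Max-flow test in the spirit of Goldberg [11] (unit edge weights):
    return a non-empty vertex set S with W(S)/|S| > g if one exists,
    otherwise an empty set.  Only the decision step is shown; a complete
    densest-subgraph algorithm would search over the candidate densities g.
    """
    verts = sorted({v for e in edges for v in e})
    m, n = len(edges), len(verts)
    deg = {v: 0 for v in verts}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1

    F = nx.DiGraph()
    for u, v in edges:                        # each edge: unit arcs both ways
        F.add_edge(u, v, capacity=1.0)
        F.add_edge(v, u, capacity=1.0)
    for v in verts:
        F.add_edge('s', v, capacity=float(m))
        F.add_edge(v, 't', capacity=m + 2.0 * g - deg[v])

    cut_value, (source_side, _) = nx.minimum_cut(F, 's', 't')
    # For S = source_side \ {s} the cut value equals m*n - 2*|S|*(d(S) - g),
    # so any cut strictly below m*n certifies a subgraph of density > g.
    return (source_side - {'s'}) if cut_value < m * n - 1e-9 else set()


# Example: a 4-clique with a pendant vertex.  The clique has density 1.5,
# the whole graph only 1.4, and no subgraph exceeds density 1.5.
E = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
print(denser_than(E, 1.45))   # {1, 2, 3, 4}
print(denser_than(E, 1.5))    # set()
```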
It is natural to ask whether there is a polynomial time algorithm for the densest at-least-k-subgraph problem. We do not know of such an algorithm, nor have we proved that DalkS is NP-complete.
Let H ′ be the modified collection of subgraphs obtained by padding each subgraph in H with arbitrary vertices until its size is at least k. We will show that there is a set H ∈ H ′ that satisfies d(H) ≥ dal(G, k)/2. Thus, a polynomial time 2-approximation algorithm for DalkS can be obtained by computing H, padding some of the sets with arbitrary vertices to form H ′ , and returning the densest set in H ′ . The running time is dominated by the parametric flow algorithm.
Let H * be an induced subgraph of G with at least k vertices that has density d(H * ) = dal(G, k). Let α = dal(G, k)/2, and let H be the set from H that maximizes (1) for this value of α. In particular,
|H|(d(H) − α) ≥ |H*|(d(H*) − α) ≥ |H*| · d(H*)/2.    (2)
This implies that H satisfies d(H) ≥ α = dal(G, k)/2. If |H| ≥ k, then we are done. If |H| < k, then consider the set H ′ of size exactly k obtained by padding H with arbitrary vertices. We will show that d(H ′ ) ≥ dal(G, k)/2, which will complete the proof. First, notice that (2) implies a lower bound on the size of H.
|H| ≥ |H*| · d(H*)/(2d(H)) = |H*| · dal(G, k)/(2d(H)).
We can then bound the density of the padded set H ′ .
d(H′) ≥ d(H) · |H|/k ≥ d(H) · (|H*|/k) · (dal(G, k)/(2d(H))) = (dal(G, k)/2) · (|H*|/k) ≥ dal(G, k)/2.
The densest at-most-k-subgraph problem
In this section, we show that the densest at-most-k-subgraph problem is nearly as hard to approximate as the densest k-subgraph problem. We will show that if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there exists a polynomial time approximation algorithm for DkS with ratio 4(γ^2 + γβ). As an immediate consequence, a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ^2 + γ)-approximation algorithm for the densest k-subgraph problem. Proof. Assume there exists a polynomial time algorithm A(G, k) that is a (β, γ)-algorithm for DamkS. We will now describe a polynomial time approximation algorithm for DkS with ratio 4(γ^2 + γβ).
Given as input a graph G and an integer k, let G_1 = G, let i = 1, and repeat the following procedure. Let H_i = A(G_i, k) be an induced subgraph of G_i with at most βk vertices and with density at least dam(G_i, k)/γ. Remove all the edges in H_i from G_i to form a new graph G_{i+1} on the same vertex set as G. Repeat this procedure until all edges have been removed from G.
Let n i be the number of vertices in H i , let W i = W (H i ), and let d i = d(H i ) = W i /n i . Let H * be an induced subgraph of G with exactly k vertices and density d * = dex(G, k). Notice that if (W 1 + · · · + W t−1 ) ≤ W (H * )/2, then d t ≥ d * /2γ. This is because d t is at least 1/γ times the density of the induced subgraph of G t on the vertex set of H * , which is at least
(W(H*) − (W_1 + · · · + W_{t−1}))/k ≥ W(H*)/(2k) = d*/2.
Let T be the smallest integer such that (W 1 +· · ·+W T ) ≥ W (H * )/2, and let U T be the induced subgraph on the union of the vertex sets of H 1 , . . . , H T . The total weight W (U T ) is at least W (H * )/2. The density of U T is
d(U_T) = W(U_T)/|U_T| ≥ (W_1 + · · · + W_T)/(n_1 + · · · + n_T) ≥ min_{1≤t≤T} W_t/n_t ≥ d*/(2γ).
To bound the number of vertices in U T , notice that (n 1 + · · · + n T −1 ) ≤ γk, because
d*·k/2 = W(H*)/2 ≥ Σ_{i=1}^{T−1} W_i = Σ_{i=1}^{T−1} n_i·d_i ≥ (d*/(2γ)) · Σ_{i=1}^{T−1} n_i.
Since n T is at most βk, we have |U T | ≤ (n 1 + · · · + n T ) ≤ (γ + β)k.
There are now two cases to consider. If |U T | ≤ k, we add vertices to U T arbitrarily to form a set U ′ T of size exactly k. The set U ′ T is more than dense enough to prove the theorem,
d(U′_T) ≥ (W(H*)/2)/k = d*/2.
If |U T | > k, then we employ a simple greedy procedure to reduce the number of vertices. We begin with the induced subgraph U T , greedily remove the vertex with smallest degree to obtain a smaller subgraph, and repeat until exactly k vertices remain. The resulting subgraph U ′′ T has density at least d(U T )(k/2|U T |) by the method of conditional expectations (see also [7]). The set U ′′ T is sufficiently dense,
d(U′′_T) ≥ d(U_T) · k/(2|U_T|) ≥ (d*/(2γ)) · (k/(2(γ + β)k)) = d*/(4(γ^2 + γβ)).
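The reduction in the proof above is easy to phrase as code once a DamkS subroutine is available. The sketch below assumes a hypothetical oracle damks_oracle(adj, k) with the (β, γ) guarantee and, rather than fixing the prefix T used in the analysis (which depends on the unknown optimum H*), simply tries every prefix of the returned sets and keeps the densest k-vertex candidate after padding or greedy peeling; this can only improve on the analysed prefix. All function names are ours, and a brute-force stand-in oracle is included only to make the example runnable.

```python
import itertools

def dks_from_damks(adj, k, damks_oracle):
    """Sketch of the DamkS-to-DkS reduction (assumes at least k vertices, k >= 2).

    damks_oracle(adj, k) is assumed to return a vertex set of at most beta*k
    vertices whose induced density is at least dam(G, k)/gamma.  It is called
    repeatedly on the residual graph; for every prefix of the returned sets we
    pad or greedily peel the union of their vertex sets down to exactly k
    vertices, measuring densities in the original graph.
    """
    def density(S):
        e = sum(1 for u, v in itertools.combinations(S, 2) if v in adj[u])
        return e / len(S) if S else 0.0

    def peel_to_k(S):
        S = set(S)
        while len(S) > k:                     # drop a minimum-degree vertex of G[S]
            S.remove(min(S, key=lambda v: sum(1 for u in adj[v] if u in S)))
        return S

    residual = {v: set(nbrs) for v, nbrs in adj.items()}
    union = set()
    best = set(itertools.islice(adj, k))      # arbitrary k vertices as a baseline
    while any(residual[v] for v in residual):
        H = set(damks_oracle(residual, k))
        for u in H:                           # delete the edges inside H from G_i
            residual[u] -= H
        union |= H
        if len(union) <= k:                   # pad with arbitrary vertices ...
            extra = (v for v in adj if v not in union)
            cand = union | set(itertools.islice(extra, k - len(union)))
        else:                                 # ... or greedily peel down to exactly k
            cand = peel_to_k(union)
        if density(cand) > density(best):
            best = cand
    return best


def exact_damks(adj, k):
    """Brute-force stand-in oracle (beta = gamma = 1); only for tiny graphs."""
    verts = list(adj)
    best, best_d = {verts[0]}, 0.0
    for r in range(1, min(k, len(verts)) + 1):
        for S in itertools.combinations(verts, r):
            e = sum(1 for u, v in itertools.combinations(S, 2) if v in adj[u])
            if e / r > best_d:
                best, best_d = set(S), e / r
    return best


# Example: a triangle with a pendant path; with k = 3 the triangle is found.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
print(dks_from_damks(g, 3, exact_damks))      # {1, 2, 3}
```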
Remark 2. The argument from Theorem 4 proves a slightly more general statement: if there is a polynomial time algorithm for DamkS that is a (β, γ)-algorithm for certain values of k, then there is a polynomial time algorithm for DkS that is a 4(γ^2 + γβ)-approximation algorithm for those same values of k.
We remark that the densest at-most-k-subgraph problem is easily seen to be NP-complete, since a subgraph of size at most k has density at least (k − 1)/2 if and only if it is a k-clique. As mentioned previously, Feige and Seltser [8] proved that the densest k-subgraph problem remains NP-complete when restricted to graphs with maximum degree 3, and their proof shows that the same statement is true for the densest at-most-k-subgraph problem.
Conclusion
In this section, we discuss the possibility of improving the approximation ratio for DkS via an approximation algorithm for DamkS. One possible approach is to develop a local algorithm for DamkS, analogous to the recently developed local algorithms for graph partitioning [15,1]. For any partition separating k vertices, these algorithms can produce a partition separating O(k) vertices that is nearly as good (in terms of conductance).
| 3,144 |
cs0702032
|
1664024785
|
We consider two optimization problems related to finding dense subgraphs. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of highest average degree among all subgraphs with at least k vertices, and the densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems are related to the well-known densest k-subgraph problem (DkS), which is to find the densest subgraph on exactly k vertices. We show that DalkS can be approximated efficiently, while DamkS is nearly as hard to approximate as the densest k-subgraph problem.
|
Feige and Seltser @cite_9 showed the densest @math -subgraph problem is @math -complete when restricted to bipartite graphs of maximum degree 3, by a reduction from max-clique. This reduction does not produce a hardness of approximation result for DkS. In fact, they showed that if a graph contains a @math -clique, a subgraph with @math vertices and @math edges can be found in subexponential time. Khot @cite_4 proved there can be no PTAS for the densest @math -subgraph problem, under a standard complexity assumption.
|
{
"abstract": [
"Given an n-vertex graph G and a parameter k, we are to find a k-vertex subgraph with the maximum number of edges. This problem is NP-hard. We show that the problem remains NP-hard even when the maximum degree in G is three. When G contains a k-clique, we give an algorithm that for any e sub sub<) e). We study the applicability of semidefinite programming for approximating the dense k-subgraph problem. Our main result in this respect is negative, showing that for k @ n1 3, semidefinite programs fail to distinguish between graphs that contain k-cliques and graphs in which the densest k-vertex subgraph has average degree below logn.",
"Assuming that NP @math @math BPTIME( @math ), we show that graph min-bisection, dense @math -subgraph, and bipartite clique have no polynomial time approximation scheme (PTAS). We give a reduction from the minimum distance of code (MDC) problem. Starting with an instance of MDC, we build a quasi-random probabilistically checkable proof (PCP) that suffices to prove the desired inapproximability results. In a quasi-random PCP, the query pattern of the verifier looks random in a certain precise sense. Among the several new techniques we introduce, the most interesting one gives a way of certifying that a given polynomial belongs to a given linear subspace of polynomials. As is important for our purpose, the certificate itself happens to be another polynomial, and it can be checked probabilistically by reading a constant number of its values."
],
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2010787744",
"2091602684"
]
}
|
Finding large and small dense subgraphs
|
The density of an induced subgraph is the total weight of its edges divided by the size of its vertex set, or half its average degree. The problem of finding the densest subgraph of a given graph, and various related problems, have been studied extensively. In the past decade, identifying subgraphs with high density has become an important task in the analysis of large networks [14,10].
There are a variety of efficient algorithms for finding the densest subgraph of a given graph. The densest subgraph can be identified in polynomial time by solving a maximum flow problem [11,9]. Charikar [5] gave a greedy algorithm that produces a 2-approximation of the densest subgraph in linear time. Kannan and Vinay [12] gave a spectral approximation algorithm for a related notion of density. Both of these approximation algorithms are fast enough to run on extremely large graphs.
In contrast, no practical algorithms are known for finding the densest subgraph on exactly k vertices. If k is specified as part of the input, and is allowed to vary with the graph size n, the best polynomial time algorithm known has approximation ratio n^δ, where δ is slightly less than 1/3. This algorithm is due to Feige, Peleg, and Kortsarz [7]. The densest k-subgraph problem is known to be NP-complete, but there is a large gap between this approximation ratio and the strongest known hardness result.
In many of the graphs we would like to analyze (for example, graphs arising from sponsored search auctions, or from links between blogs), the densest subgraph is extremely small relative to the size of the graph. When this is the case, we would like to find a subgraph that is both large and dense, without solving the seemingly intractable densest k-subgraph problem. To address this concern, we introduce the densest at-least-k-subgraph problem, which is to find the densest subgraph on at least k vertices.
In this paper, we show that the densest at-least-k-subgraph problem can be solved nearly as efficiently as the densest subgraph problem. In fact, we show it can be solved by a careful application of the same techniques. We give a greedy 3-approximation algorithm for DalkS that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. This algorithm is an extension of Charikar's algorithm for the densest subgraph problem. We also give a 2-approximation algorithm for DalkS that runs in polynomial time, and can be computed by solving a single parametric flow problem. This is an extension of the algorithm of Gallo, Grigoriadis, and Tarjan [9] for the densest subgraph problem.
We also show that finding a dense subgraph with at most k vertices is nearly as hard as finding the densest subgraph with exactly k vertices. In particular, we prove that a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ 2 + γ)-approximation algorithm for the densest k-subgraph problem. More generally, if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there is a polynomial time approximation algorithm for DkS with ratio 4(γ 2 + γβ).
Our algorithms for DalkS can find subgraphs with nearly optimal density in extremely large graphs, while providing considerable control over the sizes of those subgraphs. Our reduction of DkS to DamkS gives additional insight into when DkS is hard, and suggests a possible approach for improving the approximation ratio for DkS.
The paper is organized as follows. We first consider the DalkS problem, presenting the greedy 3-approximation in Section 3, and the polynomial time 2-approximation in Section 4. We consider the DamkS problem in Section 5. In Section 6, we discuss the possibility of finding a good approximation algorithm for DamkS.
Definitions
Let G = (V, E) be an undirected graph with a weight function w : E → R+ which assigns a positive weight to each edge. The weighted degree w(v, G) is the sum of the weights of the edges incident with v. The total weight W(G) is the sum of the weights of the edges in G. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph on at least k vertices achieving density dal(G, k). Similarly, the densest at-most-k-subgraph problem (DamkS) is to find an induced subgraph on at most k vertices achieving density dam(G, k). The densest k-subgraph problem (DkS) is to find an induced subgraph on exactly k vertices achieving dex(G, k), and the densest subgraph problem is to find an induced subgraph of any size achieving dmax(G).
We now define formally what it means to be an approximation algorithm for DalkS. Approximation algorithms for DamkS, DkS, and the densest subgraph problem are defined similarly.
The densest at-least-k-subgraph problem
In this section, we give a 3-approximation algorithm for the densest at-least-k-subgraph problem that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. The algorithm is a simple extension of Charikar's greedy algorithm for the densest subgraph problem. To analyze the algorithm, we relate the density of a graph to the size of its w-cores, which are subgraphs with minimum weighted degree at least w.
ChALK(G, k):
Input: a graph G with n vertices, and an integer k.
Output: an induced subgraph of G with at least k vertices.
1. Let H_n = G and repeat the following step for i = n, . . . , 1:
(a) Let r_i be the minimum weighted degree of any vertex in H_i.
(b) Let v_i be a vertex where w(v_i, H_i) = r_i.
(c) Remove v_i from H_i to form the induced subgraph H_{i−1}.
2. Compute the density d(H_i) for each i ∈ [1, n].
3. Output the induced subgraph H_i maximizing max_{i≥k} d(H_i).
Theorem 1. ChALK(G, k) is a 3-approximation algorithm for the densest at-least-k-subgraph problem.
We will prove Theorem 1 in the following subsection. The implementation of step 1 described by Charikar (see [5]) gives us the following bound on the running time of ChALK.
Analysis of ChALK
The ChALK algorithm is easy to understand if we consider the relationship between induced subgraphs of G with high average degree (dense subgraphs) and induced subgraphs of G with high minimum degree (w-cores).
Definition 4. Given a graph G and a weight w ∈ R, the w-core C w (G) is the unique largest induced subgraph of G with minimum weighted degree at least w.
Here is an outline of how we will proceed. We first prove that the ChALK algorithm computes all the w-cores of G (Lemma 1). We then prove that for any induced subgraph H of G with density d, the (2d/3)-core of G has total weight at least W (H)/3 (Lemma 2). We will prove Theorem 1 using these two lemmas. Lemma 1. Let {H 1 , . . . , H n }, {v 1 , . . . , v n }, and {r 1 , . . . , r n } be the induced subgraphs, vertices, and weighted degrees determined by ChALK on the input graph G. For any w ∈ R, if I(w) is the largest index such that r(v I(w) ) ≥ w, then H I(w) = C w (G).
Proof. Fix a value of w. It is easy to prove by induction that none of the vertices v_n, . . . , v_{I(w)+1} that were removed before v_{I(w)} is contained in any induced subgraph with minimum degree at least w. That implies C_w(G) ⊆ H_{I(w)}. On the other hand, the minimum degree of H_{I(w)} is at least w, so H_{I(w)} ⊆ C_w(G). Therefore, H_{I(w)} = C_w(G).
Lemma 2.
For any graph G with total weight W and density d = W/|G|, the d-core of G is nonempty. Furthermore, for any α ∈ [0, 1], the total weight of the (αd)-core of G is strictly greater than (1 − α)W .
Proof. Let {H 1 , . . . , H n } be the induced subgraphs determined by ChALK on the input graph G. Fix a value of w, let I(w) be the largest index such that r(v I(w) ) ≥ w, and recall that H I(w) = C w (G) by Lemma 1. Since each edge in G is removed exactly once during the course of the algorithm,
W = Σ_{i=1}^{|G|} r(i) = Σ_{i=1}^{I(w)} r(i) + Σ_{i=I(w)+1}^{|G|} r(i) < W(H_{I(w)}) + w · (|G| − I(w)) ≤ W(C_w(G)) + w·|G|.
Therefore,
W (C w (G)) > W − w|G|.
Taking w = d = W/|G| in the equation above, we learn that W (C d (G)) > 0. Taking w = αd = αW/|G|, we learn that W (C αd (G)) > (1 − α)W .
Proof of Theorem 1. Let {H 1 , . . . , H n } be the induced subgraphs determined by the ChALK algorithm on the input graph G. It suffices to show that for any k, there is an integer I ∈ [k, n] satisfying d(H I ) ≥ dal(G, k)/3.
Let H * be an induced subgraph of G with at least k vertices and with density d * = W (H * )/|H * | = dal(G, k). We may apply Lemma 2 to H * with α = 2/3 to show that C (2d * /3) (H * ) has total weight at least W (H * )/3. This implies that C (2d * /3) (G) has total weight at least W (H * )/3.
The core C (2d * /3) (G) has density at least d * /3, because its minimum degree is at least 2d * /3. Lemma 1 shows that C (2d * /3) (G) = H I , for I = |C (2d * /3) (G)|. If I ≥ k, then H I satisfies the requirements of the theorem. If I < k, then C (2d * /3) (G) = H I is contained in H k , and the following calculation shows that H k satisfies the requirements of the theorem.
d(H_k) = W(H_k)/k ≥ W(C_{2d*/3}(G))/k ≥ (W(H*)/3)/k ≥ d*/3.
Remark 1. Charikar proved that ChALK(G, 1) is a 2-approximation algorithm for the densest subgraph problem. This can be derived from the fact that if w = dmax(G), the w-core of G is nonempty.
A 2-approximation algorithm for the densest at-least-k-subgraph problem
In this section, we will give a polynomial time 2-approximation algorithm for the densest at-least-k subgraph problem. The algorithm is based on the parametric flow algorithm of Gallo, Grigoriadis, and Tarjan [9]. It is well-known that the densest subgraph problem can be solved using similar techniques; Goldberg [11] showed that the densest subgraph can be found in polynomial time by solving a sequence of maximum flow problems, and Gallo, Grigoriadis, and Tarjan described how to find the densest subgraph using their parametric flow algorithm.
It is natural to ask whether there is a polynomial time algorithm for the densest at-least-k-subgraph problem. We do not know of such an algorithm, nor have we proved that DalkS is NP-complete.
Let H ′ be the modified collection of subgraphs obtained by padding each subgraph in H with arbitrary vertices until its size is at least k. We will show that there is a set H ∈ H ′ that satisfies d(H) ≥ dal(G, k)/2. Thus, a polynomial time 2-approximation algorithm for DalkS can be obtained by computing H, padding some of the sets with arbitrary vertices to form H ′ , and returning the densest set in H ′ . The running time is dominated by the parametric flow algorithm.
Let H * be an induced subgraph of G with at least k vertices that has density d(H * ) = dal(G, k). Let α = dal(G, k)/2, and let H be the set from H that maximizes (1) for this value of α. In particular,
|H|(d(H) − α) ≥ |H*|(d(H*) − α) ≥ |H*| · d(H*)/2.    (2)
This implies that H satisfies d(H) ≥ α = dal(G, k)/2. If |H| ≥ k, then we are done. If |H| < k, then consider the set H ′ of size exactly k obtained by padding H with arbitrary vertices. We will show that d(H ′ ) ≥ dal(G, k)/2, which will complete the proof. First, notice that (2) implies a lower bound on the size of H.
|H| ≥ |H*| · d(H*)/(2d(H)) = |H*| · dal(G, k)/(2d(H)).
We can then bound the density of the padded set H ′ .
d(H′) ≥ d(H) · |H|/k ≥ d(H) · (|H*|/k) · (dal(G, k)/(2d(H))) = (dal(G, k)/2) · (|H*|/k) ≥ dal(G, k)/2.
The densest at-most-k-subgraph problem
In this section, we show that the densest at-most-k-subgraph problem is nearly as hard to approximate as the densest k-subgraph problem. We will show that if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there exists a polynomial time approximation algorithm for DkS with ratio 4(γ^2 + γβ). As an immediate consequence, a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ^2 + γ)-approximation algorithm for the densest k-subgraph problem. Proof. Assume there exists a polynomial time algorithm A(G, k) that is a (β, γ)-algorithm for DamkS. We will now describe a polynomial time approximation algorithm for DkS with ratio 4(γ^2 + γβ).
Given as input a graph G and an integer k, let G_1 = G, let i = 1, and repeat the following procedure. Let H_i = A(G_i, k) be an induced subgraph of G_i with at most βk vertices and with density at least dam(G_i, k)/γ. Remove all the edges in H_i from G_i to form a new graph G_{i+1} on the same vertex set as G. Repeat this procedure until all edges have been removed from G.
Let n i be the number of vertices in H i , let W i = W (H i ), and let d i = d(H i ) = W i /n i . Let H * be an induced subgraph of G with exactly k vertices and density d * = dex(G, k). Notice that if (W 1 + · · · + W t−1 ) ≤ W (H * )/2, then d t ≥ d * /2γ. This is because d t is at least 1/γ times the density of the induced subgraph of G t on the vertex set of H * , which is at least
(W(H*) − (W_1 + · · · + W_{t−1}))/k ≥ W(H*)/(2k) = d*/2.
Let T be the smallest integer such that (W 1 +· · ·+W T ) ≥ W (H * )/2, and let U T be the induced subgraph on the union of the vertex sets of H 1 , . . . , H T . The total weight W (U T ) is at least W (H * )/2. The density of U T is
d(U_T) = W(U_T)/|U_T| ≥ (W_1 + · · · + W_T)/(n_1 + · · · + n_T) ≥ min_{1≤t≤T} W_t/n_t ≥ d*/(2γ).
To bound the number of vertices in U T , notice that (n 1 + · · · + n T −1 ) ≤ γk, because
d*·k/2 = W(H*)/2 ≥ Σ_{i=1}^{T−1} W_i = Σ_{i=1}^{T−1} n_i·d_i ≥ (d*/(2γ)) · Σ_{i=1}^{T−1} n_i.
Since n T is at most βk, we have |U T | ≤ (n 1 + · · · + n T ) ≤ (γ + β)k.
There are now two cases to consider. If |U T | ≤ k, we add vertices to U T arbitrarily to form a set U ′ T of size exactly k. The set U ′ T is more than dense enough to prove the theorem,
d(U′_T) ≥ (W(H*)/2)/k = d*/2.
If |U T | > k, then we employ a simple greedy procedure to reduce the number of vertices. We begin with the induced subgraph U T , greedily remove the vertex with smallest degree to obtain a smaller subgraph, and repeat until exactly k vertices remain. The resulting subgraph U ′′ T has density at least d(U T )(k/2|U T |) by the method of conditional expectations (see also [7]). The set U ′′ T is sufficiently dense,
d(U′′_T) ≥ d(U_T) · k/(2|U_T|) ≥ (d*/(2γ)) · (k/(2(γ + β)k)) = d*/(4(γ^2 + γβ)).
Remark 2. The argument from Theorem 4 proves a slightly more general statement: if there is a polynomial time algorithm for DamkS that is a (β, γ)-algorithm for certain values of k, then there is a polynomial time algorithm for DkS that is a 4(γ^2 + γβ)-approximation algorithm for those same values of k.
We remark that the densest at-most-k-subgraph problem is easily seen to be NP-complete, since a subgraph of size at most k has density at least (k − 1)/2 if and only if it is a k-clique. As mentioned previously, Feige and Seltser [8] proved that the densest k-subgraph problem remains NP-complete when restricted to graphs with maximum degree 3, and their proof shows that the same statement is true for the densest at-most-k-subgraph problem.
Conclusion
In this section, we discuss the possibility of improving the approximation ratio for DkS via an approximation algorithm for DamkS. One possible approach is to develop a local algorithm for DamkS, analogous to the recently developed local algorithms for graph partitioning [15,1]. For any partition separating k vertices, these algorithms can produce a partition separating O(k) vertices that is nearly as good (in terms of conductance).
| 3,144 |
cs0702032
|
1664024785
|
We consider two optimization problems related to finding dense subgraphs. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of highest average degree among all subgraphs with at least k vertices, and the densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems are related to the well-known densest k-subgraph problem (DkS), which is to find the densest subgraph on exactly k vertices. We show that DalkS can be approximated efficiently, while DamkS is nearly as hard to approximate as the densest k-subgraph problem.
|
Arora, Karger, and Karpinski @cite_11 gave a PTAS for the special case @math and @math . Asahiro, Hassin, and Iwama @cite_12 showed that the problem is still @math -complete in very sparse graphs.
|
{
"abstract": [
"The k-f(k) dense subgraph problem ((k, f(k))-DSP) asks whether there is a k-vertex subgraph of a given graph G which has at least f(k) edges. When f(k)=k(k - 1) 2, (k,f(k))-DSP is equivalent to the well-known k-clique problem. The main purpose of this paper is to discuss the problem of finding slightly dense subgraphs. Note that f(k) is about k2 for the k-clique problem. It is shown that (k,f(k))-DSP remains NP-complete for f(k)= Θ(k1+e) where e may be any constant such that 0 < e < 1. It is also NP-complete for \"relatively\" slightly-dense subgraphs, i.e., (k, f(k))-DSP is NP-complete for f(k)= ek2 υ2(1 +O(υe-1)), where υ is the number of G's vertices and e is the number of G's edges. This condition is quite tight because the answer to (k, f(k))-DSP is always yes for f(k)= ek2 υ2(1 -(υ- k) (υk- k)) that is the average number of edges in a subgraph of k vertices. Also, we show that the hardness of (k, f(k))-DSP remains for regular graphs: (k, f(k))-DSP is NP-complete for Θ(υe1)-regular graphs if f(k)= Θ(k1-e2) for any 0 < e1, e2 < 1.",
"We present a unified framework for designing polynomial time approximation schemes (PTASs) for “dense” instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By dense graphs we mean graphs with minimum degree Ω(n), although our algorithms solve most of these problems so long as the average degree is Ω(n). Denseness for non-graph problems is defined similarly. The unified framework begins with the idea of exhaustive sampling: picking a small random set of vertices, guessing where they go on the optimum solution, and then using their placement to determine the placement of everything else. The approach then develops into a PTAS for approximating certain smooth integer programs where the objective function and the constraints are “dense” polynomials of constant degree."
],
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2115659887",
"2027048490"
]
}
|
Finding large and small dense subgraphs
|
The density of an induced subgraph is the total weight of its edges divided by the size of its vertex set, or half its average degree. The problem of finding the densest subgraph of a given graph, and various related problems, have been studied extensively. In the past decade, identifying subgraphs with high density has become an important task in the analysis of large networks [14,10].
There are a variety of efficient algorithms for finding the densest subgraph of a given graph. The densest subgraph can be identified in polynomial time by solving a maximum flow problem [11,9]. Charikar [5] gave a greedy algorithm that produces a 2-approximation of the densest subgraph in linear time. Kannan and Vinay [12] gave a spectral approximation algorithm for a related notion of density. Both of these approximation algorithms are fast enough to run on extremely large graphs.
In contrast, no practical algorithms are known for finding the densest subgraph on exactly k vertices. If k is specified as part of the input, and is allowed to vary with the graph size n, the best polynomial time algorithm known has approximation ratio n^δ, where δ is slightly less than 1/3. This algorithm is due to Feige, Peleg, and Kortsarz [7]. The densest k-subgraph problem is known to be NP-complete, but there is a large gap between this approximation ratio and the strongest known hardness result.
In many of the graphs we would like to analyze (for example, graphs arising from sponsored search auctions, or from links between blogs), the densest subgraph is extremely small relative to the size of the graph. When this is the case, we would like to find a subgraph that is both large and dense, without solving the seemingly intractable densest k-subgraph problem. To address this concern, we introduce the densest at-least-k-subgraph problem, which is to find the densest subgraph on at least k vertices.
In this paper, we show that the densest at-least-k-subgraph problem can be solved nearly as efficiently as the densest subgraph problem. In fact, we show it can be solved by a careful application of the same techniques. We give a greedy 3-approximation algorithm for DalkS that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. This algorithm is an extension of Charikar's algorithm for the densest subgraph problem. We also give a 2-approximation algorithm for DalkS that runs in polynomial time, and can be computed by solving a single parametric flow problem. This is an extension of the algorithm of Gallo, Grigoriadis, and Tarjan [9] for the densest subgraph problem.
We also show that finding a dense subgraph with at most k vertices is nearly as hard as finding the densest subgraph with exactly k vertices. In particular, we prove that a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ 2 + γ)-approximation algorithm for the densest k-subgraph problem. More generally, if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there is a polynomial time approximation algorithm for DkS with ratio 4(γ 2 + γβ).
Our algorithms for DalkS can find subgraphs with nearly optimal density in extremely large graphs, while providing considerable control over the sizes of those subgraphs. Our reduction of DkS to DamkS gives additional insight into when DkS is hard, and suggests a possible approach for improving the approximation ratio for DkS.
The paper is organized as follows. We first consider the DalkS problem, presenting the greedy 3-approximation in Section 3, and the polynomial time 2-approximation in Section 4. We consider the DamkS problem in Section 5. In Section 6, we discuss the possibility of finding a good approximation algorithm for DamkS.
Definitions
Let G = (V, E) be an undirected graph with a weight function w : E → R+ which assigns a positive weight to each edge. The weighted degree w(v, G) is the sum of the weights of the edges incident with v. The total weight W(G) is the sum of the weights of the edges in G, and the density of an induced subgraph H is d(H) = W(H)/|H|. We write dal(G, k), dam(G, k), and dex(G, k) for the maximum density of an induced subgraph with at least k, at most k, and exactly k vertices, respectively, and dmax(G) for the maximum density of any induced subgraph. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph on at least k vertices achieving density dal(G, k). Similarly, the densest at-most-k-subgraph problem (DamkS) is to find an induced subgraph on at most k vertices achieving density dam(G, k). The densest k-subgraph problem (DkS) is to find an induced subgraph on exactly k vertices achieving dex(G, k), and the densest subgraph problem is to find an induced subgraph of any size achieving dmax(G).
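To fix these quantities in code form, here is a minimal Python sketch, assuming a weighted undirected graph represented as a dictionary mapping each vertex to a dictionary of neighbour weights (the representation is ours, not the paper's):

```python
def weighted_degree(G, v, S=None):
    """w(v, G) restricted to S: sum of weights of edges from v to vertices in S (default: all)."""
    return sum(w for u, w in G[v].items() if S is None or u in S)

def total_weight(G, S=None):
    """W restricted to S: total weight of edges with both endpoints in S."""
    verts = set(G) if S is None else set(S)
    return sum(weighted_degree(G, v, verts) for v in verts) / 2.0

def density(G, S):
    """d(S) = W(S) / |S| for the subgraph induced by the vertex set S."""
    return total_weight(G, S) / len(S)

# Example: a unit-weight triangle plus a pendant vertex.
G = {"a": {"b": 1.0, "c": 1.0}, "b": {"a": 1.0, "c": 1.0},
     "c": {"a": 1.0, "b": 1.0, "d": 1.0}, "d": {"c": 1.0}}
assert density(G, {"a", "b", "c"}) == 1.0      # 3 edges / 3 vertices
```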
We now define formally what it means to be an approximation algorithm for DalkS: an algorithm is a ρ-approximation algorithm for DalkS if, on every input (G, k), it returns an induced subgraph on at least k vertices whose density is at least dal(G, k)/ρ. Approximation algorithms for DamkS, DkS, and the densest subgraph problem are defined similarly.
The densest at-least-k-subgraph problem
In this section, we give a 3-approximation algorithm for the densest at-least-k-subgraph problem that runs in time O(m + n log n) in a weighted graph, and time O(m) in an unweighted graph. The algorithm is a simple extension of Charikar's greedy algorithm for the densest subgraph problem. To analyze the algorithm, we relate the density of a graph to the size of its w-cores, which are subgraphs with minimum weighted degree at least w.
ChALK(G, k):
Input: a graph G with n vertices, and an integer k.
Output: an induced subgraph of G with at least k vertices.
1. Let H_n = G and repeat the following step for i = n, ..., 1:
   (a) Let r_i be the minimum weighted degree of any vertex in H_i.
   (b) Let v_i be a vertex where w(v_i, H_i) = r_i.
   (c) Remove v_i from H_i to form the induced subgraph H_{i-1}.
2. Compute the density d(H_i) for each i ∈ [1, n].
3. Output the induced subgraph H_i maximizing d(H_i) over i ≥ k.
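The following Python sketch implements the peeling and rebuilding steps of ChALK directly on the dictionary representation assumed above; it updates degrees incrementally rather than using the heap-based O(m + n log n) implementation referred to in the text, so it is meant only as an illustration of the algorithm:

```python
def chalk(G, k):
    """ChALK peeling (illustrative sketch): returns the densest H_i with i >= k."""
    deg = {v: sum(G[v].values()) for v in G}   # current weighted degrees
    remaining = set(G)
    order = []                                 # vertices in the order they are removed
    while remaining:
        v = min(remaining, key=deg.get)        # steps (a)-(b): minimum weighted degree
        order.append(v)
        remaining.remove(v)
        for u, w in G[v].items():              # step (c): remove v, update degrees
            if u in remaining:
                deg[u] -= w
    # Add vertices back in reverse order, so we see H_1, H_2, ..., H_n in turn.
    best, best_density = None, -1.0
    S, weight = set(), 0.0
    for v in reversed(order):
        weight += sum(w for u, w in G[v].items() if u in S)
        S.add(v)
        if len(S) >= k and weight / len(S) > best_density:
            best, best_density = set(S), weight / len(S)
    return best       # None only if the graph has fewer than k vertices
```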
Theorem 1. ChALK(G, k) is a 3-approximation algorithm for the densest at-least-k-subgraph problem.
We will prove Theorem 1 in the following subsection. The implementation of step 1 described by Charikar (see [5]) gives the bound on the running time of ChALK stated above: O(m + n log n) time in a weighted graph, and O(m) time in an unweighted graph.
Analysis of ChALK
The ChALK algorithm is easy to understand if we consider the relationship between induced subgraphs of G with high average degree (dense subgraphs) and induced subgraphs of G with high minimum degree (w-cores).
Definition 4. Given a graph G and a weight w ∈ R, the w-core C w (G) is the unique largest induced subgraph of G with minimum weighted degree at least w.
Here is an outline of how we will proceed. We first prove that the ChALK algorithm computes all the w-cores of G (Lemma 1). We then prove that for any induced subgraph H of G with density d, the (2d/3)-core of G has total weight at least W(H)/3 (Lemma 2). We will prove Theorem 1 using these two lemmas.
Lemma 1. Let {H_1, ..., H_n}, {v_1, ..., v_n}, and {r_1, ..., r_n} be the induced subgraphs, vertices, and weighted degrees determined by ChALK on the input graph G. For any w ∈ R, if I(w) is the largest index such that r(v_{I(w)}) ≥ w, then H_{I(w)} = C_w(G).
Proof. Fix a value of w. It is easy to prove by induction that none of the vertices v_n, ..., v_{I(w)+1} that were removed before v_{I(w)} is contained in any induced subgraph with minimum degree at least w. That implies C_w(G) ⊆ H_{I(w)}. On the other hand, the minimum degree of H_{I(w)} is at least w, so H_{I(w)} ⊆ C_w(G). Therefore, H_{I(w)} = C_w(G).
Lemma 2. For any graph G with total weight W and density d = W/|G|, the d-core of G is nonempty. Furthermore, for any α ∈ [0, 1], the total weight of the (αd)-core of G is strictly greater than (1 − α)W.
Proof. Let {H 1 , . . . , H n } be the induced subgraphs determined by ChALK on the input graph G. Fix a value of w, let I(w) be the largest index such that r(v I(w) ) ≥ w, and recall that H I(w) = C w (G) by Lemma 1. Since each edge in G is removed exactly once during the course of the algorithm,
W = Σ_{i=1}^{|G|} r_i = Σ_{i=1}^{I(w)} r_i + Σ_{i=I(w)+1}^{|G|} r_i < W(H_{I(w)}) + w · (|G| − I(w)) ≤ W(C_w(G)) + w|G|.
Therefore,
W(C_w(G)) > W − w|G|.
Taking w = d = W/|G| in the equation above, we learn that W(C_d(G)) > 0. Taking w = αd = αW/|G|, we learn that W(C_{αd}(G)) > (1 − α)W.
Proof of Theorem 1. Let {H 1 , . . . , H n } be the induced subgraphs determined by the ChALK algorithm on the input graph G. It suffices to show that for any k, there is an integer I ∈ [k, n] satisfying d(H I ) ≥ dal(G, k)/3.
Let H * be an induced subgraph of G with at least k vertices and with density d * = W (H * )/|H * | = dal(G, k). We may apply Lemma 2 to H * with α = 2/3 to show that C (2d * /3) (H * ) has total weight at least W (H * )/3. This implies that C (2d * /3) (G) has total weight at least W (H * )/3.
The core C (2d * /3) (G) has density at least d * /3, because its minimum degree is at least 2d * /3. Lemma 1 shows that C (2d * /3) (G) = H I , for I = |C (2d * /3) (G)|. If I ≥ k, then H I satisfies the requirements of the theorem. If I < k, then C (2d * /3) (G) = H I is contained in H k , and the following calculation shows that H k satisfies the requirements of the theorem.
d(H_k) = W(H_k)/k ≥ W(C_{2d*/3}(G))/k ≥ (W(H*)/3)/k ≥ d*/3.
Remark 1. Charikar proved that ChALK(G, 1) is a 2-approximation algorithm for the densest subgraph problem. This can be derived from the fact that if w = dmax(G), the w-core of G is nonempty.
A 2-approximation algorithm for the densest at-least-k-subgraph problem
In this section, we will give a polynomial time 2-approximation algorithm for the densest at-least-k subgraph problem. The algorithm is based on the parametric flow algorithm of Gallo, Grigoriadis, and Tarjan [9]. It is well-known that the densest subgraph problem can be solved using similar techniques; Goldberg [11] showed that the densest subgraph can be found in polynomial time by solving a sequence of maximum flow problems, and Gallo, Grigoriadis, and Tarjan described how to find the densest subgraph using their parametric flow algorithm.
It is natural to ask whether there is a polynomial time algorithm that solves the densest at-least-k-subgraph problem exactly. We do not know of such an algorithm, nor have we proved that DalkS is NP-complete.
Let H denote the collection of induced subgraphs produced by the parametric flow computation: for each value of the parameter α it contains a set maximizing the excess |S|(d(S) − α), the quantity (1) referred to below. Let H′ be the modified collection of subgraphs obtained by padding each subgraph in H with arbitrary vertices until its size is at least k. We will show that there is a set H ∈ H′ that satisfies d(H) ≥ dal(G, k)/2. Thus, a polynomial time 2-approximation algorithm for DalkS can be obtained by computing H, padding some of the sets with arbitrary vertices to form H′, and returning the densest set in H′. The running time is dominated by the parametric flow algorithm.
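A sketch of this padding-and-selection step is given below, reusing the density helper sketched earlier. It assumes a hypothetical helper max_excess_set(G, alpha) that returns a vertex set maximizing W(S) − α|S| (the quantity (1)), which can be computed with a single maximum flow computation for a fixed α; the paper obtains the maximizers for all α at once from one parametric flow computation, whereas the sketch simply scans a supplied list of α values:

```python
def dalks_two_approx(G, k, alphas, max_excess_set):
    """Padding-based 2-approximation for DalkS (sketch). `max_excess_set(G, alpha)`
    is a hypothetical helper returning a vertex set S maximizing W(S) - alpha*|S|;
    `alphas` is the list of parameter values to try."""
    all_vertices = list(G)
    best, best_density = None, -1.0
    for alpha in alphas:
        S = set(max_excess_set(G, alpha))
        for v in all_vertices:              # pad with arbitrary vertices up to size k
            if len(S) >= k:
                break
            S.add(v)
        if len(S) >= k:
            d = density(G, S)               # density helper from the earlier sketch
            if d > best_density:
                best, best_density = S, d
    return best
```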
Let H * be an induced subgraph of G with at least k vertices that has density d(H * ) = dal(G, k). Let α = dal(G, k)/2, and let H be the set from H that maximizes (1) for this value of α. In particular,
|H|(d(H) − α) ≥ |H*|(d(H*) − α) ≥ |H*| d(H*)/2.   (2)
This implies that H satisfies d(H) ≥ α = dal(G, k)/2. If |H| ≥ k, then we are done. If |H| < k, then consider the set H ′ of size exactly k obtained by padding H with arbitrary vertices. We will show that d(H ′ ) ≥ dal(G, k)/2, which will complete the proof. First, notice that (2) implies a lower bound on the size of H.
|H| ≥ |H*| · d(H*)/(2 d(H)) = |H*| · dal(G, k)/(2 d(H)).
We can then bound the density of the padded set H ′ .
d(H′) ≥ d(H) · |H|/k ≥ d(H) · (|H*|/k) · (dal(G, k)/(2 d(H))) = (dal(G, k)/2) · (|H*|/k) ≥ dal(G, k)/2.
The densest at-most-k-subgraph problem
In this section, we show that the densest at-most-k-subgraph problem is nearly as hard to approximate as the densest k-subgraph problem. We will show that if there exists a polynomial time algorithm that approximates DamkS in a weak sense, returning a set of at most βk vertices with density at least 1/γ times the density of the densest subgraph on at most k vertices, then there exists a polynomial time approximation algorithm for DkS with ratio 4(γ² + γβ). As an immediate consequence, a polynomial time γ-approximation algorithm for the densest at-most-k-subgraph problem would imply a polynomial time 4(γ² + γ)-approximation algorithm for the densest k-subgraph problem.
Theorem 4. If there exists a polynomial time (β, γ)-algorithm for DamkS, returning a set of at most βk vertices with density at least dam(G, k)/γ, then there exists a polynomial time 4(γ² + γβ)-approximation algorithm for DkS.
Proof. Assume there exists a polynomial time algorithm A(G, k) that is a (β, γ)-algorithm for DamkS. We will now describe a polynomial time approximation algorithm for DkS with ratio 4(γ² + γβ).
Given as input a graph G and integer k, let H 1 = G, let i = 1, and repeat the following procedure. Let H i = A(G i , k) be an induced subgraph of G i with at most βk vertices and with density at least dam(G i , k)/γ. Remove all the edges in H i from G i to form a new graph G i+1 on the same vertex set as G. Repeat this procedure until all edges have been removed from G.
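The procedure can be sketched as follows, with damks_oracle standing for the assumed (β, γ)-algorithm A(G, k), the density helper sketched earlier, and a small size-adjustment routine defined here; the sketch keeps every prefix union as a candidate rather than detecting the stopping time T used in the analysis:

```python
import copy

def dks_from_damks(G, k, damks_oracle):
    """Reduction sketch: `damks_oracle(G, k)` is a hypothetical (beta, gamma)-algorithm
    for DamkS, assumed to return a positive-density set while edges remain."""
    Gi = copy.deepcopy(G)                  # working copy; edges get removed in place
    union, candidates = set(), []
    while any(Gi[v] for v in Gi):          # repeat until all edges have been removed
        Hi = set(damks_oracle(Gi, k))
        union |= Hi
        for u in Hi:                       # delete the edges induced by Hi from Gi
            for v in list(Gi[u]):
                if v in Hi:
                    del Gi[u][v]
                    del Gi[v][u]
        candidates.append(set(union))      # the prefix unions U_1, U_2, ...
    best, best_density = None, -1.0
    for U in candidates:
        S = adjust_to_size_k(G, U, k)
        d = density(G, S)                  # density helper from the earlier sketch
        if d > best_density:
            best, best_density = S, d
    return best

def adjust_to_size_k(G, U, k):
    """Pad with arbitrary vertices if |U| < k; greedily peel minimum-degree vertices if |U| > k."""
    S = set(U)
    for v in G:
        if len(S) >= k:
            break
        S.add(v)
    while len(S) > k:
        v = min(S, key=lambda x: sum(w for u, w in G[x].items() if u in S))
        S.remove(v)
    return S
```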
Let n i be the number of vertices in H i , let W i = W (H i ), and let d i = d(H i ) = W i /n i . Let H * be an induced subgraph of G with exactly k vertices and density d * = dex(G, k). Notice that if (W 1 + · · · + W t−1 ) ≤ W (H * )/2, then d t ≥ d * /2γ. This is because d t is at least 1/γ times the density of the induced subgraph of G t on the vertex set of H * , which is at least
[W(H*) − (W_1 + · · · + W_{t−1})]/k ≥ W(H*)/(2k) = d*/2.
Let T be the smallest integer such that (W 1 +· · ·+W T ) ≥ W (H * )/2, and let U T be the induced subgraph on the union of the vertex sets of H 1 , . . . , H T . The total weight W (U T ) is at least W (H * )/2. The density of U T is
d(U_T) = W(U_T)/|U_T| ≥ (W_1 + · · · + W_T)/(n_1 + · · · + n_T) ≥ min_{1≤t≤T} W_t/n_t ≥ d*/(2γ).
To bound the number of vertices in U T , notice that (n 1 + · · · + n T −1 ) ≤ γk, because
d*k/2 = W(H*)/2 ≥ Σ_{i=1}^{T−1} W_i = Σ_{i=1}^{T−1} n_i d_i ≥ (d*/(2γ)) Σ_{i=1}^{T−1} n_i.
Since n T is at most βk, we have |U T | ≤ (n 1 + · · · + n T ) ≤ (γ + β)k.
There are now two cases to consider. If |U T | ≤ k, we add vertices to U T arbitrarily to form a set U ′ T of size exactly k. The set U ′ T is more than dense enough to prove the theorem,
d(U′_T) ≥ (W(H*)/2)/k = d*/2.
If |U T | > k, then we employ a simple greedy procedure to reduce the number of vertices. We begin with the induced subgraph U T , greedily remove the vertex with smallest degree to obtain a smaller subgraph, and repeat until exactly k vertices remain. The resulting subgraph U ′′ T has density at least d(U T )(k/2|U T |) by the method of conditional expectations (see also [7]). The set U ′′ T is sufficiently dense,
d(U″_T) ≥ d(U_T) · k/(2|U_T|) ≥ (d*/(2γ)) · k/(2(γ + β)k) = d*/(4(γ² + γβ)).
Remark 2. The argument from Theorem 4 proves a slightly more general statement: if there is a polynomial time algorithm for DamkS that is a (β, γ)algorithm for certain values of k, then there is a polynomial time algorithm for DkS that is a 4(γ 2 + γβ)-approximation algorithm for those same values of k.
We remark that the densest at-most-k-subgraph problem is easily seen to be NP-complete, since a subgraph of size at most k has density at least (k − 1)/2 if and only if it is a k-clique. As mentioned previously, Feige and Seltser [8] proved that the densest k-subgraph problem remains NP-complete when restricted to graphs with maximum degree 3, and their proof shows that the same statement is true for the densest at-most-k-subgraph problem.
Conclusion
In this section, we discuss the possibility of improving the approximation ratio for DkS via an approximation algorithm for DamkS. One possible approach is to develop a local algorithm for DamkS, analogous to the recently developed local algorithms for graph partitioning [15,1]. For any partition separating k vertices, these algorithms can produce a partition separating O(k) vertices that is nearly as good (in terms of conductance).
| 3,144 |
cs0702036
|
2108134414
|
In this paper we consider the specification and verification of infinite-state systems using temporal logic. In particular, we describe parameterised systems using a new variety of first-order temporal logic that is both powerful enough for this form of specification and tractable enough for practical deductive verification. Importantly, the power of the temporal language allows us to describe (and verify) asynchronous systems, communication delays and more complex properties such as liveness and fairness properties. These aspects appear difficult for many other approaches to infinite-state verification.
|
Constraint-based verification using counting abstractions @cite_16 @cite_28 @cite_23 provides complete procedures for checking safety properties of broadcast protocols. However, such approaches have theoretically non-primitive recursive upper bounds for their decision procedures (although they work well on small, interesting examples) --- in our case the upper bounds are definitely primitive recursive;
|
{
"abstract": [
"",
"We propose a new method for the verification of parameterized cache coherence protocols. Cache coherence protocols are used to maintain data consistency in multiprocessor systems equipped with local fast caches. In our approach we use arithmetic constraints to model possibly infinite sets of global states of a multiprocessor system with many identical caches. In preliminary experiments using symbolic model checkers for infinite-state systems based on real arithmetics (HyTech [HHW97] and DMC [DP99]) we have automatically verified safety properties for parameterized versions of widely implemented write-invalidate and write-update cache coherence policies like the Mesi, Berkeley, Illinois, Firefly and Dragon protocols [Han93]. With this application, we show that symbolic model checking tools originally designed for hybrid and concurrent systems can be applied successfully to a new class of infinite-state systems of practical interest.",
"We analyze the model-checking problems for safety and liveness properties in parameterized broadcast protocols. We show that the procedure suggested previously for safety properties may not terminate, whereas termination is guaranteed for the procedure based on upward closed sets. We show that the model-checking problem for liveness properties is undecidable. In fact, even the problem of deciding if a broadcast protocol may exhibit an infinite behavior is undecidable."
],
"cite_N": [
"@cite_28",
"@cite_16",
"@cite_23"
],
"mid": [
"",
"2504057811",
"2129073086"
]
}
|
Efficient First-Order Temporal Logic for Infinite-State Systems
|
In describing such automata, both automata-theoretic and logical approaches may be used. While temporal logic [16] provides a clear, concise and intuitive description of the system, automata-theoretic techniques such as model checking [6] have been shown to be more useful in practice. Recently, however, a propositional, linear-time temporal logic with improved deductive properties has been introduced [13,14], providing the possibility of practical deductive verification in the future. The essence of this approach is to provide an XOR constraint between key propositions. These constraints state that exactly one proposition from an XOR set can be true at any moment in time. Thus, the automaton above can be described by the following clauses which are implicitly in the scope of a '□' ('always in the future') operator.
1. start ⇒ s_t
2. s_t ⇒ h (s_t ∨ s_a)
3. s_b ⇒ h s_t
4. s_a ⇒ h s_w
5. s_w ⇒ h (s_w ∨ s_b)
Here ' h ' is a temporal operator denoting 'at the next moment' and 'start' is a temporal operator which holds only at the initial moment in time. The inherent assumption that at any moment in time exactly one of s a , s b , s t or s w holds, is denoted by the following.
(s a ⊕ s b ⊕ s t ⊕ s w )
Since the complexity of the decision problem (regarding s_a, s_b, etc.) is polynomial, the properties of any finite collection of such automata can be tractably verified using this propositional XOR temporal logic.
However, one might argue that this deductive approach, although elegant and concise, is still no better than a model checking approach, since it targets just finite collections of (finite) state machines. Thus, this naturally leads to the question of whether the XOR temporal approach can be extended to first-order temporal logics and, if so, whether a form of tractability still applies. In such an approach, we can consider infinite numbers of finite-state automata (initially, all of the same structure). Previously, we have shown that FOTL can be used to elegantly specify such a system, simply by assuming the argument to each predicate represents a particular automaton [19]. Thus, in the following s a (X) is true if automaton X is in state s a :
1. start ⇒ ∃x. s_t(x)
2. ∀x. (s_t(x) ⇒ h (s_t(x) ∨ s_a(x)))
3. ∀x. (s_b(x) ⇒ h s_t(x))
4. ∀x. (s_a(x) ⇒ h s_w(x))
5. ∀x. (s_w(x) ⇒ h (s_w(x) ∨ s_b(x)))
Thus, FOTL can be used to specify and verify broadcast protocols between synchronous components [17]. In this paper we define a logic, FOTLX, which allows us to not only to specify and verify systems of the above form, but also to specify and verify more sophisticated asynchronous systems, and to carry out verification with a reasonable complexity.
FOTLX
First-Order Temporal Logic
First-Order (discrete, linear time) Temporal Logic, FOTL, is an extension of classical first-order logic with operators that deal with a discrete and linear model of time (isomorphic to the natural numbers, ℕ).
Syntax. The symbols used in FOTL are • Predicate symbols: P 0 , P 1 , . . . each of which is of a fixed arity (null-ary predicate symbols are propositions);
• Variables: x 0 , x 1 , . . .;
• Constants: c 0 , c 1 , . . .;
• Boolean operators: ∧, ¬, ∨, ⇒, ≡, true ('true'), false ('false');
• First-order Quantifiers: ∀ ('for all') and ∃ ('there exists'); and
• Temporal operators: □ ('always in the future'), ♦ ('sometime in the future'), h ('at the next moment'), U (until), W (weak until), and start (at the first moment in time).
Although the language contains constants, neither equality nor function symbols are allowed.
The set of well-formed FOTL-formulae is defined in the standard way [24,7]:
• Booleans true and false are atomic FOTL-formulae;
• if P is an n-ary predicate symbol and t i , 1 ≤ i ≤ n, are variables or constants, then P (t 1 , . . . , t n ) is an atomic FOTL-formula;
• if φ and ψ are FOTL-formulae, so are ¬φ, φ ∧ ψ, φ ∨ ψ, φ ⇒ ψ, and φ ≡ ψ;
• if φ is an FOTL-formula and x is a variable, then ∀xφ and ∃xφ are FOTL-formulae;
• if φ and ψ are FOTL-formulae, then so are φ, ♦φ, h φ, φ U ψ, φ W ψ, and start.
A literal is an atomic FOTL-formula or its negation.
More formally, for every moment of time n ≥ 0 there is a corresponding first-order structure, M_n = ⟨D_n, I_n⟩, where every D_n is a non-empty set such that whenever n < m, D_n ⊆ D_m, and I_n is an interpretation of predicate and constant symbols over D_n. We require that the interpretation of constants is rigid; thus, for every constant c and all moments of time i, j ≥ 0, we have I_i(c) = I_j(c).
A (variable) assignment a is a function from the set of individual variables to ∪_{n∈ℕ} D_n. We denote the set of all assignments by V. The set of variable assignments V_n corresponding to M_n is a subset of the set of all assignments, V_n = {a ∈ V | a(x) ∈ D_n for every variable x}; clearly, V_n ⊆ V_m if n < m.
The truth relation M_n |=^a φ in a structure M is defined inductively on the construction of φ, only for those assignments a that satisfy the condition a ∈ V_n, by the following clauses (Fig. 1 in the original):
• M_n |=^a true, and it is not the case that M_n |=^a false;
• M_n |=^a start iff n = 0;
• M_n |=^a P(t_1, ..., t_m) iff ⟨I_n^a(t_1), ..., I_n^a(t_m)⟩ ∈ I_n(P), where I_n^a(t_i) = I_n(t_i) if t_i is a constant, and I_n^a(t_i) = a(t_i) if t_i is a variable;
• M_n |=^a ¬φ iff M_n |=^a φ does not hold;
• M_n |=^a φ ∧ ψ iff M_n |=^a φ and M_n |=^a ψ;
• M_n |=^a φ ∨ ψ iff M_n |=^a φ or M_n |=^a ψ;
• M_n |=^a φ ⇒ ψ iff M_n |=^a (¬φ ∨ ψ), and M_n |=^a φ ≡ ψ iff M_n |=^a ((φ ⇒ ψ) ∧ (ψ ⇒ φ));
• M_n |=^a ∀xφ iff M_n |=^b φ for every assignment b that may differ from a only in x and such that b(x) ∈ D_n;
• M_n |=^a ∃xφ iff M_n |=^b φ for some assignment b that may differ from a only in x and such that b(x) ∈ D_n;
• M_n |=^a h φ iff M_{n+1} |=^a φ;
• M_n |=^a ♦φ iff there exists m ≥ n such that M_m |=^a φ;
• M_n |=^a □φ iff for all m ≥ n, M_m |=^a φ;
• M_n |=^a (φ U ψ) iff there exists m ≥ n such that M_m |=^a ψ and, for all i ∈ ℕ, n ≤ i < m implies M_i |=^a φ;
• M_n |=^a (φ W ψ) iff M_n |=^a (φ U ψ) or M_n |=^a □φ.
M is a model for a formula φ (or φ is true in M) if, and only if, there exists an assignment a in D_0 such that M_0 |=^a φ. A formula is satisfiable if, and only if, it has a model. A formula is valid if, and only if, it is true in any temporal structure M under any assignment a in D_0.
The models introduced above are known as models with expanding domains since D_n ⊆ D_{n+1}. Another important class of models consists of models with constant domains, in which the class of first-order temporal structures where FOTL formulae are interpreted is restricted to structures M = ⟨D_n, I_n⟩, n ∈ ℕ, such that D_i = D_j for all i, j ∈ ℕ. The notions of truth and validity are defined similarly to the expanding domain case. It is known [32] that satisfiability over expanding domains can be reduced to satisfiability over constant domains with only a polynomial increase in the size of formulae.
Monodicity and Monadicity
The set of valid formulae of FOTL is not recursively enumerable. Furthermore, it is known that even "small" fragments of FOTL, such as the two-variable monadic fragment (where all predicates are unary), are not recursively enumerable [30,24]. However, the set of valid monodic formulae is known to be finitely axiomatisable [33].
Definition 1 An FOTL-formula φ is called monodic if, and only if, any subformula of the form T ψ, where T is one of h, □, ♦ (or ψ_1 T ψ_2, where T is one of U, W), contains at most one free variable.
We note that the addition of either equality or function symbols to the monodic fragment generally leads to the loss of recursive enumerability [33,8,22]. Thus, monodic FOTL is expressive, yet even small extensions lead to serious problems. Further, even with its recursive enumerability, monodic FOTL is generally undecidable. To recover decidability, the easiest route is to restrict the first order part to some decidable fragment of first-order logic, such as the guarded, two-variable or monadic fragments. We here choose the latter, since monadic predicates fit well with our intended application to parameterised systems. Recall that monadicity requires that all predicates have arity of at most '1'. Thus, we use monadic, monodic FOTL [7].
A practical approach to proving monodic temporal formulae is to use fine-grained temporal resolution [26], which has been implemented in the theorem prover TeMP [25]. In the past, TeMP has been successfully applied to problems from several domains [21], in particular, to examples specified in the temporal logics of knowledge (the fusion of propositional linear-time temporal logic with multi-modal S5) [15,11,13]. From this work it is clear that monodic first-order temporal logic is an important tool for specifying complex systems. However, it is also clear that the complexity, even of monadic monodic first-order temporal logic, makes this approach difficult to use for larger applications [21,19].
XOR Restrictions
An additional restriction we make to the above logic involves implicit XOR constraints over predicates. Such restrictions were introduced into temporal logics in [13], where the correspondence with Büchi automata was described, and generalised in [14]. In both cases, the decision problem is of much better (generally, polynomial) complexity than that for the standard, unconstrained, logic. However, in these papers only propositional temporal logic was considered. We now add such XOR constraints to FOTL, giving FOTLX.
The set of predicate symbols Π = {P_0, P_1, ...} is now partitioned into a set of XOR-sets, X_1, X_2, ..., X_n, with one non-XOR set N such that
1. the X_i are pairwise disjoint;
2. N is disjoint from every X_i;
3. Π = X_1 ∪ ... ∪ X_n ∪ N; and
4. for each X_i, exactly one predicate within X_i is satisfied (for any element of the domain) at any moment in time.
Example 1 Consider the formula
∀x. ((P 1 (x) ∨ P 2 (x)) ∧ (P 4 (x) ∨ P 7 (x) ∨ P 8 (x)))
where {P 1 , P 2 } ⊆ X 1 and {P 4 , P 7 , P 8 } ⊆ X 2 . The above formula states that, for any element of the domain, a, then one of P 1 (a) or P 2 (a) must be satisfied and one of P 4 (a), P 7 (a) or P 8 (a) must be satisfied.
Normal Form
To simplify our description, we will define a normal form into which FOTLX formulae can be translated. In the following:
• ∧X^−_{ij}(x) denotes a conjunction of negated XOR predicates from the set X_i;
• ∨X^+_{ij}(x) denotes a disjunction of (positive) XOR predicates from the set X_i;
• ∧N_i(x) denotes a conjunction of non-XOR literals;
• ∨N_i(x) denotes a disjunction of non-XOR literals.
A step clause is defined as follows:
∧X^−_{1j}(x) ∧ ... ∧ ∧X^−_{nj}(x) ∧ ∧N_j(x) ⇒ h (∨X^+_{1j}(x) ∨ ... ∨ ∨X^+_{nj}(x) ∨ ∨N_j(x))
A monodic temporal problem in Divided Separated Normal Form (DSNF) [7] is a quadruple U, I, S, E , where:
1. the universal part, U, is a finite set of arbitrary closed first-order formulae;
2. the initial part, I, is, again, a finite set of arbitrary closed first-order formulae;
3. the step part, S, is a finite set of step clauses; and
4. the eventuality part, E, is a finite set of eventuality clauses of the form ♦L(x), where L(x) is a unary literal.
In what follows, we will not distinguish between a finite set of formulae X and the conjunction X of formulae within the set. With each monodic temporal problem, we associate the formula
I ∧ □U ∧ □∀xS ∧ □∀xE.
Now, when we talk about particular properties of a temporal problem (e.g., satisfiability, validity, logical consequences, etc.) we mean properties of the associated formula. Every monodic FOTLX formula can be translated to the normal form in a satisfiability-preserving way using a renaming and unwinding technique which substitutes non-atomic subformulae and replaces temporal operators by their fixed-point definitions as described, for example, in [18]. A step in this transformation is the following: we recursively rename each innermost open subformula ξ(x), whose main connective is a temporal operator, by P_ξ(x), where P_ξ is a new unary predicate, and rename each innermost closed subformula ζ, whose main connective is a temporal operator, by p_ζ, where p_ζ is a new propositional variable. While renaming introduces new, non-XOR predicates and propositions, practical problems stemming from verification are nearly in the normal form; see Section 3.
Complexity
First-order temporal logics are notorious for being of a high complexity. Even decidable sub-fragments of monodic first-order temporal logic can be too complex for practical use. For example, satisfiability of monodic monadic FOTL logic is known to be EXPSPACE-complete [23]. However, imposing XOR restrictions we obtain better complexity bounds.
Theorem 1 Satisfiability of monodic monadic FOTLX formulae (in the normal form) can be decided in 2^{O(N_1 · N_2 · ... · N_n · 2^{N_a})} time, where N_1, ..., N_n are the cardinalities of the sets of XOR predicates, and N_a is the cardinality of the set of non-XOR predicates.
Before we sketch the proof of this result, we show how the XOR restrictions influence the complexity of the satisfiability problem for monadic first-order (non-temporal) logic.
Lemma 2 Satisfiability of monadic first-order formulae can be decided in NTime(O(n · N_1 · N_2 · ... · N_n · 2^{N_a})), where n is the length of the formula, and N_1, ..., N_n, N_a are as in Theorem 1.
Proof As in [4], Proposition 6.2.9, the non-deterministic decision procedure first guesses a structure and then verifies that the structure is a model for the given formula. It was shown ([4], Proposition 6.2.1, Exercise 6.2.3) that if a monadic first-order formula has a model, it also has a model whose domain is the set of all predicate colours. A predicate colour, γ, is a set of unary literals such that for every predicate P(x) from the set of all predicates X_1 ∪ ... ∪ X_n ∪ N, either P(x) or ¬P(x) belongs to γ. Notice that under the conditions of the lemma, there are at most N_1 · N_2 · ... · N_n · 2^{N_a} different predicate colours. Hence, the structure to guess is of O(N_1 · N_2 · ... · N_n · 2^{N_a}) size.
It should be clear that one can evaluate a monadic formula of size n in a structure of size O(N_1 · N_2 · ... · N_n · 2^{N_a}) in deterministic O(n · N_1 · N_2 · ... · N_n · 2^{N_a}) time. Therefore, the overall complexity of the decision procedure is NTime(O(n · N_1 · N_2 · ... · N_n · 2^{N_a})).
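The counting behind this argument is easy to reproduce: a predicate colour selects exactly one positive predicate from each XOR set and an arbitrary subset of the non-XOR predicates. A small illustrative sketch (ours, with invented predicate names):

```python
from itertools import chain, combinations, product

def predicate_colours(xor_sets, non_xor):
    """Enumerate predicate colours, each represented by its set of positive predicates:
    exactly one predicate per XOR set, plus any subset of the non-XOR predicates."""
    non_xor_subsets = list(chain.from_iterable(
        combinations(non_xor, r) for r in range(len(non_xor) + 1)))
    for choice in product(*xor_sets):
        for positives in non_xor_subsets:
            yield set(choice) | set(positives)

# Illustrative sizes: N1 = N2 = 3 XOR predicates, Na = 2 non-XOR predicates.
xor_sets = [["P_q1", "P_q2", "P_q3"], ["A_idle", "A_send", "A_recv"]]
non_xor = ["Received", "m"]
assert sum(1 for _ in predicate_colours(xor_sets, non_xor)) == 3 * 3 * 2 ** 2
```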
Proof [of Theorem 1, Sketch] For simplicity of presentation, we assume the formula contains no propositions. Satisfiability of a monodic FOTL formula is equivalent to a property of the behaviour graph for the formula, checkable in time polynomial in the product of the number of different predicate colours and the size of the graph, see [7], Theorem 5.15. For unrestricted FOTL formulae, the size of the behaviour graph is double exponential in the number of predicates. We estimate now the size of the behaviour graph and time needed for its construction for FOTLX formulae.
Let Γ be a set of predicate colours and ρ be a map from the set of constants, const(P), to Γ. A couple ⟨Γ, ρ⟩ is called a colour scheme. Nodes of the behaviour graph are colour schemes. Clearly, there are no more than 2^{O(N_1 · N_2 · ... · N_n · 2^{N_a})} different colour schemes. However, not every colour scheme is a node of the behaviour graph: a colour scheme C is a node if, and only if, a monadic formula of first-order (non-temporal) logic, constructed from the given FOTLX formula and the colour scheme itself, is satisfiable (for details see [7]). A similar first-order monadic condition determines which nodes are connected with edges. It can be seen that the size of the formula is polynomial in both cases. By Lemma 2, satisfiability of monadic first-order formulae can be decided in deterministic 2^{O(N_1 · N_2 · ... · N_n · 2^{N_a})} time.
Overall, the behaviour graph, representing all possible models, for an FOTLX formula can be constructed in 2^{O(N_1 · N_2 · ... · N_n · 2^{N_a})} time.
Infinite-State Systems
In previous work, notably [17,9], a parameterised finite state machine based model, suitable for the specification and verification of protocols over arbitrary numbers of processes, was defined. Essentially, this uses a family of identical, and synchronously executing, finite state automata with a rudimentary form of communication: if one automaton makes a transition (an action) a, then it is required that all other automata simultaneously make a complementary transition (reaction) ā. In [19] we translated this automata model into monodic FOTL and used automated theorem proving in that logic to verify parameterised cache coherence protocols [10]. The model assumed not only synchronous behaviour of the communicating automata, but instantaneous broadcast.
Here we present a more general model suitable for specification of both synchronous and asynchronous systems (protocols) with (possibly) delayed broadcast and give its faithful translation into FOTLX. This not only exhibits the power of the logic but, with the improved complexity results of the previous section, provides a route towards the practical verification of temporal properties of infinite state systems.
Process Model
We begin with a description of both the asynchronous model, and the delayed broadcast approach.
Definition 2 (Protocol) A protocol, P is a tuple Q, I, Σ, τ , where
• Q is a finite set of states;
• I ⊆ Q is a set of initial states;
• Σ = Σ_L ∪ Σ_M ∪ Σ̄_M, where Σ_L is a finite set of local (internal) actions, Σ_M is a finite set of message-sending (broadcast) actions, and Σ̄_M = {σ̄ | σ ∈ Σ_M} is the corresponding set of message-receiving actions (reactions);
• τ ⊆ Q × Σ × Q is a transition relation that satisfies the following property: ∀σ ∈ Σ_M. ∀q ∈ Q. ∃q′ ∈ Q. ⟨q, σ̄, q′⟩ ∈ τ, i.e., "readiness to receive a message in any state".
Further, we define a notion of global machine, which is a set of n finite automata, where n is a parameter, each following the protocol and able to communicate with others via (possibly delayed) broadcast. To model asynchrony, we introduce a special automaton action, idle ∈ Σ, meaning the automaton is not active and so its state does not change. At any moment an arbitrary group of automata may be idle and all non-idle automata perform their actions in accordance with the transition function τ ; different automata may perform different actions.
Definition 3 (Asynchronous Global Machine) Given a protocol P = ⟨Q, I, Σ, τ⟩, the global machine M_G of dimension n is the tuple ⟨Q_MG, I_MG, τ_MG, E⟩, where
• Q_MG = Q^n;
• I_MG = I^n;
• τ_MG ⊆ Q_MG × (Σ ∪ {idle})^n × Q_MG is defined by ⟨⟨s_1, ..., s_n⟩, ⟨σ_1, ..., σ_n⟩, ⟨s′_1, ..., s′_n⟩⟩ ∈ τ_MG iff ∀1 ≤ i ≤ n. [(σ_i ≠ idle ⇒ ⟨s_i, σ_i, s′_i⟩ ∈ τ) ∧ (σ_i = idle ⇒ s_i = s′_i)];
• E = 2^{Σ_M} is a communication environment, that is, a set of possible sets of messages in transition.
An element G ∈ Q M G × (Σ ∪ {idle}) n × E is said to be a global configuration of the machine. A run of a global machine M G is a possibly infinite sequence s 1 , σ 1 , E 1 . . . s i , σ i , E i . . . of global configurations of M G satisfying the properties (1)- (6) listed below. In this formulation we assume s i = s i 1 , . . . , s i n and σ i = σ i 1 , . . . , σ i n .
1. s 1 ∈ I n ("initially all automata are in initial states");
2. E 1 = ∅ ("initially there are no messages in transition");
3. ∀i. ⟨s^i, σ^i, s^{i+1}⟩ ∈ τ_MG ("an arbitrary part of the automata can fire");
4. ∀a ∈ Σ_M. ∀i. ∀j. ((σ^i_j = a) ⇒ ∀k. ∃l ≥ i. (σ^l_k = ā)) ("delivery to all participants is guaranteed");
5. ∀a ∈ Σ_M. ∀i. ∀j. [(σ^i_j = ā) ⇒ (a ∈ E^i) ∨ ∃k. (σ^i_k = a)] ("one can receive only messages kept by the environment, or sent at the same moment of time").
In order to formulate further requirements we introduce the following notation:
Sent_i = {a ∈ Σ_M | ∃j. σ^i_j = a}
Delivered_k = {a ∈ Σ_M | ∃i ≤ k. (a ∈ Sent_i) ∧ (∀l. (i < l < k) → a ∉ Sent_l) ∧ (∀j. ∃l. (i ≤ l ≤ k) ∧ (σ^l_j = ā))}
Then, the last requirement the run should satisfy is
6. ∀i. E i+1 = (E i ∪ Sent i ) − Delivered i
Example: Asynchronous Floodset Protocol. We illustrate the use of the above model by presenting the specification of an asynchronous FloodSet protocol in our model. This is a variant of the FloodSet algorithm with alternative decision rule (in terms of [28], p.105) designed for solution of the Consensus problem. The setting is as follows. There are n processes, each having an input bit and an output bit. The processes work asynchronously, run the same algorithm and use broadcast for communication. The broadcasted messages are guaranteed to be delivered, though possibly with arbitrary delays. (The process is described graphically in Fig. 2.) The goal of the algorithm is to eventually reach an agreement, i.e. to produce an output bit, which would be the same for all processes. It is required also that if all processes have the same input bit, that bit should be produced as an output bit.
The asynchronous FloodSet protocol we consider here is adapted from [28]. The main differences from the original protocol are:
• the original protocol was synchronous, while our variant is asynchronous; • the original protocol assumed instantaneous message delivery, while we allow arbitrary delays in delivery; and
• although the original protocol was designed to work in the presence of crash (or fail-stop) failures, we assume, for simplicity, that there are no failures.
Because of the absence of failures the protocol is very simple and unlike the original one does not require "retransmission" of any value. We will show later (in Section 3.3) how to include the case of crash failures in the specification (and verification). Thus, the asynchronous FloodSet protocol is defined, informally, as follows.
• At the first round of computations, every process broadcasts its input bit.
• At every round the (tentative) output bit is set to the minimum value ever seen so far.
The correctness criterion for this protocol is that, eventually, the output bits of all processes will be the same.
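To make the informal description concrete, the following failure-free simulation sketch (our own illustration, not part of the protocol definition) models delayed delivery with a bounded random delay and applies the 'minimum value seen so far' decision rule:

```python
import random

def floodset(input_bits, max_delay=5, horizon=50, seed=0):
    """Failure-free asynchronous FloodSet (illustrative): each process broadcasts its
    input bit once; every broadcast reaches every process within max_delay steps;
    the tentative output is the minimum value seen so far."""
    rng = random.Random(seed)
    n = len(input_bits)
    seen = [{input_bits[i]} for i in range(n)]          # values seen by each process
    pending = [[] for _ in range(n)]                    # (deliver_at, value) per receiver
    for i in range(n):                                  # first step: broadcast the input bit
        for j in range(n):
            pending[j].append((rng.randint(1, max_delay), input_bits[i]))
    for t in range(1, horizon + 1):                     # deliver messages as time passes
        for j in range(n):
            seen[j].update(v for (d, v) in pending[j] if d <= t)
            pending[j] = [(d, v) for (d, v) in pending[j] if d > t]
    return [min(s) for s in seen]                       # the (tentative) output bits

# With guaranteed delivery, every process eventually outputs the minimum input bit.
assert floodset([1, 0, 1, 1]) == [0, 0, 0, 0]
```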
Temporal Translation
Given a protocol P = Q, I, Σ, τ , we define its translation to FOTLX as follows.
For each q ∈ Q, introduce a monadic predicate symbol P q and for each σ ∈ Σ ∪ {idle} introduce a monadic predicate symbol A σ . For each σ ∈ Σ M we introduce also a propositional symbol m σ . Intuitively, elements of the domain in the temporal representation will represent exemplars of finite automata, and the formula P q (x) is intended to represent "automaton x is in state q". The formula A σ (x) is going to represent "automaton x performs action σ". Proposition m σ will denote the fact "message σ is in transition" (i.e. it has been sent but not all participants have received it.)
Because of intended meaning we define two XOR-sets: X 1 = {P q | q ∈ Q} and X 2 = {A σ | σ ∈ Σ ∪ {idle}}. All other predicates belong to the set of non-XOR predicates.
I. Each automaton either performs one of the actions available in its state, or is idle:
[∀x. P_q(x) → A_σ1(x) ∨ ... ∨ A_σk(x) ∨ A_idle(x)], where {σ_1, ..., σ_k} = {σ ∈ Σ | ∃r. ⟨q, σ, r⟩ ∈ τ}.
II. Action effects (non-deterministic actions):
[∀x. P_q(x) ∧ A_σ(x) → h ∨_{⟨q,σ,r⟩∈τ} P_r(x)], for all q ∈ Q and σ ∈ Σ.
III. Effect of being idle:
[∀x. P_q(x) ∧ A_idle(x) → h P_q(x)], for all q ∈ Q.
IV. Initially there are no messages in transition and all automata are in initial states:
start → ¬m_σ, for all σ ∈ Σ_M, and start → ∀x. ∨_{q∈I} P_q(x).
V. All messages are eventually received (Guarantee of Delivery):
[∃y. A_σ(y) → ∀x. ♦A_σ̄(x)], for all σ ∈ Σ_M.
VI. Only messages kept in the environment (are in transition), or sent at the same moment of time can be received:
[∀x. A_σ̄(x) → m_σ ∨ ∃y. A_σ(y)], for all σ ∈ Σ_M.
VII.
Finally, for all σ ∈ Σ_M, we have the conjunction of the following formulae, where, in order to express the temporal translation of requirement (6) above (on the dynamics of environment updates), we introduce a unary predicate symbol Received_σ for every σ ∈ Σ_M:
1. start → ∀x. ¬Received_σ(x)
2. [∀x. (A_σ̄(x) ∧ ¬∀y. Received_σ(y)) → h Received_σ(x)]
3. [∀x. (Received_σ(x) ∧ ¬∀y. Received_σ(y)) → h Received_σ(x)]
4. [∀x. (¬(A_σ̄(x) ∨ Received_σ(x)) ∧ ¬∀y. Received_σ(y)) → h ¬Received_σ(x)]
5. [(∀x. Received_σ(x)) → h ¬m_σ]
6. [((∃x. A_σ(x)) ∧ ¬∀y. Received_σ(y)) → h m_σ]
7. [((¬∃x. A_σ(x)) ∧ ¬∀y. Received_σ(y)) → (m_σ ↔ h m_σ)]
We define the temporal translation of P, called T_P, as the conjunction of the formulae above (items I–VII; Fig. 3 in the original).
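To illustrate how mechanical the translation is, the sketch below emits axioms I–IV as plain strings for a protocol given as explicit finite sets; the textual rendering of the operators and the function signature are our own choices:

```python
def translate_protocol(Q, I, Sigma_M, tau):
    """Emit (as strings) axioms I-IV of the temporal translation sketched above.
    tau is a set of triples (q, sigma, r); the leading 'always' operator is left
    implicit (as in the text) and the next-time operator is written 'h'."""
    axioms = []
    # I. In each state, perform one of the available actions or stay idle.
    for q in sorted(Q):
        avail = sorted({s for (p, s, r) in tau if p == q})
        disj = " | ".join("A_%s(x)" % s for s in avail + ["idle"])
        axioms.append("forall x. P_%s(x) -> %s" % (q, disj))
    # II. Action effects (non-deterministic actions).
    for q in sorted(Q):
        for s in sorted({s for (p, s, r) in tau if p == q}):
            succ = sorted({r for (p, s2, r) in tau if p == q and s2 == s})
            axioms.append("forall x. P_%s(x) & A_%s(x) -> h (%s)" %
                          (q, s, " | ".join("P_%s(x)" % r for r in succ)))
    # III. Idling does not change the state.
    for q in sorted(Q):
        axioms.append("forall x. P_%s(x) & A_idle(x) -> h P_%s(x)" % (q, q))
    # IV. Initially no messages are in transit and all automata are in initial states.
    for a in sorted(Sigma_M):
        axioms.append("start -> ~m_%s" % a)
    axioms.append("start -> forall x. " +
                  " | ".join("P_%s(x)" % q for q in sorted(I)))
    return axioms
```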
We now consider the correctness of the temporal translation. This translation of protocol P is faithful in the following sense.
M i |= P q 1 (c 1 ) ∧ . . . P qn (c n ), M i |= A σ 1 (c 1 ) ∧ . . . A σn (c n ) and E = {σ ∈ Σ m | M i |= m σ }
Dually, for any run of M G there is a temporal model of T P with a domain of size n representing this run.
Proof By routine inspection of the definitions of runs, temporal models and the translation.
Variations of the model
The above model allows various modifications and corresponding version of Proposition 1 still holds.
Determinism. The basic model allows non-deterministic actions. To specify the case of deterministic actions only, one should replace the "Action Effects" axiom in Fig. 3 by the following variant:
[∀x. P_q(x) ∧ A_σ(x) → h P_r(x)], for all ⟨q, σ, r⟩ ∈ τ.
Explicit bounds on delivery. In the basic model, no explicit bounds on delivery time are given. To introduce bounds, one has to replace the "Guarantee of Delivery" axiom with the following one:
[∃y. A_σ(y) → ∀x. (h A_σ̄(x) ∨ h^2 A_σ̄(x) ∨ ... ∨ h^n A_σ̄(x))]
for all σ ∈ Σ m and some n (representing the maximal delay).
Finite bounds on delivery. One may replace the "Guarantee of Delivery" axiom with the following one
[∃y. A_σ(y) → ♦∀x. Received_σ(x)], for all σ ∈ Σ_M.
Crashes. One may replace the "Guarantee of Delivery" axiom by an axiom stating that only the messages sent by normal (non-crashed) participants will be delivered to all participants. (See [19] for examples of such specifications in a FOTL context.)
Guarded actions. One can also extend the model with guarded actions, where an action can be performed depending on global conditions in global configurations.
Returning to the FloodSet protocol, one may consider a variation of the asynchronous protocol suitable for resolving the Consensus problem in the presence of crash failures. We can modify the above setting as follows. Now, processes may fail and, from that point onward, such processes send no further messages. Note, however, that the messages sent by a process in the moment of failure may be delivered to an arbitrary subset of the non-faulty processes.
The goal of the algorithm also has to be modified, so only non-faulty processes are required to eventually reach an agreement. Thus, the FloodSet protocol considered above is modified by adding the following rule:
• At every round (later than the first), a process broadcasts any value the first time it sees it.
Now, in order to specify this protocol, the variation of the model with crashes should be used. The above rule can be easily encoded in the model and we leave it as an exercise for the reader.
An interesting point here is that the protocol is actually correct under the assumption that only finitely many processes may fail. This assumption is automatically satisfied in our automata model, but not in its temporal translation. Instead, one may use the above Finite bounds on delivery axiom to prove the correctness of this variation of the algorithm.
Verification
Now we have all the ingredients to perform the verification of parameterised protocols. Given a protocol P, we can translate it into a temporal formula T P . For the temporal representation, χ of a required correctness condition, we then check whether T P → χ is valid temporal formula. If it is valid, then the protocol is correct for all possible values of the parameter (sizes).
Correctness conditions can, of course, be described using any legal FOTLX formula. For example, for the above FloodSet protocol(s) we have a liveness condition to verify:
♦(∀x. o_0(x) ∨ ∀x. o_1(x)), or, alternatively, ♦((∀x. Non-faulty(x) → o_0(x)) ∨ (∀x. Non-faulty(x) → o_1(x)))
in the case of a protocol working in the presence of processor crashes. While space precludes describing many further conditions, we just note that, in [19], we have demonstrated how this approach can be used to verify safety properties, i.e. with χ = □φ. Since we have the power of FOTLX, but with decidability results, we can also automatically verify fairness formulae of the form χ = □♦φ.
Concluding Remarks
In the propositional case, the incorporation of XOR constraints within temporal logics has been shown to be advantageous, not only because of the reduced complexity of the decision procedure (essentially, polynomial rather than exponential; [14]), but also because of the strong fit between the scenarios to be modelled (for example, finite-state verification) and the XOR logic [13]). The XOR constraints essentially allow us to select a set of names/propositions that must occur exclusively. In the case of verification for finite state automata, we typically consider the automaton states, or the input symbols, as being represented by such sets. Modelling a scenario thus becomes a problem of engineering suitable (combinations of) XOR sets.
In this paper, we have developed an XOR version of FOTL, providing: its syntax and semantics; conditions for decidability; and detailed complexity of the decision procedure. As well as being an extension and combination of the work reported in both [7] and [14], this work forms the basis for tractable temporal reasoning over infinite state problems. In order to motivate this further, we considered a general model concerning the verification of infinite numbers of identical processes. We provide an extension of the work in [19] and [1,2], tackling liveness properties of infinite-state systems, verification of asynchronous infinite-state systems, and varieties of communication within infinite-state systems. In particular, we are able to capture some of the more complex aspects of asynchrony and communication, together with the verification of more sophisticated liveness and fairness properties.
The work in [19] on basic temporal specifications such as the above has indeed shown that deductive verification can here be attempted but is expensive; the incorporation of XOR provides significant improvements in complexity.
Future Work
Future work involves exploring further the framework described in this paper, in particular the development of an implementation to prove properties of protocols in practice. Further, we would like to see if we can extend the range of systems we can tackle beyond the monodic fragment. We also note that some of the variations we might desire to include in Section 3.3 can lead to undecidable fragments. However, for some of these variations, we have correct although (inevitably) incomplete methods, see [19]. We wish to explore these boundaries further.
| 5,984 |
cs0701046
|
2952231607
|
In a wireless network, mobile nodes (MNs) repeatedly perform tasks such as layer 2 (L2) handoff, layer 3 (L3) handoff and authentication. These tasks are critical, particularly for real-time applications such as VoIP. We propose a novel approach, namely Cooperative Roaming (CR), in which MNs can collaborate with each other and share useful information about the network in which they move. We show how we can achieve seamless L2 and L3 handoffs regardless of the authentication mechanism used and without any changes to either the infrastructure or the protocol. In particular, we provide a working implementation of CR and show how, with CR, MNs can achieve a total L2+L3 handoff time of less than 16 ms in an open network and of about 21 ms in an IEEE 802.11i network. We consider behaviors typical of IEEE 802.11 networks, although many of the concepts and problems addressed here apply to any kind of mobile network.
|
More recently, cooperative approaches have been proposed in the network community. The authors of @cite_2 show how cooperation amongst MNs can be beneficial for all the MNs in the network in terms of bit-rate, coverage and throughput. Each MN builds a table in which possible helpers for that MN are listed. If an MN has a poor link with the AP and its bit-rate is low, it sends packets to the helper, who relays them to the AP. The advantage of doing this is that the links from the MN to the helper and from the helper to the AP are high bit-rate links. In this way the MN can use two high bit-rate links via the helper instead of the low bit-rate one directly to the AP.
|
{
"abstract": [
"In this paper, a novel idea of user cooperation in wireless networks has been exploited to improve the performance of the IEEE 802.11 medium access control (MAC) protocol. The new MAC protocol leverages the multi-rate capability of IEEE 802.11b and allows the mobile stations (STA) far away from the access point (AP) to transmit at a higher rate by using an intermediate station as a relay. Two specific variations of the new MAC protocol, namely CoopMAC I and CoopMAC II, are introduced in the paper. Both are able to increase the throughput of the whole network and reduce the average packet delay. Moreover, CoopMAC II also maintains backward compatibility with the legacy 802.11 protocol. The performance improvement is further evaluated by analysis and extensive simulations."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2116820055"
]
}
|
Cooperation Between Stations in Wireless Networks
|
Enabling VoIP services in wireless networks presents many challenges, including QoS, terminal mobility and congestion control. In this paper we focus on IEEE 802.11 wireless networks and address issues introduced by terminal mobility.
In general, a handoff happens when an MN moves out of the range of one Access Point (AP) and enters the range of a new one. We have two possible scenarios: 1) If the old AP and the new AP belong to the same subnet, the MN's IP address does not have to change at the new AP. The MN performs a L2 handoff. 2) If the old AP and the new AP belong to different subnets, the MN has to go through the normal L2 handoff procedure and also has to request a new IP address in the new subnet, that is, it has to perform a L3 handoff. Fig. 1 shows the steps involved in a L2 handoff process in an open network. As we have shown in [1] and Mishra et al. have shown in [2], the time needed by an MN to perform a L2 handoff is usually on the order of a few hundred milliseconds, thus causing a noticeable interruption in any ongoing realtime multimedia session. In either open 802.11 networks or 802.11 networks with WEP enabled, the discovery phase constitutes more than 90% of the total handoff time [1], [2]. In 802.11 networks with either WPA or 802.11i enabled, the handoff delay is dominated by the authentication process that is performed after associating to the new AP. In particular, no data can be exchanged amongst MNs before the authentication process completes successfully. In the most general case, both authentication delay and scanning delay are present. These two delays are additive, so, in order to achieve seamless real-time multimedia sessions, both delays have to be addressed and, if possible, removed. When a L3 handoff occurs, an MN has to perform a normal L2 handoff and update its IP address. We can break the L3 handoff into two logical steps: subnet change detection and new IP address acquisition via DHCP [3]. Each of these steps introduces a significant delay.
In this paper we focus on the use of station cooperation to achieve seamless L2 and L3 handoffs. We refer to this specific use of cooperation as Cooperative Roaming (CR). The basic idea behind CR is that MNs subscribe to the same multicast group creating a new plane for exchanging information about the network and help each other in different tasks. For example, an MN can discover surrounding APs and subnets by just asking to other MNs for this information. Similarly, an MN can ask another MN to acquire a new IP address on its behalf so that the first MN can get an IP address for the new subnet while still in the old subnet.
For brevity and clarity's sake, in this paper we do not consider handoffs between different administrative domains and AAA-related issues although CR could be easily extended to support them. Incentives for cooperation are also not considered since they are a standard problem for any system using some form of cooperation (e.g., file sharing) and represent a separate research topic [4], [5], [6], [7], [8].
The rest of the paper is organized as follows. In Section II we show the state of the art for handoffs in wireless networks, in Section III we briefly describe how IPv4 and IPv6 multicast addressing is used in the present context, Section IV describes how, with cooperation, MNs can achieve seamless L2 and L3 handoffs. Section V introduces cooperation in the L2 authentication process to achieve seamless handoffs regardless of the particular authentication mechanism used. Section VI considers security and Section VII shows how streaming media can be supported in CR. In Section VIII we analyze CR in terms of bandwidth and energy usage, Section IX presents our experiments and results and Section X shows how we can achieve seamless application layer mobility with CR. In Section XI we apply CR to load balancing and Section XII presents an alternative to multicast. Finally, Section XIII concludes the paper.
IV. COOPERATIVE ROAMING
In this section we show how MNs can cooperate with each other in order to achieve seamless L2 and L3 handoffs.
A. Overview
In [1] we have introduced a fast MAC layer handoff mechanism for achieving seamless L2 handoffs in environments such as hospitals, schools, campuses, enterprises, and other places where MNs always encounter the same APs. Each MN saves information regarding the surrounding APs in a cache. When an MN needs to perform a handoff and it has valid entries in its cache, it will directly use the information in the cache without scanning. If it does not have any valid information in its cache, the MN will use an optimized scanning procedure called selective scanning to discover new APs and build the cache. In the cache, APs are ordered according to their signal strength that was registered when the scanning was performed, that is, right before changing AP. APs with stronger signal strength appear first. As mentioned in Section I, in open networks the scanning process is responsible for more than 90% of the total handoff time. The cache reduces the L2 handoff time to only a few milliseconds (see Table I) and cache misses due to errors in movement prediction introduce only a few milliseconds of additional delay. Earlier, we had extended [27] the mechanism introduced in [1] to support L3 handoffs. MNs also cache L3 information such as their own IP address, default router's IP address and subnet identifier. A subnet identifier uniquely identifies a subnet. By caching the subnet identifier, a subnet change is detected much faster and L3 handoffs are triggered every time the new AP and old AP have different subnet identifiers. Faster L3 handoffs can be achieved since IP address and default router for the next AP and subnet are already known and can be immediately used. The approach in [27] achieves seamless handoffs in open networks only; it utilizes the default router's IP address as subnet identifier and it uses a suboptimal algorithm to acquire L3 information.
Here, we consider the same caching mechanism used in [27]. In order to support multi-homed routers, however, we use the subnet address as subnet identifier. By knowing the subnet mask and the default router's IP address we can calculate the network address of a certain subnet. Fig. 2 shows the structure of the cache. Additional information such as last IP address used by the MN, lease expiration time and default router's IP address can be extracted from the DHCP client lease file, available in each MN.
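For instance, the subnet identifier can be computed from the cached default router address and subnet mask in a few lines of Python using the standard ipaddress module (a sketch; the addresses are illustrative):

```python
import ipaddress

def subnet_id(router_ip, netmask):
    """Network address used as the subnet identifier in the cache."""
    network = ipaddress.ip_network(f"{router_ip}/{netmask}", strict=False)
    return str(network.network_address)

# A router at 10.1.2.1 with mask 255.255.255.0 yields subnet ID 10.1.2.0.
assert subnet_id("10.1.2.1", "255.255.255.0") == "10.1.2.0"
```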
In CR, an MN needs to acquire information about the network if it does not have any valid information in the cache or if it does not have L3 information available for a particular subnet. In such a case, the MN asks other MNs for the information it needs so that the MN does not have to find out about neighboring APs by scanning. In order to share information, in CR, all MNs subscribe to the same multicast group. We call an MN that needs to acquire information about its neighboring APs and subnets a requesting MN (R-MN). By using CR, an R-MN can ask other MNs if they have such information by sending an INFOREQ multicast frame. The MNs that receive such a frame check if they have the information the R-MN needs and if so, they send an INFORESP multicast frame back to the R-MN containing the information the R-MN needs.
B. L2 Cooperation Protocol
In this section, we focus on the information exchange needed by a L2 handoff.
The information exchanged in the INFOREQ and IN-FORESP frames is a list of {BSSID, channel, subnet ID} entries, one for each AP in the MN's cache (see Fig. 2).
When an R-MN needs information about its neighboring APs and subnets, it sends an INFOREQ multicast frame. Such a frame contains the current content of the R-MN's cache, that is, all APs and subnets known to the R-MN. When an MN receives an INFOREQ frame, it checks if its own cache and the R-MN's cache have at least one AP in common. If the two caches have at least one AP in common and if the MN's cache has some APs that are not present in the R-MN's cache, the MN sends an INFORESP multicast frame containing the cache entries for the missing APs. MNs that have APs in common with the R-MN, have been in the same location of the R-MN and so have a higher probability of having the information the R-MN is looking for.
The MN sends the INFORESP frame after waiting for a random amount of time to be sure that no other MNs have already sent such information. In particular, the MN checks the information contained in INFORESP frames sent to the same R-MN by other MNs during the random waiting time. This prevents many MNs from sending the same information to the R-MN at the same time.
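The responder-side behavior described in the last two paragraphs can be sketched as follows. This is a simplified illustration under our own assumptions (dictionary-based caches, a 100 ms maximum jitter); the timing values are not taken from the paper.

```python
import random
import time
from typing import Dict, List, Optional, Tuple

Cache = Dict[str, Tuple[int, str]]   # BSSID -> (channel, subnet ID)

def handle_inforeq(my_cache: Cache, rmn_cache: Cache,
                   heard_responses: List[Cache], max_wait: float = 0.1) -> Optional[dict]:
    """Reply only if we share at least one AP with the R-MN and know APs it does not,
    and only if nobody else answers during the random waiting time (suppression)."""
    common = set(my_cache) & set(rmn_cache)
    missing = {b: my_cache[b] for b in set(my_cache) - set(rmn_cache)}
    if not common or not missing:
        return None                      # nothing useful to contribute

    deadline = time.time() + random.uniform(0.0, max_wait)
    while time.time() < deadline:
        answered = {b for resp in heard_responses for b in resp}   # overheard INFORESPs
        missing = {b: v for b, v in missing.items() if b not in answered}
        if not missing:
            return None                  # someone else already supplied everything
        time.sleep(0.005)
    return {"type": "INFORESP", "cache": missing}
```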
When an MN other than the R-MN receives an INFORESP multicast frame, it performs two tasks. First, it checks whether someone is lying by providing wrong information and, if so, it tries to fix it (see Section VI-A); second, it records the cache information provided by such a frame in its own cache even though it did not request that information. By collecting unsolicited information, each MN can build a bigger cache in less time and in a more efficient manner, requiring fewer frame exchanges. This is very similar to what happens in software such as BitTorrent, where the client downloads different parts of a file from different peers. Here, we collect different cache chunks from different MNs.
In order to improve efficiency and further minimize frame exchange, MNs can also decide to collect information contained in the INFOREQ frames.
C. L3 Cooperation Protocol
In a L3 handoff an MN has to detect a change in subnet and also has to acquire a new IP address. When a L2 handoff occurs, the MN compares the cached subnet identifiers for the old and new AP. If the two identifiers are different, then the subnet has changed. When a change in subnet is detected, the MN needs to acquire a new IP address for the new subnet. The new IP address is usually acquired by using the DHCP infrastructure. Unfortunately, the typical DHCP procedure can take up to one second [27].
CR can help MNs acquire a new IP address for the new subnet while still in the old subnet. When an R-MN needs to perform a L3 handoff, it needs to find out which other MNs in the new subnet can help. We call such MNs Assisting MNs (A-MNs). Once the R-MN knows the A-MNs for the new subnet, it asks one of them to acquire a new IP address on its behalf. At this point, the selected A-MN acquires the new IP address via DHCP and sends it to the R-MN which is then able to update its multimedia session before the actual L2 handoff and can start using the new IP address right after the L2 handoff, hence not incurring any additional delay (see Section X).
We now show how A-MNs can be discovered and explain in detail how they can request an IP address on behalf of other MNs in a different subnet.
1) A-MNs Discovery: By using IP multicast, an MN can directly talk to different MNs in different subnets. In particular, the R-MN sends an AMN DISCOVER multicast packet containing the new subnet ID. Other MNs receiving such a packet check the subnet ID to see if they are in the subnet specified in the AMN DISCOVER. If so, they reply with an AMN RESP unicast packet. This packet contains the A-MN's default router IP address, the A-MN's MAC and IP addresses. This information is then used by the R-MN to build a list of available A-MNs for that particular subnet.
Once the MN knows which A-MNs are available in the new subnet, it can cooperate with them in order to acquire the L3 information it needs (e.g., new IP address, router information), as described below.
2) Address Acquisition: When an R-MN needs to acquire a new IP address for a particular subnet, it sends a unicast IP REQ packet to one of the available A-MNs for that subnet. Such a packet contains the R-MN's MAC address. When an A-MN receives an IP REQ packet, it extracts the R-MN's MAC address from the packet and starts the DHCP process by inserting the R-MN's MAC address in the CHaddr field of the DHCP packets. The A-MN also has to set the broadcast bit in the DHCP packets in order to receive DHCP packets with a MAC address other than its own in the CHaddr field. All of this allows the A-MN to acquire a new IP address on behalf of the R-MN. This procedure is completely transparent to the DHCP server. Once the DHCP process has been completed, the A-MN sends an IP RESP multicast packet containing the default router's IP address for the new subnet, the R-MN's MAC address and the new IP address for the R-MN. The R-MN checks the MAC address in the IP RESP packet to be sure that the packet is not for a different R-MN. Once it has verified that the IP RESP is for itself, the R-MN saves the new IP address together with the new default router's IP address.
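The two key ingredients of this step, placing the R-MN's MAC address in CHaddr and setting the broadcast flag, can be illustrated with Scapy. This is our sketch under the assumption that Scapy and a suitable interface are available; the authors' implementation instead modifies the ISC DHCP client, and the function and interface names here are ours.

```python
from scapy.all import BOOTP, DHCP, Ether, IP, UDP, RandInt, srp1

def acquire_address_for(rmn_mac: str, iface: str = "wlan0"):
    """A-MN side sketch: send a DHCPDISCOVER carrying the R-MN's MAC in CHaddr.

    The broadcast flag (0x8000) is set so that the A-MN can receive the server's reply
    even though CHaddr differs from its own MAC address. Only the DISCOVER/OFFER half
    of the DHCP handshake is shown; a full client would continue with REQUEST/ACK."""
    chaddr = bytes(int(b, 16) for b in rmn_mac.split(":")) + b"\x00" * 10  # pad to 16 bytes
    discover = (
        Ether(dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=chaddr, xid=RandInt(), flags=0x8000)
        / DHCP(options=[("message-type", "discover"), "end"])
    )
    offer = srp1(discover, iface=iface, timeout=3, verbose=False)
    return offer[BOOTP].yiaddr if offer and BOOTP in offer else None
```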
If the R-MN has more than one possible subnet to move to, it follows the same procedure for each subnet. In this way the R-MN builds a list of {router, new IP address} pairs, one pair for each of the possible next subnets. After moving to the new subnet, the R-MN renews the lease for the new IP address. The R-MN can start this process at any time before the L2 handoff, keeping in mind that the whole process might take one second or more to complete and that the lease times of pre-acquired IP addresses are limited (the DHCP client lease file can provide information on current lease times).

By reserving IP addresses before moving to the new subnet, we could waste IP addresses and exhaust the available IP pool. Usually, however, the lease time in a mobile environment is short enough to guarantee a sufficient re-use of IP addresses.
Acquiring an IP address for a subnet other than the one the MN is currently attached to could also be achieved by introducing a new DHCP option. Using this option, the MN could ask the DHCP server for an IP address for a specific subnet. This would, however, require changes to the DHCP protocol.
V. COOPERATIVE AUTHENTICATION
In this section we propose a cooperative approach for authentication in wireless networks. The proposed approach is independent of the particular authentication mechanism used. It can be used for VPN, IPsec, 802.1x or any other kind of authentication. We focus on the 802.1x framework used in Wi-Fi Protected Access (WPA) and IEEE 802.11i [29].
A. IEEE 802.1x Overview
The IEEE 802.1x standard defines a way to perform access control and authentication in IEEE 802 LANs, and in particular in IEEE 802.11 wireless LANs, using three main entities: supplicant, authenticator and authentication server (the authentication server is not required in all authentication mechanisms). The supplicant is the client that has to perform the authentication in order to gain access to the network; the authenticator, among other things, relays packets between supplicant and authentication server; the authentication server, typically a RADIUS server [30], performs the authentication process with the supplicant by exchanging and validating the supplicant's credentials. The critical point, in terms of handoff time in the 802.1x architecture, is that during the authentication process the authenticator allows only EAP Over LAN (EAPOL) traffic to be exchanged with the supplicant. No other kind of traffic is allowed.
B. Cooperation in the Authentication Process
A well-known property of the wireless medium in IEEE 802.11 networks is that the medium is shared and therefore every MN can hear packets that other stations (STAs) send and receive. This is true when the MN and the STAs are connected to the same AP, that is, are on the same channel. In [14] Liu et al. make use of this particular characteristic and show how MNs can cooperate with each other by relaying each other's packets so as to achieve the optimum bit-rate. In this section we show how a similar approach can be used for authentication purposes.
For simplicity, in the following discussion we suppose that one authenticator manages one whole subnet, so that authentication is required after each L3 handoff. In this context, we also refer to a subnet as an Authentication Domain (AD). In general, an MN can share information about ADs in the same way it shares information about subnets. In doing so, an MN knows whether or not the next AP belongs to the same AD as the current AP. In a L2 or L3 handoff we have an MN which performs handoff and authentication, a Correspondent Node (CN) which has an established multimedia session with the MN and a Relay Node (RN) which relays packets to and from the MN. Available RNs for a particular AD can be discovered following a similar procedure to the one described earlier for the discovery of A-MNs (see Section IV-C.1). The difference here is that RN and MN have to be connected to the same AP after the handoff. In this scenario, we assume that RNs are a subset of the available A-MNs. The basic idea is that while the MN is authenticating in the new AD, it can still communicate with the CN via the RN, which relays packets to and from the MN (see Fig. 3).

Let us look at this mechanism in more detail. Before the MN changes AD/AP, it selects an RN from the list of available RNs for the new AD/AP and sends a RELAY REQ multicast frame to the multicast group. The RELAY REQ frame contains the MN's MAC and IP addresses, the CN's IP address and the selected RN's MAC and IP addresses. The RELAY REQ will be received by all the STAs subscribed to the multicast group and, in particular, it will be received by both the CN and the RN. The RN will relay packets for the MN identified by the MAC address received in the RELAY REQ frame.

After performing the handoff, the MN needs to authenticate before it can resume any communication via the AP. However, because of the shared nature of the medium, the MN will start sending packets to the RN as if it were already authenticated. The authenticator will drop the packets, but the RN can hear the packets on the medium and relay them to the CN using its own encryption keys, that is, using its secure connection with the AP. The CN is aware of the relaying because of the RELAY REQ, and so it will start sending packets for the MN to the RN as well. While the RN is relaying packets to and from the MN, the MN will perform its authentication via 802.1x or any other mechanism. Once the authentication process is over and the MN has access to the infrastructure, it can stop the relaying and resume normal communication via the AP. When this happens and the CN starts receiving packets from the MN via the AP, it will stop sending packets to the RN and will resume normal communication with the MN. The RN will detect that it no longer needs to relay packets for the MN and will return to normal operation.
In order for this relaying mechanism to work with WPA and 802.11i, MN and RN have to exchange unencrypted L2 data packets for the duration of the relay process. These packets are then encrypted by the RN by using its own encryption keys and are sent to the AP. By responding to an RN discovery, RNs implicitly agree to relay such frames. Such an exchange of unencrypted L2 frames does not represent a security concern since packets can still be encrypted at higher layers and since the relaying happens for a very limited amount of time (see Section VI-B).
One last thing worth mentioning is that by using a relay, we remove the bridging delay in the L2 handoff [1], [2]. Usually, after an MN changes AP, the switch continues sending packets for the MN to the old AP until it updates the information regarding the new AP on its ports. The bridging delay is the amount of time needed by the switch to update this information on its ports. When we use a relay node in the new AP, this relay node is already registered to the correct port on the switch, therefore no update is required on the switch side and the MN can immediately receive packets via the RN.
C. Relay Process
In the previous section we have shown how an MN can perform authentication while having data packets relayed by the RN. In this section we explain in more detail how relaying is performed. Fig. 4 shows the format of a general IEEE 802.11 MAC layer frame. Among the many fields we can identify a Frame Control field and four Address fields. For the relay process we are interested in the four Address fields and in the To DS and From DS one-bit fields that are part of the Frame Control field. The To DS bit is set to one in data frames that are sent to the Distribution System (DS), that is, the system that interconnects BSSs and LANs to create an ESS [31]. The From DS bit is set to one in data frames exiting the DS. The meaning of the four Address fields depends on the values of these two bits; the addresses listed in Table II are: Destination Address (DA), Source Address (SA), BSSID, Receiver Address (RA) and Transmitter Address (TA). In infrastructure mode, when an MN sends a packet, this packet is always sent first to the AP, even if both source and destination are associated with the same AP. For such packets the MN sets the To DS bit. Other MNs on the same channel can hear the packet but discard it because, as the To DS field and Address fields suggest, such a packet is meant for the AP. When the AP has to send a packet to an MN, it sets the From DS bit. All MNs that can hear this packet discard it, except for the MN the packet is for.
When both fields, To DS and From DS, have a value of one, the packet is sent on the wireless medium from one AP to another AP. In ad-hoc mode, both fields have a value of zero and the frames are directly exchanged between MNs with the same Independent Basic Service Set (IBSS).
In [32] Chandra et al. present an optimal way to continuously switch a wireless card between two or more infrastructure networks or between infrastructure and ad-hoc networks, so that the user has the perception of being connected to multiple networks at the same time while using a single wireless card. This approach works well if no real-time traffic is present. When we consider real-time traffic and its delay constraints, continuous switching between different networks and, in particular, between infrastructure and ad-hoc mode is no longer a feasible solution. Although optimal algorithms have been proposed for this [32], the continuous switching of the channel and/or operating mode takes a non-negligible amount of time, which becomes particularly significant if any form of L2 authentication is present in the network. In such cases, the time needed by the wireless card to continuously switch between networks can introduce significant delay and packet loss.
The approach we propose is based on the idea that ad-hoc mode and infrastructure mode do not have to be mutually exclusive, but rather can complement each other. In particular, MNs can send ad-hoc packets while in infrastructure mode so that other MNs on the shared medium, that is, on the same channel, can receive such packets without involving the AP. Such packets use the 802.11 ad-hoc MAC addresses as specified in [31]. That is, both fields To DS and From DS have a value of zero and the Address fields are set accordingly as specified in Table II. In doing so, MNs can directly send and receive packets to and from other MNs without involving the AP and without having to switch to ad-hoc mode.
This mechanism allows an RN to relay packets to and from an R-MN without significantly affecting any ongoing multimedia session that the RN might have via the AP. Such an approach can be useful in all those scenarios where an MN in infrastructure mode needs to communicate with other MNs in infrastructure or ad-hoc mode [33] and a continuous change between infrastructure mode and ad-hoc mode is either not possible or not convenient.
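For illustration, a station-to-station data frame of the kind described above (To DS = From DS = 0, sent while remaining associated in infrastructure mode) could be crafted with Scapy as sketched below. This is our assumption-laden example, not the authors' driver modification: it requires an interface in monitor mode with injection support, and the interface name is hypothetical.

```python
from scapy.all import Dot11, LLC, SNAP, RadioTap, Raw, sendp

def send_sta_to_sta(payload: bytes, dst_mac: str, src_mac: str, bssid: str,
                    iface: str = "wlan0mon") -> None:
    """Send an 802.11 data frame with To DS = From DS = 0 (direct STA-to-STA delivery).

    For this bit combination the address mapping is addr1 = DA, addr2 = SA, addr3 = BSSID.
    The AP drops the frame because the To DS bit is not set, while nearby stations on the
    same channel can receive it directly."""
    frame = (
        RadioTap()
        / Dot11(type=2, subtype=0, FCfield=0, addr1=dst_mac, addr2=src_mac, addr3=bssid)
        / LLC() / SNAP()
        / Raw(load=payload)
    )
    sendp(frame, iface=iface, verbose=False)
```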
VI. SECURITY
Security is a major concern in wireless environments. In this section we address some of the problems encountered in a cooperative environment, focusing on CR.
A. Roaming Security Issues
In this context, a malicious user might try to propagate false information among the cooperating MNs. In particular, we have to worry about three main vulnerabilities:
1) A malicious user might want to re-direct STAs to fake APs where their traffic can be sniffed and private information can be compromised.
2) A malicious user might try to perform DoS attacks by redirecting STAs to far or non-existing APs. This would cause the STAs to fail the association to the next AP during the handoff process. The STA would then have to rely on the legacy scanning process to re-establish network connectivity.
3) At L3, a malicious user might behave as an A-MN and try to disrupt a STA's service by providing invalid IP addresses.
In general, we have to remember that the cooperative mechanism described here works on top of any other security mechanism that has been deployed in the wireless network (e.g., 802.11i, WPA). In order for a malicious user to send and receive packets from and to the multicast group, it has to have, first of all, access to the network and thus be authenticated. In such a scenario, a malicious user is a STA with legal access to the network. This means that MAC spoofing attacks are not possible as a change in MAC address would require a new authentication handshake with the network. This also means that once the malicious user has been identified, it can be isolated.
How can we attempt to isolate a malicious node? Since the INFORESP frame is multicast, each MN that has the same information as the one contained in such a frame can check that the information in the frame is correct and that no one is lying. If it finds out that the INFORESP frame contains wrong information, it immediately sends an INFOALERT multicast frame. Such a frame contains the MAC address of the suspicious STA. This frame is also sent by an R-MN that has received a wrong IP address, and in that case it contains the MAC address of the A-MN that provided that IP address. If more than one alert for the same suspicious node is triggered by different nodes, the suspicious node is considered malicious and the information it provides is ignored. Let us look at this last point in more detail.
One single INFOALERT does not trigger anything. In order for an MN to be categorized as bad, there has to be a certain number of INFOALERT multicast frames sent by different nodes, all regarding the same suspicious MN. This certain number can be configured according to how paranoid someone is about security but, regardless, it has to be more than one. Let us assume this number to be five. If a node receives five INFOALERT multicast frames from five different nodes regarding the same MN, then it marks such an MN as bad. This mechanism could be compromised if either a malicious user can spoof five different MAC addresses (and this is not likely for the reasons we have explained earlier) or if there are five different malicious users that are correctly authenticated in the wireless network and that can coordinate their attacks. If this last situation occurs, then there are bigger problems in the network to worry about than handoff policies. Choosing the number of INFOALERT frames required to mark a node as malicious to be very large would have advantages and disadvantages. It would give more protection against the exploitation of this mechanism for DoS attacks as the number of malicious users trying to exploit INFOALERT frames would have to be high. On the other hand, it would also make the mechanism less sensitive to detect a malicious node as the number of INFOALERT frames required to mark the node as bad might never be reached or it might take too long to reach. So, there is clearly a trade-off.
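A minimal sketch of this alert-counting logic is given below, assuming the example threshold of five distinct reporters used above; class and method names are ours.

```python
from collections import defaultdict

class AlertTracker:
    """Mark a node as malicious only when a configurable number of *distinct*
    reporters have sent an INFOALERT about the same suspect MAC address."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.reports = defaultdict(set)   # suspect MAC -> set of reporter MACs
        self.blacklist = set()

    def on_infoalert(self, reporter_mac: str, suspect_mac: str) -> bool:
        self.reports[suspect_mac].add(reporter_mac)
        if len(self.reports[suspect_mac]) >= self.threshold:
            self.blacklist.add(suspect_mac)     # ignore this node's information from now on
        return suspect_mac in self.blacklist

    def trusted(self, mac: str) -> bool:
        return mac not in self.blacklist
```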
Regardless, in any of the three situations described at the beginning of this section, the MN targeted by the malicious user would be able to easily recover from an attack by using legacy mechanisms such as active scanning and DHCP address acquisition, typically used in non-cooperative environments.
B. Cooperative Authentication and Security
In order to improve security in the relay process, we introduce some countermeasures that nodes can use to prevent exploitation of the relay mechanism. The main concern in having a STA relay packets for an unauthenticated MN is that such an MN might try to repeatedly use the relay mechanism and never authenticate to the network. In order to prevent this, we introduce the following countermeasures:
1) Each RELAY REQ frame allows an RN to relay packets for a limited amount of time. After this time has passed, the relaying stops. The relaying of packets is required only for the time needed by the MN to perform the normal authentication process.
2) An RN relays packets only for those nodes which have sent a RELAY REQ packet to it while still connected to their previous AP.
3) RELAY REQ packets are multicast. All the nodes in the multicast group can help in detecting bad behaviors such as one node repeatedly sending RELAY REQ frames.
All of the above countermeasures work if we can be sure of the identity of a node and, in general, this is not always the case as malicious users can perform MAC spoofing attacks, for example. However, as we have explained in Section VI-A, MAC spoofing attacks are not possible in the present framework.
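The first two countermeasures can be sketched as a simple admission check on the RN side. The relay-window length and all names below are our assumptions, not values from the paper.

```python
import time

RELAY_WINDOW = 5.0   # seconds allowed for the MN to complete authentication (assumed value)

class RelayGate:
    """RN-side sketch enforcing countermeasures 1) and 2) listed above."""

    def __init__(self):
        self.pending = {}                     # MN MAC -> expiry time of its relay window

    def on_relay_req(self, mn_mac: str, mn_still_on_old_ap: bool) -> None:
        # Countermeasure 2: accept a RELAY REQ only from an MN still attached to its old AP.
        if mn_still_on_old_ap:
            self.pending[mn_mac] = time.time() + RELAY_WINDOW   # countermeasure 1

    def may_relay(self, mn_mac: str) -> bool:
        expiry = self.pending.get(mn_mac)
        if expiry is None or time.time() > expiry:
            self.pending.pop(mn_mac, None)    # window expired: stop relaying for this MN
            return False
        return True
```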
This said, we have to remember that before an RN can relay packets for an MN, it has to receive the proper RELAY REQ packet from the MN. Such a packet has to be sent by the MN while still connected to the old AP. This means that the MN has to be authenticated with the previous AP in order to send such a packet. Furthermore, once the relaying timeout has expired, the RN will stop relaying packets for that MN. At this point, even if the MN can change its MAC address, it would not be able to send a new RELAY REQ as it has to first authenticate again with the network (e.g., using 802.11i) and therefore no relaying would take place. In the special case in which the old AP belongs to an open network (under normal conditions this is very unlikely, but it might happen for handoffs between different administrative domains, for example), a malicious node could perform MAC spoofing and exploit the relay mechanism in order to gain access to the secure network. In this case, securing the multicast group by performing authentication and encryption at the multicast group level could prevent this kind of attack, although it may require infrastructure support.
In conclusion, we can consider the three countermeasures introduced at the beginning of this section to be more than adequate in avoiding exploitation of the relaying mechanism.
VII. STREAMING MEDIA SUPPORT
SIP can be used, among other things, to update new and ongoing media sessions. In particular, the IP address of one or more of the participants in the media session can be updated. In general, after an MN performs a L3 handoff, a media session update is required to inform the various parties about the MN's new IP address [34].
If the CN does not support cooperation, the relay mechanism as described in Section V-B does not work and the CN keeps sending packets to the MN's old IP address, being unaware of the relay process. This is the case, for example, of an MN establishing a streaming video session with a streaming media server. In this particular case, assuming that the media server supports SIP, a SIP session update is performed to inform the media server that the MN's IP address has changed. The MN sends a re-INVITE to the media server updating its IP address to the RN's IP address. In this way, the media server starts sending packets to the RN and relay can take place as described earlier.
Once the relaying is over, if the MN's authentication was successful, the MN sends a second re-INVITE including its new IP address. Otherwise, once the timeout for relaying expires, the relaying process stops and the RN terminates the media session with the media server.
SIP and media session updates will be discussed further in Section X.
VIII. BANDWIDTH AND ENERGY USAGE
By sharing information, the MNs in the network do not have to perform individual tasks such as scanning, which would normally consume a considerable amount of bandwidth and energy. This means that sharing data among MNs is usually more energy and bandwidth efficient than having each MN perform the corresponding task individually. We discuss the impact of CR on energy and bandwidth below.
In CR, bandwidth usage and energy expended are mainly determined by the number of multicast packets that each client has to send for acquiring the information it needs. The number of multicast packets is directly proportional to the number of clients supporting the protocol that are present in the network. In general, more clients introduce more requests and more responses. However, having more clients that support the protocol ensures that each client can collect more information with each request, which means that overall each client will need to send fewer packets. Furthermore, by having the INFORESP frames as multicast frames, many MNs will benefit from each response and not just the MN that sent the request. This will minimize the number of packets exchanged, in particular the number of INFOREQ sent.
To summarize, with an increasing number of clients, multicast suppression takes place, so the number of packets sent remains constant.
In general, sending a few long packets is more efficient than sending many short ones. As explained in Section IV-B, for each AP the information included in an INFOREQ or INFORESP packet is a cache entry (see Fig. 2), that is, a triple {BSSID, Channel, Subnet ID} for a total size of 6+4+4 = 14 bytes. Considering that the MTU size is 1500 bytes, that each cache entry takes about 14 bytes, and that the IP and UDP headers together take 28 bytes, each INFOREQ and INFORESP packet can carry information about no more than 105 APs in a maximum payload of 1472 bytes.
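The packet budget above can be reproduced with a few lines of arithmetic:

```python
MTU = 1500              # bytes
IP_UDP_HEADERS = 28     # 20-byte IP header + 8-byte UDP header
ENTRY_SIZE = 6 + 4 + 4  # BSSID + channel + subnet ID = 14 bytes per cached AP

payload = MTU - IP_UDP_HEADERS          # 1472 bytes available per packet
max_aps = payload // ENTRY_SIZE         # 105 APs per INFOREQ/INFORESP
print(payload, max_aps)                 # -> 1472 105
```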
In [35] Henderson et al. analyze the behavior of wireless users in a campus-wide wireless network over a period of seventeen weeks. They found that:
• Users spend almost all of their time at their home location. The home location is defined as the AP where they spend most of the time and all the APs within 50 meters of this one.
• The median number of APs visited by a user is 12, but the median differs for each device type, with 17 for laptops, 9 for PDAs and 61 for VoIP devices such as VoIP phones.
This shows that most devices will spend most of their time at their home location, which means that they will mostly deal with a small number of APs. However, even if we consider the median number of APs that clients use throughout the trace period of seventeen weeks, we can see that when using laptops and PDAs each MN would have to know about the nearest 9-17 APs. For VoIP devices that are always on, the median number of APs throughout the trace period is 61. In our implementation each INFOREQ and INFORESP packet carries information about 105 APs at most. Regardless of the device type, a single packet is therefore typically sufficient.
The relay mechanism introduced in Section V for cooperative authentication introduces some bandwidth overhead. This is because each packet that has to be sent by the MN to the CN, and vice-versa, occupies the medium twice: once when being transmitted between MN and RN and once when being transmitted between RN and AP. This, however, happens only for the few seconds needed by the MN to authenticate. Furthermore, both the MN-RN and RN-AP links are maximum bit-rate links, so the time on air for each data packet is small.
IX. EXPERIMENTS
In the present section we describe implementation details and measurement results for CR.
A. Environment
All the experiments were conducted at Columbia University on the 7th floor of the Schapiro building. We used four IBM Thinkpad laptops: three IBM T42 laptops using Intel Centrino Mobile technology with a 1.7 GHz Pentium processor and 1GB RAM and one IBM laptop with an 800 MHz Pentium III processor and 384 MB RAM. Linux kernel version 2.4.20 was installed on all the laptops. All the laptops were equipped with a Linksys PCMCIA Prism2 wireless card. Two of them were used as wireless sniffers, one of them was used as roaming client and one was used as "helper" to the roaming client, that is, it replied to INFOREQ frames and behaved as an A-MN. For cooperative authentication the A-MN was also used as RN. Two Dell Dimension 2400 desktops were used, one as CN and the other as RADIUS server [30]. The APs used for the experiments were a Cisco AP1231G which is an enterprise AP and a Netgear WG602 which is a SOHO/home AP.
B. Implementation Details
In order to implement the cooperation protocol we modified the wireless card driver and the DHCP client. Furthermore, a cooperation manager was also created in order to preserve state information and to coordinate the wireless driver and the DHCP client. For cooperative authentication, the WPA supplicant was also slightly modified to allow relay of unencrypted frames. The HostAP [36] wireless driver, an open-source WPA supplicant [37], and the ISC DHCP client [38] were chosen for the implementation. The different modules involved and their interaction are depicted in Fig. 5. A UDP packet generator was also used to generate small packets with a packetization interval of 20 ms in order to simulate voice traffic. For the authentication measurements, we used FreeRADIUS [39] as the RADIUS server.
C. Experimental Setup
For the experiments we used the Columbia University 802.11b wireless network which is organized as one single subnet. In order to test L3 handoff, we introduced another AP connected to a different subnet (Fig. 6). The two APs operated on two different non-overlapping channels.
The experiments were conducted by moving the roaming client between two APs belonging to different subnets, thus having the client perform L2 and L3 handoffs in either direction.
Packet exchanges and handoff events were recorded using the two wireless sniffers (Kismet [40]), one per channel. The trace files generated by the wireless sniffers were later analyzed using Ethereal [41].
In the experimental set-up we do not consider a large presence of other MNs under the same AP, since air-link congestion is not relevant to the handoff measurements. Delays due to collisions, backoff, propagation delay and AP queuing delay are irrelevant since they are usually on the order of microseconds under normal conditions. However, even if we consider these delays to be very high because of a high level of congestion, the MN should worry about not being able to make or continue a call at all, as the AP has reached its maximum capacity; handoff delay would, at this point, become a second-order problem. Furthermore, in this last scenario, the MN should avoid handing off to a very congested AP in the first place as part of a good handoff policy (see Section XI). Updating information at the Home Agent or SIP registrar is trivial and does not have the same stringent delay requirements that mid-call mobility has; therefore it is not considered here.
D. Results
In this section we show the results obtained in our experiments. In Section IX-D.1, we consider an open network with no authentication in order to show the gain of CR in that setting. In Section IX-D.2, authentication is added and, in particular, we consider a wireless network with IEEE 802.11i enabled.
We define L2 handoff time as scanning time + open authentication and association time + IEEE 802.11i authentication time. The last contribution to the L2 handoff time is not present in open networks. Similarly, we define the L3 handoff time as subnet discovery time + IP address acquisition time.
In the following experiments we show the drastic improvement achieved by CR in terms of handoff time. At L2 such an improvement is possible because, as we have explained in Section IV-A, MNs build a cache of neighbor APs so that scanning for new APs is not required and the delay introduced by the scanning procedure during the L2 handoff is removed. Furthermore, by using relays (see Section V), an MN can send and receive data packets during the authentication process, thus eliminating the 802.11i authentication delay. At L3, MNs cache information about which AP belongs to which subnet, hence immediately detecting a change in subnet by comparing the subnet IDs of the old and new APs. This provides a way to detect a subnet change and at the same time makes the subnet discovery delay insignificant. Furthermore, with CR, the IP address acquisition delay is completely removed since each node can acquire a new IP address for the new subnet while still in the old subnet (see Section IV-C).
It is important to notice that in current networks there is no standard way to detect a change in subnet in a timely manner (within the IETF, the DNA working group is standardizing the detection of network attachments for IPv6 networks only [42], and router advertisements are typically broadcast only every few minutes). Recently, DNA for IPv4 (DNAv4) [43] was standardized by the DHC working group within the IETF in order to detect a subnet change in IPv4 networks. This mechanism, however, works only for previously visited subnets for which the MN still has a valid IP address, and can take up to hundreds of milliseconds to complete. Furthermore, if L2 authentication is used, a change in subnet can be detected only after the authentication process completes successfully. Because of this, in the handoff time measurements for the standard IEEE 802.11 handoff procedure, the delay introduced by subnet change discovery is not considered.
To summarize, in theory by using CR the only contribution to the L2 handoff time is given by open authentication and association and there is no contribution to the L3 handoff time whatsoever, that is, the L3 handoff time is zero. In practice, this is not exactly true. Some other sources of delay have to be taken into consideration as we show in more detail in Section IX-D.3.
1) L2 and L3 Roaming:
We show the handoff time when an MN is performing a L2 and L3 handoff without any form of authentication, that is, the MN is moving in an open network. In such a scenario, before the L2 handoff occurs, the MN tries to build its L2 cache if it has not already done so. Furthermore, the MN also searches for any available A-MN that might help it in acquiring an IP address for the new subnet. The scenario is the same as the one depicted in Fig. 6. Fig. 7 shows the handoff time when CR is used. In particular, we show the L2, L3 and total L2+L3 handoff times over 30 handoffs. As we can see, the total L2+L3 handoff time has a maximum value of 21 ms in experiment 18. Also, we can see how, even though the L3 handoff time is higher on average than the corresponding L2 handoff time, there are situations where these two become comparable. For example, we can see in experiment 24 how the L2 and L3 handoff times are equal and in experiment 13 how the L2 handoff time exceeds the corresponding L3 handoff time. The main causes for this variance will be presented in Section IX-D.3. Fig. 7 and Table III show how, on average, with CR the total L2+L3 handoff time is less than 16 ms, which is less than half of the 50 ms requirement for assuring a seamless handoff when real-time traffic is present. Table III shows the average values of IP address acquisition time, handoff time, and packet loss during the handoff process. The time between IP REQ and IP RESP is the time needed by the A-MN to acquire a new IP address for the R-MN. This time can give a good approximation of the L3 handoff time that we would have without cooperation. As we can see, with cooperation we reduce the L3 handoff time to about 1.5% of what we would have without cooperation. Table III also shows that the packet loss experienced during a L2+L3 handoff is negligible when using CR. Fig. 8 shows the average delay over 30 handoffs of L2, L3 and L2+L3 handoff times for CR and for the legacy 802.11 handoff mechanism; the total L2+L3 handoff time with CR is only a small fraction of the time taken by the legacy mechanism.

2) L2 and L3 Roaming with Authentication: Here we show the handoff time when IEEE 802.11i is used together with EAP-TLS and PEAP/MSCHAPv2. Fig. 9 shows the average over 30 handoffs of the delay introduced in a L2 handoff by the certificate/credentials exchange and the session key exchange. Different key lengths are also considered for the generation of the certificates. As expected, the exchange of certificates takes most of the time. This is the reason why mechanisms such as fast-reconnect [44], [45] improve L2 handoff times considerably, although still on the order of hundreds of milliseconds.
Generally speaking, any authentication mechanism can be used together with CR. Fig. 10 shows the average over 35 handoffs of the total L2, L3 and L2+L3 handoff times. In particular, we show the handoff time for EAP-TLS with 1024- and 2048-bit keys, PEAP/MSCHAPv2 with a 1024-bit key, and CR. The average L2+L3 handoff times are respectively 1580 ms, 1669 ms, 1531 ms and 21 ms. By using CR, we achieve a drastic improvement in the total handoff time. As we can see, CR reduces the handoff time to 1.4% or less of the handoff time introduced by the standard 802.11 mechanism. This significant improvement is possible because at L2 with CR we bypass the whole authentication handshake by relaying packets. At L3 we are able to detect a change in subnet in a timely manner and acquire a new IP address for the new subnet while still in the old subnet. Fig. 11 shows in more detail the two main contributions to the L2 handoff time when a relay is used. We can see that, on average, the time needed for the first data packet to be transmitted after the handoff takes more than half of the total L2 handoff time. Here, with data packet we are referring to a packet sent by our UDP packet generator. By analyzing the wireless traces collected in our experiments, we found that the first data packet after the handoff is not transmitted immediately after the L2 handoff completes because the wireless driver needs to start the handshake for the authentication process. This means that the driver already has a few packets in the transmission queue that are waiting to be transmitted when our data packet enters the transmission queue. This, however, concerns only the first packet to be transmitted after the L2 handoff completes successfully. All subsequent data packets will not encounter any additional delay due to the relay.
3) Measurement Variance: We have encountered a high variance in the L2 handoff time. In particular, most of the delay is between the authentication request and authentication response, before the association request. Within all the measurements taken, such behavior appeared to be particularly prominent when moving from the Columbia AP to the Netgear AP. This behavior, together with the results shown by Mishra et al. in [2], has led us to the conclusion that such variance is caused by the cheap hardware used in the low-end Netgear AP.
At L3, ideally, the handoff time should be zero as we acquire all the required L3 information while still in the old subnet. The L3 handoff time shown in Fig. 7 can be roughly divided into two main components: signaling delay and polling delay. The signaling delay is due to various signaling messages exchanged among the different entities involved in setting up the new L3 information in the kernel (wireless driver and DHCP client); the polling delay is introduced by the polling of variables in between received-signal-strength samples (received-signal-strength values are measured by the wireless card driver), done in order to start the L3 handoff process in a timely manner with respect to the L2 handoff process.
These two delays are both implementation dependent and can be reduced by further optimizing the implementation.
X. APPLICATION LAYER MOBILITY
We suggest a method for achieving seamless handoffs at the application layer using SIP and CR. Implementation and analysis of the proposed approach are reserved for future work.

Generally speaking, there are two main problems with application layer mobility. One is that the SIP handshake (re-INVITE ⇒ 200 OK ⇒ ACK) takes a few hundred milliseconds to complete, exceeding the requirements of seamless handoff for real-time media. The second is that we do not know a priori in which direction the user is going to move.
In order to solve these two problems, we have to define a mechanism that allows the MN to start the application layer handoff before the L2 handoff and to do it so that the MN does not move to the wrong AP or subnet after updating the SIP session. Furthermore, the new mechanism also has to work in the event of the MN deciding not to perform the L2 handoff at all after performing the SIP session update, that is, after updating the SIP session with the new IP address.
The SIP mobility mechanism [34] and CR can be combined. In particular, we consider an extension of the relay mechanism discussed in Section V-B. Let us assume that the MN performing the handoff has already acquired all the necessary L2 and L3 information as described in Sections IV-B, IV-C and V. This means that the MN has a list of possible RNs and IP addresses to use after the L2 handoff, one for each of the various subnets it could move to next. At this point, before performing any L2 handoff, the MN needs to update its multimedia session. The up-link traffic does not cause particular problems as the MN already has a new IP address to use and can start sending packets via the RN right after the L2 handoff. The down-link traffic is more problematic since the CN will continue sending packets to the MN's old IP address as it is not aware of the change in the MN's IP address until the session has been updated.
The basic idea is to update the session so that the same media stream is sent, at the same time, to the MN and to all the RNs in the list previously built by the MN. In this way, regardless of which subnet/AP the MN will move to, the corresponding RN will be able to relay packets to it. If the MN does not change AP at all, nothing is lost as the MN is still receiving packets from the CN. After the MN has performed the L2 handoff and has connected to one of the RNs, it may send a second re-INVITE via the RN so that the CN sends packets to the current RN only, without involving the other RNs any longer. Once the authentication process successfully completes, communication via the AP can resume. At this point, one last session update is required so that the CN can send packets directly to the MN without any RN in between.
In order to send multiple copies of the same media stream to different nodes, that is, to the MN performing the handoff and its RNs, the MN can send to the CN a re-INVITE with an SDP format as described in RFC 3388 [46] and shown in Figure 12. In this particular format, multiple m lines are present with multiple c lines and grouped together by using the same Flow Identification (FID). A station receiving a re-INVITE with an SDP part as shown in Figure 12 sends an audio stream to a client with IP address 131.160.1.112 on port 30000 (if the PCM µ-law codec is used) and to a client with IP address 131.160.1.111 on port 20000. In order for the same media stream to be sent to different clients at the same time, all the clients have to support the same codec [46]. In our case, we have to remember that RNs relay traffic to MNs; they do not play such traffic. Because of this, we can safely say that each RN supports any codec during the relay process, hence a copy of the media stream can always be sent to an RN by using the SDP format described in [46].
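A sketch of how such an SDP body could be assembled is shown below. It mirrors the addresses and ports quoted above (payload type 0 = PCM µ-law); the session/origin address and the field values other than those are our assumptions, not part of RFC 3388 or of the paper.

```python
def build_fid_sdp(session_ip: str, streams) -> str:
    """streams: list of (ip, port) pairs that should all receive the same audio stream,
    grouped under a single FID as in RFC 3388."""
    mids = " ".join(str(i + 1) for i in range(len(streams)))
    lines = [
        "v=0",
        f"o=mn 289083124 289083125 IN IP4 {session_ip}",
        "s=CR session",
        "t=0 0",
        f"a=group:FID {mids}",
    ]
    for i, (ip, port) in enumerate(streams, start=1):
        lines += [
            f"m=audio {port} RTP/AVP 0",   # payload type 0 = PCM mu-law
            f"c=IN IP4 {ip}",
            f"a=mid:{i}",
        ]
    return "\r\n".join(lines) + "\r\n"

# One copy to the MN at 131.160.1.112:30000 and one to an RN at 131.160.1.111:20000.
sdp_body = build_fid_sdp("131.160.1.1", [("131.160.1.112", 30000), ("131.160.1.111", 20000)])
```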
It is worthwhile to notice that in the session update procedure described above, no buffering is necessary. As we have explained in Section IX-D and shown in Table III, the L2+L3 handoff time is on the order of 16 ms for open networks, which is less than the packetization interval for typical VoIP traffic. When authentication is used (see Figure 10), the total L2+L3 handoff time is on the order of 21 ms. In both cases packet loss is negligible, hence making any buffering of packets unnecessary.
XI. LOAD BALANCING
CR can also play a role in AP load balancing. Today, there are many problems with the way MNs select the AP to connect to. The AP is selected according to the link signal strength and SNR levels, while other factors such as effective throughput, number of retries, number of collisions, packet loss, bit-rate or BER are not taken into account. This can cause an MN to connect to an AP with the best SNR but with low throughput, a high number of collisions and high packet loss because that AP is highly congested. If the MN disassociates or the AP deauthenticates it, the MN looks for a new candidate AP. Unfortunately, with a very high probability, the MN will pick the same AP because its link signal strength and SNR are still the "best" available. The information regarding the congestion of the AP is completely ignored and this bad behavior keeps repeating itself. This behavior can create situations where users all end up connecting to the "best" AP, creating the scenario depicted earlier while leaving other APs underutilized [47], [48].
CR can be very helpful in such a context. In particular, we can imagine a situation where an MN wants to gather statistics about the APs that it might move to next, that is, the APs that are present in its cache. In order to do so, the MN can ask other nodes to send statistics about those APs. Each node can collect different kinds of statistics, such as available throughput, bit-rate, packet loss and retry rate. Once these statistics have been gathered, they can be sent to the MN that requested them. The MN, at this point, has a clear picture of which APs are more congested and which ones can support the required QoS, and can therefore make a smarter handoff decision. By using this approach we can achieve an even distribution of traffic flows among neighboring APs.
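As an illustration of this idea, an MN could combine its own signal-strength measurement with congestion statistics reported by cooperating nodes when ranking candidate APs. The weights, metric names and normalization below are purely illustrative assumptions, not a scheme defined by the paper.

```python
def choose_ap(candidates):
    """candidates: list of dicts with per-AP fields
    {"bssid", "rssi" (dBm, measured locally), "throughput_mbps", "loss", "retry_rate"},
    where the last three come from statistics reported by cooperating MNs."""
    def score(ap):
        # Reward signal quality and reported throughput; penalize packet loss and retries
        # observed by other stations under the same AP.
        return (0.4 * (ap["rssi"] + 100) / 70.0              # map roughly -100..-30 dBm to 0..1
                + 0.4 * min(ap["throughput_mbps"] / 54.0, 1.0)
                - 0.1 * ap["loss"]
                - 0.1 * ap["retry_rate"])
    return max(candidates, key=score) if candidates else None
```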
The details of this mechanism are reserved for future study but can be easily derived from the procedures earlier introduced for achieving fast L2 and L3 handoffs.
XII. AN ALTERNATIVE TO MULTICAST
Using IP multicast packets can become inefficient in highly congested environments with a dense distribution of MNs. In such environments, ad-hoc networks can be a good alternative to multicast. Switching back and forth between infrastructure mode and ad-hoc mode has already been used by MNs in order to share information for fault diagnosis [33]. As we pointed out in Section V-C, continuously switching between ad-hoc and infrastructure mode introduces synchronization problems and channel switching delays, making this approach unusable for real-time traffic. However, even if only non-real-time traffic is present, synchronization problems could still arise when switching to ad-hoc mode while having a live TCP connection on the infrastructure network, for example. Spending a longer time in ad-hoc mode might cause the TCP connection to time out; on the other hand, waiting too long in infrastructure mode might cause loss of data in the ad-hoc network.
In CR, MNs can exchange the L2 and L3 information contained in their caches by using the same frame-exchange mechanism used for relaying, as described in Section V-C. Following this approach, MNs can directly exchange information with each other without involving the AP and without having to switch their operating mode to ad-hoc. In particular, an MN can send broadcast and unicast packets such as INFOREQ and INFORESP with the To DS and From DS fields set to zero (see Section V-C). Because of this, only the MNs in the radio coverage of the sending MN will be able to receive such packets. The AP will drop these packets since the To DS field is not set.
Ad-hoc multi-hop routing can also be used when needed. This may be helpful, for example, in the case of R-MNs acquiring a new IP address for a new subnet while still in the old subnet (see Section IV-C), when the current AP and the new AP use two different channels. In such a case, a third node on the same channel as the R-MN could route packets between the R-MN and the A-MN by switching between the two channels of the two APs, thus leaving R-MN and A-MN operations unaffected. In this case we would not have synchronization problems since the node switching between the two channels would have to switch only twice: once after receiving the IP REQ packet from the R-MN in order to send it to the A-MN, and a second time after receiving the IP RESP from the A-MN in order to send it to the R-MN.
An ad-hoc based approach, such as the relay mechanism presented in Section V-C, does not require any support from the infrastructure and represents an effective solution in congested and densely populated environments. On the other hand, ad-hoc communication between MNs would not work very well in networks with a small population of MNs, where each MN might be able to see only a very small number of other MNs at any given time.
MNs with two wireless cards could use one card to connect to the ad-hoc network and share information with other MNs, while having the other card connected to the AP. The two cards could also operate on two different access technologies such as cellular and 802.11.
If it is possible to introduce some changes in the infrastructure, we can minimize the use of multicast packets by using the SIP presence model [49]. In such a model we introduce a new presence service in which each subnet is a presentity. Each subnet has a contact list of, for example, all the A-MNs available in that subnet, so that the presence information is represented by the available A-MNs in the subnet. When an R-MN subscribes to this service, it receives presence information about the new subnet, namely its contacts, which are the available A-MNs in that subnet.
This approach could be more efficient in scenarios with a small number of users supporting CR. On the other hand, it would require changes in the infrastructure by introducing additional network elements. The presence and ad-hoc approaches are reserved for future study.
XIII. CONCLUSIONS AND FUTURE WORK
In this paper we have defined the Cooperative Roaming protocol. Such a protocol allows MNs to perform L2 and L3 handoffs seamlessly, with an average total L2+L3 handoff time of about 16 ms in an open network and of about 21 ms in an IEEE 802.11i network, without requiring any changes to either the protocol or the infrastructure. Each of these values is less than half of the 50 ms requirement for real-time applications such as VoIP to achieve seamless handoffs. Furthermore, we are able to provide such a fast handoff regardless of the particular authentication mechanism used while still preserving security and privacy.
MN cooperation has many advantages and does not introduce any significant disadvantage: in the worst-case scenario, MNs can rely on the standard IEEE 802.11 mechanisms, achieving performance similar to a scenario with no cooperation.
Node cooperation can be useful in many other applications:
• In a multi-administrative-domain environment CR can help in discovering which APs are available for which domain. In this way an MN might decide to go to one AP/domain rather than some other AP/domain according to roaming agreements, billing, etc.
• In Section XI we have shown how CR can be used for load balancing. Following a very similar approach but using other metrics such as collision rate and available bandwidth, CR can also be used for admission control and call admission control.
• CR can help in propagating information about service availability. In particular, an MN might decide to perform a handoff to one particular AP because of the services that are available at that AP. A service might be a particular type of encryption, authentication, minimum guaranteed bit rate and available bandwidth or the availability of other types of networks such as Bluetooth, UWB and 3G cellular networks, for example.
• CR provides advantages also in terms of adaptation to changes in the network topology. In particular, when an MN finds some stale entries in its cache, it can update its cache and communicate such changes to the other MNs. This applies also to virtual changes of the network topology (i.e., changes in the APs' power levels), which might become more common with the deployment of IEEE 802.11h equipment.
• CR can also be used by MNs to negotiate and adjust their transmission power levels so to achieve a minimum level of interference.
• In [26] Ramani et al. describe a passive scanning algorithm according to which an MN knows the exact moment when a particular AP will send its beacon frame. In this way the MN collects the statistics for the various APs using passive scanning but without having to wait for the whole beacon interval on each channel. This algorithm, however, requires all the APs in the network to be synchronized. By using a cooperative approach, we can have the various MNs sharing information about the beacon intervals of their APs. In this way, we only need to have the MNs synchronized amongst themselves (e.g., via NTP) without any synchronization required on the network side.
• Interaction between nodes in an infrastructure network and nodes in an ad-hoc/mesh network.
1) An MN in ad-hoc mode can send information about its ad-hoc network. In this way MNs of the infrastructure network can decide if it is convenient for them to switch to the ad-hoc network (this would also free resources on the infrastructure network). This, for example, can happen if there is a lack of coverage or if there is high congestion in the infrastructure network. Also, an MN might switch to an ad-hoc network if it has to recover some data available in the ad-hoc network itself (e.g., sensor networks).
2) If two parties are close to each other, they can decide to switch to the ad-hoc network discovered earlier and talk to each other without any infrastructure support. They might also create an ad-hoc network on their own using a default channel, if no other ad-hoc network is available.
As future work, we will look in more detail at application layer mobility, load balancing and call admission control. We will investigate the possibility of having some network elements such as APs support A-MN and RN functionalities; this would be useful in scenarios where only a few MNs support CR. Finally, we will look at how IEEE 802.21 [13] could integrate and extend CR.
| 10,914 |
cs0701046
|
2952231607
|
In a wireless network, mobile nodes (MNs) repeatedly perform tasks such as layer 2 (L2) handoff, layer 3 (L3) handoff and authentication. These tasks are critical, particularly for real-time applications such as VoIP. We propose a novel approach, namely Cooperative Roaming (CR), in which MNs can collaborate with each other and share useful information about the network in which they move. We show how we can achieve seamless L2 and L3 handoffs regardless of the authentication mechanism used and without any changes to either the infrastructure or the protocol. In particular, we provide a working implementation of CR and show how, with CR, MNs can achieve a total L2+L3 handoff time of less than 16 ms in an open network and of about 21 ms in an IEEE 802.11i network. We consider behaviors typical of IEEE 802.11 networks, although many of the concepts and problems addressed here apply to any kind of mobile network.
|
@cite_6 introduce a location-sensing mechanism based on cooperative behavior among stations. Stations share location information about other stations and about landmarks so as to improve position prediction and reduce training.
|
{
"abstract": [
"We present the cooperative location-sensing system (CLS), an adaptive location-sensing system that enables devices to estimate their position in a self-organizing manner without the need for an extensive infrastructure or training. Hosts cooperate and share positioning information. CLS uses a grid representation that allows an easy incorporation of external information to improve the accuracy of the position estimation. We evaluated the performance of CLS via simulation and investigated the impact of the density of landmarks, degree of connectivity, range error, and grid resolution on the accuracy. We found that the average error is less than 2 of the transmission range, when used in a terrain with 20 of the hosts to be landmarks, average network connectivity above 7, and distance estimation error equal to 5 of the transmission range."
],
"cite_N": [
"@cite_6"
],
"mid": [
"2158442058"
]
}
|
Cooperation Between Stations in Wireless Networks
|
Enabling VoIP services in wireless networks presents many challenges, including QoS, terminal mobility and congestion control. In this paper we focus on IEEE 802.11 wireless networks and address issues introduced by terminal mobility.
In general, a handoff happens when an MN moves out of the range of one Access Point (AP) and enters the range of a new one. We have two possible scenarios:
1) If the old AP and the new AP belong to the same subnet, the MN's IP address does not have to change at the new AP. The MN performs a L2 handoff.
2) If the old AP and the new AP belong to different subnets, the MN has to go through the normal L2 handoff procedure and also has to request a new IP address in the new subnet, that is, it has to perform a L3 handoff.
Fig. 1 shows the steps involved in a L2 handoff process in an open network. As we have shown in [1] and Mishra et al. have shown in [2], the time needed by an MN to perform a L2 handoff is usually on the order of a few hundred milliseconds, thus causing a noticeable interruption in any ongoing real-time multimedia session. In either open 802.11 networks or 802.11 networks with WEP enabled, the discovery phase constitutes more than 90% of the total handoff time [1], [2]. In 802.11 networks with either WPA or 802.11i enabled, the handoff delay is dominated by the authentication process that is performed after associating to the new AP. In particular, no data can be exchanged amongst MNs before the authentication process completes successfully. In the most general case, both authentication delay and scanning delay are present. These two delays are additive, so, in order to achieve seamless real-time multimedia sessions, both delays have to be addressed and, if possible, removed.
When a L3 handoff occurs, an MN has to perform a normal L2 handoff and update its IP address. We can break the L3 handoff into two logical steps: subnet change detection and new IP address acquisition via DHCP [3]. Each of these steps introduces a significant delay.
In this paper we focus on the use of station cooperation to achieve seamless L2 and L3 handoffs. We refer to this specific use of cooperation as Cooperative Roaming (CR). The basic idea behind CR is that MNs subscribe to the same multicast group, creating a new plane for exchanging information about the network, and help each other in different tasks. For example, an MN can discover surrounding APs and subnets by simply asking other MNs for this information. Similarly, an MN can ask another MN to acquire a new IP address on its behalf so that the first MN can get an IP address for the new subnet while still in the old subnet.
For brevity and clarity's sake, in this paper we do not consider handoffs between different administrative domains and AAA-related issues although CR could be easily extended to support them. Incentives for cooperation are also not considered since they are a standard problem for any system using some form of cooperation (e.g., file sharing) and represent a separate research topic [4], [5], [6], [7], [8].
The rest of the paper is organized as follows. In Section II we show the state of the art for handoffs in wireless networks, in Section III we briefly describe how IPv4 and IPv6 multicast addressing is used in the present context, Section IV describes how, with cooperation, MNs can achieve seamless L2 and L3 handoffs. Section V introduces cooperation in the L2 authentication process to achieve seamless handoffs regardless of the particular authentication mechanism used. Section VI considers security and Section VII shows how streaming media can be supported in CR. In Section VIII we analyze CR in terms of bandwidth and energy usage, Section IX presents our experiments and results and Section X shows how we can achieve seamless application layer mobility with CR. In Section XI we apply CR to load balancing and Section XII presents an alternative to multicast. Finally, Section XIII concludes the paper.
IV. COOPERATIVE ROAMING
In this section we show how MNs can cooperate with each other in order to achieve seamless L2 and L3 handoffs.
A. Overview
In [1] we have introduced a fast MAC layer handoff mechanism for achieving seamless L2 handoffs in environments such as hospitals, schools, campuses, enterprises, and other places where MNs always encounter the same APs. Each MN saves information regarding the surrounding APs in a cache. When an MN needs to perform a handoff and it has valid entries in its cache, it will directly use the information in the cache without scanning. If it does not have any valid information in its cache, the MN will use an optimized scanning procedure called selective scanning to discover new APs and build the cache. In the cache, APs are ordered according to the signal strength that was registered when the scanning was performed, that is, right before changing AP. APs with stronger signal strength appear first. As mentioned in Section I, in open networks the scanning process is responsible for more than 90% of the total handoff time. The cache reduces the L2 handoff time to only a few milliseconds (see Table I), and cache misses due to errors in movement prediction introduce only a few milliseconds of additional delay. Earlier, we had extended [27] the mechanism introduced in [1] to support L3 handoffs. MNs also cache L3 information such as their own IP address, default router's IP address and subnet identifier. A subnet identifier uniquely identifies a subnet. By caching the subnet identifier, a subnet change is detected much faster and L3 handoffs are triggered every time the new AP and old AP have different subnet identifiers. Faster L3 handoffs can be achieved since IP address and default router for the next AP and subnet are already known and can be immediately used. The approach in [27] achieves seamless handoffs in open networks only; it utilizes the default router's IP address as subnet identifier and it uses a suboptimal algorithm to acquire L3 information.
Here, we consider the same caching mechanism used in [27]. In order to support multi-homed routers, however, we use the subnet address as subnet identifier. By knowing the subnet mask and the default router's IP address we can calculate the network address of a certain subnet. Fig. 2 shows the structure of the cache. Additional information such as last IP address used by the MN, lease expiration time and default router's IP address can be extracted from the DHCP client lease file, available in each MN.
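To make the cache just described concrete, here is a small Python sketch (our illustration, not code from the paper) of a cache entry and of how the subnet identifier can be derived from the default router's address and netmask, as discussed above.

```python
# Illustrative sketch of the CR cache entry; field names are assumptions.
from dataclasses import dataclass
import ipaddress

@dataclass
class CacheEntry:
    bssid: str      # MAC address of the AP, e.g. "00:11:22:33:44:55"
    channel: int    # 802.11 channel the AP operates on
    subnet_id: str  # network address of the subnet behind the AP

def subnet_id(default_router: str, netmask: str) -> str:
    """Derive the subnet identifier (network address) from router IP and mask."""
    return str(ipaddress.ip_interface(f"{default_router}/{netmask}").network.network_address)

# Example: a router at 10.1.2.1 with mask 255.255.255.0 yields subnet ID "10.1.2.0",
# which also handles multi-homed routers, unlike using the router address itself.
entry = CacheEntry("00:11:22:33:44:55", 6, subnet_id("10.1.2.1", "255.255.255.0"))
```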
In CR, an MN needs to acquire information about the network if it does not have any valid information in the cache or if it does not have L3 information available for a particular subnet. In such a case, the MN asks other MNs for the information it needs so that the MN does not have to find out about neighboring APs by scanning. In order to share information, in CR, all MNs subscribe to the same multicast group. We call an MN that needs to acquire information about its neighboring APs and subnets a requesting MN (R-MN). By using CR, an R-MN can ask other MNs if they have such information by sending an INFOREQ multicast frame. The MNs that receive such a frame check if they have the information the R-MN needs and if so, they send an INFORESP multicast frame back to the R-MN containing the information the R-MN needs.
B. L2 Cooperation Protocol
In this section, we focus on the information exchange needed by a L2 handoff.
The information exchanged in the INFOREQ and INFORESP frames is a list of {BSSID, channel, subnet ID} entries, one for each AP in the MN's cache (see Fig. 2).
When an R-MN needs information about its neighboring APs and subnets, it sends an INFOREQ multicast frame. Such a frame contains the current content of the R-MN's cache, that is, all APs and subnets known to the R-MN. When an MN receives an INFOREQ frame, it checks if its own cache and the R-MN's cache have at least one AP in common. If the two caches have at least one AP in common and if the MN's cache has some APs that are not present in the R-MN's cache, the MN sends an INFORESP multicast frame containing the cache entries for the missing APs. MNs that have APs in common with the R-MN have been in the same locations as the R-MN and so have a higher probability of having the information the R-MN is looking for.
The MN sends the INFORESP frame after waiting for a random amount of time to be sure that no other MNs have already sent such information. In particular, the MN checks the information contained in INFORESP frames sent to the same R-MN by other MNs during the random waiting time. This prevents many MNs from sending the same information to the R-MN, all at the same time.
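The response logic just described can be sketched as follows; the cache representation, the 100 ms backoff bound and the helper callbacks are illustrative assumptions, not part of CR's specification.

```python
# Sketch: decide whether to answer an INFOREQ and suppress duplicate replies.
import random, time

def handle_inforeq(my_cache, r_mn_cache, heard_bssids, send_multicast):
    """Caches map BSSID -> (channel, subnet_id); heard_bssids() returns BSSIDs
    already advertised to this R-MN by other MNs during the waiting time."""
    common = my_cache.keys() & r_mn_cache.keys()
    missing = {b: my_cache[b] for b in my_cache.keys() - r_mn_cache.keys()}
    if not common or not missing:
        return  # we were never near the R-MN's APs, or we have nothing new to add
    deadline = time.time() + random.uniform(0.0, 0.1)    # random wait (bound assumed)
    while time.time() < deadline:
        missing = {b: v for b, v in missing.items() if b not in heard_bssids()}
        if not missing:
            return                                       # someone else already answered
        time.sleep(0.005)
    send_multicast("INFORESP", missing)                  # multicast so other MNs learn too
```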
When an MN other than R-MN receives an INFORESP multicast frame, it performs two tasks. First, it checks if someone is lying by providing the wrong information and if so, it tries to fix it (see Section VI-A); secondly, it records the cache information provided by such a frame in its cache even though the MN did not request such information. By collecting unsolicited information, each MN can build a bigger cache in less time and in a more efficient manner requiring fewer frame exchanges. This is very similar to what happens in software such as Bit-Torrent where the client downloads different parts of the file from different peers. Here, we collect different cache chunks from different MNs.
In order to improve efficiency and further minimize frame exchange, MNs can also decide to collect information contained in the INFOREQ frames.
C. L3 Cooperation Protocol
In a L3 handoff an MN has to detect a change in subnet and also has to acquire a new IP address. When a L2 handoff occurs, the MN compares the cached subnet identifiers for the old and new AP. If the two identifiers are different, then the subnet has changed. When a change in subnet is detected, the MN needs to acquire a new IP address for the new subnet. The new IP address is usually acquired by using the DHCP infrastructure. Unfortunately, the typical DHCP procedure can take up to one second [27].
CR can help MNs acquire a new IP address for the new subnet while still in the old subnet. When an R-MN needs to perform a L3 handoff, it needs to find out which other MNs in the new subnet can help. We call such MNs Assisting MNs (A-MNs). Once the R-MN knows the A-MNs for the new subnet, it asks one of them to acquire a new IP address on its behalf. At this point, the selected A-MN acquires the new IP address via DHCP and sends it to the R-MN which is then able to update its multimedia session before the actual L2 handoff and can start using the new IP address right after the L2 handoff, hence not incurring any additional delay (see Section X).
We now show how A-MNs can be discovered and explain in detail how they can request an IP address on behalf of other MNs in a different subnet.
1) A-MNs Discovery: By using IP multicast, an MN can directly talk to different MNs in different subnets. In particular, the R-MN sends an AMN DISCOVER multicast packet containing the new subnet ID. Other MNs receiving such a packet check the subnet ID to see if they are in the subnet specified in the AMN DISCOVER. If so, they reply with an AMN RESP unicast packet. This packet contains the A-MN's default router IP address, the A-MN's MAC and IP addresses. This information is then used by the R-MN to build a list of available A-MNs for that particular subnet.
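A minimal sketch of this exchange is shown below, assuming JSON-encoded messages and an administratively scoped multicast group picked purely for illustration (the paper specifies neither).

```python
# Sketch: R-MN multicasts AMN_DISCOVER and collects unicast AMN_RESP replies.
import json, socket

GROUP, PORT = "239.0.0.1", 5007        # assumed group/port, not from the paper

def discover_amns(target_subnet_id: str, timeout: float = 0.5):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.settimeout(timeout)
    query = {"type": "AMN_DISCOVER", "subnet_id": target_subnet_id}
    sock.sendto(json.dumps(query).encode(), (GROUP, PORT))
    amns = []
    try:
        while True:
            data, addr = sock.recvfrom(1500)
            reply = json.loads(data)
            if reply.get("type") == "AMN_RESP":
                amns.append(reply)     # carries the A-MN's MAC/IP and its default router
    except socket.timeout:
        pass
    return amns                        # list of candidate A-MNs for the target subnet
```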
Once the MN knows which A-MNs are available in the new subnet, it can cooperate with them in order to acquire the L3 information it needs (e.g., new IP address, router information), as described below.
2) Address Acquisition: When an R-MN needs to acquire a new IP address for a particular subnet, it sends a unicast IP REQ packet to one of the available A-MNs for that subnet. Such a packet contains the R-MN's MAC address. When an A-MN receives an IP REQ packet, it extracts the R-MN's MAC address from the packet and starts the DHCP process by inserting the R-MN's MAC address in the CHaddr field of the DHCP packets. The A-MN will also have to set the broadcast bit in the DHCP packets in order for it to receive DHCP packets with a MAC address other than its own in the CHaddr field. All of this allows the A-MN to acquire a new IP address on behalf of the R-MN. This procedure is completely transparent to the DHCP server. Once the DHCP process has been completed, the A-MN sends an IP RESP multicast packet containing the default router's IP address for the new subnet, the R-MN's MAC address and the new IP address for the R-MN. The R-MN checks the MAC address in the IP RESP packet to be sure that the packet is not for a different R-MN. Once it has verified that the IP RESP is for itself, the R-MN saves the new IP address together with the new default router's IP address.
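The CHaddr trick can be illustrated with a hand-built DHCPDISCOVER. This is only a sketch under stated assumptions; the authors instead modified the ISC DHCP client, and sending raw DHCP traffic normally requires administrative privileges.

```python
# Sketch: build a DHCPDISCOVER carrying the R-MN's MAC in CHaddr with the
# broadcast flag set, so the A-MN can obtain an address on the R-MN's behalf.
import random, socket, struct

def dhcp_discover_for(r_mn_mac: bytes) -> bytes:
    """r_mn_mac: the 6 raw bytes of the R-MN's MAC address."""
    xid = random.randint(0, 0xFFFFFFFF)                 # transaction ID
    flags = 0x8000                                      # broadcast bit set
    chaddr = r_mn_mac + b"\x00" * 10                    # CHaddr padded to 16 bytes
    header = struct.pack("!BBBBIHH4s4s4s4s16s64s128s",
                         1, 1, 6, 0,                    # BOOTREQUEST, Ethernet, hlen=6, hops=0
                         xid, 0, flags,
                         b"\x00" * 4, b"\x00" * 4,      # ciaddr, yiaddr
                         b"\x00" * 4, b"\x00" * 4,      # siaddr, giaddr
                         chaddr, b"\x00" * 64, b"\x00" * 128)
    options = b"\x63\x82\x53\x63" + b"\x35\x01\x01" + b"\xff"  # cookie, DISCOVER, end
    return header + options

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("", 68))                                     # DHCP client port (needs privileges)
sock.sendto(dhcp_discover_for(b"\xaa\xbb\xcc\xdd\xee\xff"), ("255.255.255.255", 67))
```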
If the R-MN has more than one possible subnet to move to, it follows the same procedure for each subnet. In this way the R-MN builds a list of {router, new IP address} pairs, one pair for each one of the possible next subnets. After moving to the new subnet the R-MN renews the lease for the new IP address. The R-MN can start this process at any time before the L2 handoff, keeping in mind that the whole process might take one second or more to complete and that the lease times of the pre-acquired addresses must be taken into account (the DHCP client lease file can provide information on current lease times). By reserving IP addresses before moving to the new subnet, we could waste IP addresses and exhaust the available IP pool. Usually, however, the lease time in a mobile environment is short enough to guarantee a sufficient re-use of IP addresses.
Acquiring an IP address from a different subnet other than the one the IP is for could also be achieved by introducing a new DHCP option. Using this option, the MN could ask the DHCP server for an IP address for a specific subnet. This would however, require changes to the DHCP protocol.
V. COOPERATIVE AUTHENTICATION
In this section we propose a cooperative approach for authentication in wireless networks. The proposed approach is independent of the particular authentication mechanism used. It can be used for VPN, IPsec, 802.1x or any other kind of authentication. We focus on the 802.1x framework used in Wi-Fi Protected Access (WPA) and IEEE 802.11i [29].
A. IEEE 802.1x Overview
The IEEE 802.1x standard defines a way to perform access control and authentication in IEEE 802 LANs and in particular in IEEE 802.11 wireless LANs using three main entities: supplicant, authenticator and authentication server (the authentication server is not required in all authentication mechanisms). The supplicant is the client that has to perform the authentication in order to gain access to the network; the authenticator, among other things, relays packets between supplicant and authentication server; the authentication server, typically a RADIUS server [30], performs the authentication process with the supplicant by exchanging and validating the supplicant's credentials. The critical point, in terms of handoff time in the 802.1x architecture, is that during the authentication process the authenticator allows only EAP Over LAN (EAPOL) traffic to be exchanged with the supplicant. No other kind of traffic is allowed.
B. Cooperation in the Authentication Process
A well-known property of the wireless medium in IEEE 802.11 networks is that the medium is shared and therefore every MN can hear packets that other stations (STAs) send and receive. This is true when MN and STAs are connected to the same AP, that is, are on the same channel. In [14] Liu et al. make use of this particular characteristic and show how MNs can cooperate with each other by relaying each other's packets so as to achieve the optimum bit-rate. In this section we show how a similar approach can be used for authentication purposes.
For simplicity, in the following discussion we suppose that one authenticator manages one whole subnet, so that authentication is required after each L3 handoff. In such a scenario and in this context, we also refer to a subnet as an Authentication Domain (AD). In general, an MN can share the information about ADs in the same way it shares information about subnets. In doing so, an MN knows whether the next AP belongs to the same AD as the current AP or not. In a L2 or L3 handoff we have an MN which performs handoff and authentication, a Correspondent Node (CN) which has an established multimedia session with the MN and a Relay Node (RN) which relays packets to and from the MN. Available RNs for a particular AD can be discovered following a similar procedure to the one described earlier for the discovery of A-MNs (see Section IV-C.1). The difference here is that RN and MN have to be connected to the same AP after the handoff. In this scenario, we assume that RNs are a subset of the available A-MNs. The basic idea is that while the MN is authenticating in the new AD, it can still communicate with the CN via the RN which relays packets to and from the MN (see Fig. 3). Let us look at this mechanism in more detail. Before the MN changes AD/AP, it selects an RN from the list of available RNs for the new AD/AP and sends a RELAY REQ multicast frame to the multicast group. The RELAY REQ frame contains the MN's MAC and IP addresses, the CN's IP address and the selected RN's MAC and IP addresses. The RELAY REQ will be received by all the STAs subscribed to the multicast group and, in particular, it will be received by both the CN and the RN. The RN will relay packets for the MN identified by the MAC address received in the RELAY REQ frame. After performing the handoff, the MN needs to authenticate before it can resume any communication via the AP. However, because of the shared nature of the medium, the MN will start sending packets to the RN as if it was already authenticated. The authenticator will drop the packets, but the RN can hear the packets on the medium and relay them to the CN using its own encryption keys, that is, using its secure connection with the AP. The CN is aware of the relaying because of the RELAY REQ, and so it will start sending packets for the MN to the RN as well. While the RN is relaying packets to and from the MN, the MN will perform its authentication via 802.1x or any other mechanism. Once the authentication process is over and the MN has access to the infrastructure, it can stop the relaying and resume normal communication via the AP. When this happens and the CN starts receiving packets from the MN via the AP, it will stop sending packets to the RN and will resume normal communication with the MN. The RN will detect that it does not need to relay any packet for the MN any longer and will return to normal operation.
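The RN side of this relay can be sketched as follows; the frame abstraction, the forwarding helpers and the relay lifetime are assumptions made for illustration (the bounded lifetime anticipates the countermeasures of Section VI-B).

```python
# Sketch: relay frames for an MN that sent a RELAY_REQ, for a limited time only.
import time

RELAY_LIFETIME = 10.0          # seconds; assumed upper bound for an 802.1x handshake

class RelaySession:
    def __init__(self, mn_mac, mn_ip, cn_ip):
        self.mn_mac, self.mn_ip, self.cn_ip = mn_mac, mn_ip, cn_ip
        self.expires = time.time() + RELAY_LIFETIME

def rn_relay_loop(next_frame, forward_to_ap, forward_to_mn, sessions):
    """sessions maps an MN's MAC to its RelaySession; helpers are hypothetical."""
    while True:
        frame = next_frame()                            # overheard or received 802.11 frame
        s = sessions.get(frame.peer_mac)
        if s is None or time.time() > s.expires:
            sessions.pop(frame.peer_mac, None)          # stop relaying once the lifetime expires
            continue
        if frame.from_mn:
            forward_to_ap(frame.payload, dst=s.cn_ip)   # re-sent over the RN's secure link to the AP
        else:
            forward_to_mn(frame.payload, dst=s.mn_mac)  # forwarded MN-to-MN, without the AP
```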
In order for this relaying mechanism to work with WPA and 802.11i, MN and RN have to exchange unencrypted L2 data packets for the duration of the relay process. These packets are then encrypted by the RN by using its own encryption keys and are sent to the AP. By responding to an RN discovery, RNs implicitly agree to relay such frames. Such an exchange of unencrypted L2 frames does not represent a security concern since packets can still be encrypted at higher layers and since the relaying happens for a very limited amount of time (see Section VI-B).
One last thing worth mentioning is that by using a relay, we remove the bridging delay in the L2 handoff [1], [2]. Usually, after an MN changes AP, the switch continues sending packets for the MN to the old AP until it updates the information regarding the new AP on its ports. The bridging delay is the amount of time needed by the switch to update this information on its ports. When we use a relay node in the new AP, this relay node is already registered to the correct port on the switch, therefore no update is required on the switch side and the MN can immediately receive packets via the RN.
C. Relay Process
In the previous section we have shown how an MN can perform authentication while having data packets relayed by the RN. In this section we explain in more detail how relaying is performed. Fig. 4 shows the format of a general IEEE 802.11 MAC layer frame. Among the many fields we can identify a Frame Control field and four Address fields. For the relay process we are interested in the four Address fields and in the To DS and From DS one-bit fields that are part of the Frame Control field. The To DS bit is set to one in data frames that are sent to the Distribution System (DS), that is, the system that interconnects BSSs and LANs to create an ESS [31]. The From DS bit is set to one in data frames exiting the DS. The possible meanings of the four Address fields, listed in Table II, are: Destination Address (DA), Source Address (SA), BSSID, Receiver Address (RA) and Transmitter Address (TA). In infrastructure mode, when an MN sends a packet, this packet is always sent first to the AP even if both source and destination are associated with the same AP. For such packets the MN sets the To DS bit. Other MNs on the same channel can hear the packet but discard it because, as the To DS field and Address fields suggest, such a packet is meant for the AP. When the AP has to send a packet to an MN, it sets the From DS bit. All MNs that can hear this packet discard it, except for the MN the packet is for.
When both fields, To DS and From DS, have a value of one, the packet is sent on the wireless medium from one AP to another AP. In ad-hoc mode, both fields have a value of zero and the frames are directly exchanged between MNs with the same Independent Basic Service Set (IBSS).
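For reference, the standard mapping between the To DS / From DS combinations and the four address fields can be captured in a small helper; the (0, 0) case is the one exploited below for direct MN-to-MN frames while staying in infrastructure mode.

```python
# Interpretation of the four 802.11 address fields for each (To DS, From DS) case.
ADDRESS_MEANINGS = {
    (0, 0): ("DA", "SA", "BSSID", None),   # ad-hoc style, station to station
    (0, 1): ("DA", "BSSID", "SA", None),   # leaving the DS, AP to station
    (1, 0): ("BSSID", "SA", "DA", None),   # entering the DS, station to AP
    (1, 1): ("RA", "TA", "DA", "SA"),      # AP to AP over the wireless medium
}

def describe(to_ds: int, from_ds: int) -> str:
    a1, a2, a3, a4 = ADDRESS_MEANINGS[(to_ds, from_ds)]
    return f"ToDS={to_ds} FromDS={from_ds}: Addr1={a1}, Addr2={a2}, Addr3={a3}, Addr4={a4}"

print(describe(0, 0))   # the combination CR uses, so the AP simply drops the frame
```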
In [32] Chandra et al. present an optimal way to continuously switch a wireless card between two or more infrastructure networks or between infrastructure and ad-hoc networks so that the user has the perception of being connected to multiple networks at the same time while using a single wireless card. This approach works well if no real-time traffic is present. When we consider real-time traffic and its delay constraints, continuous switching between different networks and, in particular, between infrastructure and ad-hoc mode is no longer a feasible solution. Although optimal algorithms have been proposed for this [32], the continuous switching of the channel and/or operating mode takes a non-negligible amount of time which becomes particularly significant if any form of L2 authentication is present in the network. In such cases, the time needed by the wireless card to continuously switch between networks can introduce significant delay and packet loss.
The approach we propose is based on the idea that ad-hoc mode and infrastructure mode do not have to be mutually exclusive, but rather can complement each other. In particular, MNs can send ad-hoc packets while in infrastructure mode so that other MNs on the shared medium, that is, on the same channel, can receive such packets without involving the AP. Such packets use the 802.11 ad-hoc MAC addresses as specified in [31]. That is, both fields To DS and From DS have a value of zero and the Address fields are set accordingly as specified in Table II. In doing so, MNs can directly send and receive packets to and from other MNs without involving the AP and without having to switch to ad-hoc mode.
This mechanism allows an RN to relay packets to and from an R-MN without significantly affecting any ongoing multimedia session that the RN might have via the AP. Such an approach can be useful in all those scenarios where an MN in infrastructure mode needs to communicate with other MNs in infrastructure or ad-hoc mode [33] and a continuous change between infrastructure mode and ad-hoc mode is either not possible or not convenient.
VI. SECURITY
Security is a major concern in wireless environments. In this section we address some of the problems encountered in a cooperative environment, focusing on CR.
A. Roaming Security Issues
In this particular context, a malicious user might try to propagate false information among the cooperating MNs. In particular, we have to worry about three main vulnerabilities:
1) A malicious user might want to re-direct STAs to fake APs where their traffic can be sniffed and private information can be compromised. 2) A malicious user might try to perform DoS attacks by redirecting STAs to far or non-existing APs. This would cause the STAs to fail the association to the next AP during the handoff process. The STA would then have to rely on the legacy scanning process to re-establish network connectivity. 3) At L3, a malicious user might behave as an A-MN and try to disrupt a STA's service by providing invalid IP addresses. In general, we have to remember that the cooperative mechanism described here works on top of any other security mechanism that has been deployed in the wireless network (e.g., 802.11i, WPA). In order for a malicious user to send and receive packets to and from the multicast group, it has to have, first of all, access to the network and thus be authenticated. In such a scenario, a malicious user is a STA with legal access to the network. This means that MAC spoofing attacks are not possible as a change in MAC address would require a new authentication handshake with the network. This also means that once the malicious user has been identified, it can be isolated.
How can we attempt to isolate a malicious node? Since the INFORESP frame is multicast, each MN that has the same information as the one contained in such a frame can check that the information in the frame is correct and that no one is lying. If it finds out that the INFORESP frame contains the wrong information, it immediately sends an INFOALERT multicast frame. Such a frame contains the MAC address of the suspicious STA. This frame is also sent by an R-MN that has received a wrong IP address and contains the MAC address of the A-MN that provided that IP address. If more than one alert for the same suspicious node is triggered by different nodes, the suspicious node is considered malicious and the information it provides is ignored. Let us look at this last point in more detail.
One single INFOALERT does not trigger anything. In order for an MN to be categorized as bad, there has to be a certain number of INFOALERT multicast frames sent by different nodes, all regarding the same suspicious MN. This certain number can be configured according to how paranoid someone is about security but, regardless, it has to be more than one. Let us assume this number to be five. If a node receives five INFOALERT multicast frames from five different nodes regarding the same MN, then it marks such an MN as bad. This mechanism could be compromised if either a malicious user can spoof five different MAC addresses (and this is not likely for the reasons we have explained earlier) or if there are five different malicious users that are correctly authenticated in the wireless network and that can coordinate their attacks. If this last situation occurs, then there are bigger problems in the network to worry about than handoff policies. Choosing the number of INFOALERT frames required to mark a node as malicious to be very large would have advantages and disadvantages. It would give more protection against the exploitation of this mechanism for DoS attacks as the number of malicious users trying to exploit INFOALERT frames would have to be high. On the other hand, it would also make the mechanism less sensitive to detect a malicious node as the number of INFOALERT frames required to mark the node as bad might never be reached or it might take too long to reach. So, there is clearly a trade-off.
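A compact sketch of this alert-counting policy, with the threshold of five distinct reporters used in the example above:

```python
# Sketch: mark a node as malicious only after alerts from enough distinct reporters.
from collections import defaultdict

ALERT_THRESHOLD = 5                        # configurable, but always greater than one
alerts = defaultdict(set)                  # suspect MAC -> set of reporter MACs
blacklist = set()                          # nodes whose information is ignored

def on_infoalert(suspect_mac: str, reporter_mac: str):
    if reporter_mac == suspect_mac:
        return                             # ignore self-reports
    alerts[suspect_mac].add(reporter_mac)  # duplicates from the same reporter count once
    if len(alerts[suspect_mac]) >= ALERT_THRESHOLD:
        blacklist.add(suspect_mac)
```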
Regardless, in either one of the three situations described at the beginning of this section, the MN targeted by the malicious user would be able to easily recover from an attack by using legacy mechanisms such as active scanning and DHCP address acquisition, typically used in non-cooperative environments.
B. Cooperative Authentication and Security
In order to improve security in the relay process, we introduce some countermeasures that nodes can use to prevent exploitation of the relay mechanism. The main concern in having a STA relay packets for an unauthenticated MN is that such an MN might try to repeatedly use the relay mechanism and never authenticate to the network. In order to prevent this, we introduce the following countermeasures: 1) Each RELAY REQ frame allows an RN to relay packets for a limited amount of time. After this time has passed, the relaying stops. The relaying of packets is required only for the time needed by the MN to perform the normal authentication process. 2) An RN relays packets only for those nodes which have sent a RELAY REQ packet to it while still connected to their previous AP. 3) RELAY REQ packets are multicast. All the nodes in the multicast group can help in detecting bad behaviors such as one node repeatedly sending RELAY REQ frames. All of the above countermeasures work if we can be sure of the identity of a node and, in general, this is not always the case as malicious users can perform MAC spoofing attacks, for example. However, as we have explained in Section VI-A, MAC spoofing attacks are not possible in the present framework.
This said, we have to remember that before an RN can relay packets for an MN, it has to receive the proper RELAY REQ packet from the MN. Such a packet has to be sent by the MN while still connected to the old AP. This means that the MN has to be authenticated with the previous AP in order to send such a packet. Furthermore, once the relaying timeout has expired, the RN will stop relaying packets for that MN. At this point, even if the MN can change its MAC address, it would not be able to send a new RELAY REQ as it has to first authenticate again with the network (e.g., using 802.11i) and therefore no relaying would take place. In the special case in which the old AP belongs to an open network (under normal conditions this is very unlikely, but it might happen for handoffs between different administrative domains, for example), a malicious node could perform MAC spoofing and exploit the relay mechanism in order to gain access to the secure network. In this case, securing the multicast group by performing authentication and encryption at the multicast group level could prevent this kind of attack, although it may require infrastructure support.
In conclusion, we can consider the three countermeasures introduced at the beginning of this section, to be more than adequate in avoiding exploitation of the relaying mechanism.
VII. STREAMING MEDIA SUPPORT
SIP can be used, among other things, to update new and ongoing media sessions. In particular, the IP address of one or more of the participants in the media session can be updated. In general, after an MN performs a L3 handoff, a media session update is required to inform the various parties about the MN's new IP address [34].
If the CN does not support cooperation, the relay mechanism as described in Section V-B does not work and the CN keeps sending packets to the MN's old IP address, being unaware of the relay process. This is the case, for example, of an MN establishing a streaming video session with a streaming media server. In this particular case, assuming that the media server supports SIP, a SIP session update is performed to inform the media server that the MN's IP address has changed. The MN sends a re-INVITE to the media server updating its IP address to the RN's IP address. In this way, the media server starts sending packets to the RN and relaying can take place as described earlier.
Once the relaying is over, if the MN's authentication was successful, the MN sends a second re-INVITE including its new IP address, otherwise, once the timeout for relaying expires, the relaying process stops and the RN terminates the media session with the media server.
SIP and media session updates will be discussed further in Section X.
VIII. BANDWIDTH AND ENERGY USAGE
By sharing information, the MNs in the network do not have to perform individual tasks such as scanning, which would normally consume a considerable amount of bandwidth and energy. This means that sharing data among MNs is usually more energy and bandwidth efficient than having each MN perform the corresponding individual task. We discuss the impact of CR on energy and bandwidth below.
In CR, bandwidth usage and energy expended are mainly determined by the number of multicast packets that each client has to send for acquiring the information it needs. The number of multicast packets is directly proportional to the number of clients supporting the protocol that are present in the network. In general, more clients introduce more requests and more responses. However, having more clients that support the protocol ensures that each client can collect more information with each request, which means that overall each client will need to send fewer packets. Furthermore, by having the INFORESP frames as multicast frames, many MNs will benefit from each response and not just the MN that sent the request. This will minimize the number of packets exchanged, in particular the number of INFOREQ sent.
To summarize, as the number of clients increases, multicast suppression takes place, so the number of packets sent remains constant.
In general, sending a few long packets is more efficient than sending many short ones. As explained in Section IV-B, for each AP the information included in an INFOREQ or INFORESP packet is a cache entry (see Fig. 2), that is, a triple {BSSID, Channel, Subnet ID} for a total size of 6+4+4 = 14 bytes. Considering that an MTU size is 1500 bytes, that each cache entry takes about 14 bytes, and that IP and UDP headers take together a total of 28 bytes, each INFOREQ and INFORESP packet can carry information about no more than 105 APs for a maximum of 1472 bytes.
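A quick sanity check of these figures:

```python
# Values quoted in the text: MTU, IP+UDP header overhead and cache-entry size.
MTU, HEADERS, ENTRY = 1500, 28, 6 + 4 + 4
payload = MTU - HEADERS          # 1472 bytes available for cache entries
print(payload // ENTRY)          # -> 105 entries (APs) per INFOREQ/INFORESP packet
```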
In [35] Henderson et al. analyze the behavior of wireless users in a campus-wide wireless network over a period of seventeen weeks. They found that:
• Users spend almost all of their time at their home location. The home location is defined as the AP where they spend most of the time and all the APs within 50 meters of this one.
• The median number of APs visited by a user is 12, but the median differs for each device type, with 17 for laptops, 9 for PDAs and 61 for VoIP devices such as VoIP phones. This shows that most devices will spend most of their time at their home location, which means that they will mostly deal with a small number of APs. However, even if we consider the median number of APs that clients use throughout the trace period of seventeen weeks, we can see that when using laptops and PDAs each MN would have to know about the nearest 9-17 APs. For VoIP devices that are always on, the median number of APs throughout the trace period is 61. In our implementation each INFOREQ and INFORESP packet carries information about 105 APs at most; regardless of the device type, a single packet is therefore usually sufficient.
The relay mechanism introduced in Section V for cooperative authentication introduces some bandwidth overhead. This is because for each packet that has to be sent by the MN to the CN and vice-versa, the packet occupies the medium twice; once when being transmitted between MN and RN and once when being transmitted between RN and AP. This, however, happens only for the few seconds needed by the MN to authenticate. Furthermore, both of the links MN-RN and RN-AP are maximum bit-rate links, so the time on air for each data packet is small.
IX. EXPERIMENTS
In the present section we describe implementation details and measurement results for CR.
A. Environment
All the experiments were conducted at Columbia University on the 7th floor of the Schapiro building. We used four IBM Thinkpad laptops: three IBM T42 laptops using Intel Centrino Mobile technology with a 1.7 GHz Pentium processor and 1GB RAM and one IBM laptop with an 800 MHz Pentium III processor and 384 MB RAM. Linux kernel version 2.4.20 was installed on all the laptops. All the laptops were equipped with a Linksys PCMCIA Prism2 wireless card. Two of them were used as wireless sniffers, one of them was used as roaming client and one was used as "helper" to the roaming client, that is, it replied to INFOREQ frames and behaved as an A-MN. For cooperative authentication the A-MN was also used as RN. Two Dell Dimension 2400 desktops were used, one as CN and the other as RADIUS server [30]. The APs used for the experiments were a Cisco AP1231G which is an enterprise AP and a Netgear WG602 which is a SOHO/home AP.
B. Implementation Details
In order to implement the cooperation protocol we modified the wireless card driver and the DHCP client. Furthermore, a cooperation manager was also created in order to preserve state information and coordinate the wireless driver and the DHCP client. For cooperative authentication, the WPA supplicant was also slightly modified to allow relay of unencrypted frames. The HostAP [36] wireless driver, an open-source WPA supplicant [37], and the ISC DHCP client [38] were chosen for the implementation. The different modules involved and their interactions are depicted in Fig. 5. A UDP packet generator was also used to generate small packets with a packetization interval of 20 ms in order to simulate voice traffic. For the authentication measurements, we used FreeRADIUS [39] as the RADIUS server.
C. Experimental Setup
For the experiments we used the Columbia University 802.11b wireless network which is organized as one single subnet. In order to test L3 handoff, we introduced another AP connected to a different subnet (Fig. 6). The two APs operated on two different non-overlapping channels.
The experiments were conducted by moving the roaming client between two APs belonging to different subnets, thus having the client perform L2 and L3 handoffs in either direction.
Packet exchanges and handoff events were recorded using the two wireless sniffers (Kismet [40]), one per channel. The trace files generated by the wireless sniffers were later analyzed using Ethereal [41].
In the experimental set-up we do not consider a large presence of other MNs under the same AP since air-link congestion is not relevant to the handoff measurements. Delays due to collisions, backoff, propagation delay and AP queuing delay are irrelevant since they usually are on the order of microseconds under normal conditions. However, even if we consider these delays to be very high because of a high level of congestion, the MN should worry about not being able to make or continue a call as the AP has reached its maximum capacity. Handoff delay would, at this point, become a second-order problem. Furthermore, in this last scenario, the MN should avoid performing a handoff to a very congested AP in the first place as part of a good handoff policy (see Section XI). Updating information at the Home Agent or SIP Registrar is trivial and does not have the same stringent delay requirements that mid-call mobility has, therefore it will not be considered.
D. Results
In this section we show the results obtained in our experiments. In Section IX-D.1, we consider an open network with no authentication in order to show the gain of CR in an open network. In Section IX-D.2, authentication is added and, in particular, we consider a wireless network with IEEE 802.11i enabled.
We define L2 handoff time as scanning time + open authentication and association time + IEEE 802.11i authentication time. The last contribution to the L2 handoff time is not present in open networks. Similarly, we define the L3 handoff time as subnet discovery time + IP address acquisition time.
In the following experiments we show the drastic improvement achieved by CR in terms of handoff time. At L2 such an improvement is possible because, as we have explained in Section IV-A, MNs build a cache of neighbor APs so that scanning for new APs is not required and the delay introduced by the scanning procedure during the L2 handoff is removed. Furthermore, by using relays (see Section V), an MN can send and receive data packets during the authentication process, thus eliminating the 802.11i authentication delay. At L3, MNs cache information about which AP belongs to which subnet, hence immediately detecting a change in subnet by comparing the subnet IDs of the old and new APs. This provides a way to detect a subnet change and at the same time makes the subnet discovery delay insignificant. Furthermore, with CR, the IP address acquisition delay is completely removed since each node can acquire a new IP address for the new subnet while still in the old subnet (see Section IV-C).
It is important to notice that in current networks there is no standard way to detect a change in subnet in a timely manner: within the IETF, the DNA working group is standardizing the detection of network attachments for IPv6 networks only [42], and router advertisements are typically broadcast only every few minutes. Recently, DNA for IPv4 (DNAv4) [43] was standardized by the DHC working group within the IETF in order to detect a subnet change in IPv4 networks. This mechanism, however, works only for previously visited subnets for which the MN still has a valid IP address and can take up to hundreds of milliseconds to complete. Furthermore, if L2 authentication is used, a change in subnet can be detected only after the authentication process completes successfully. Because of this, in the handoff time measurements for the standard IEEE 802.11 handoff procedure, the delay introduced by subnet change discovery is not considered.
To summarize, in theory by using CR the only contribution to the L2 handoff time is given by open authentication and association and there is no contribution to the L3 handoff time whatsoever, that is, the L3 handoff time is zero. In practice, this is not exactly true. Some other sources of delay have to be taken into consideration as we show in more detail in Section IX-D.3.
1) L2 and L3 Roaming:
We show the handoff time when an MN is performing a L2 and L3 handoff without any form of authentication, that is, the MN is moving in an open network. In such a scenario, before the L2 handoff occurs, the MN tries to build its L2 cache if it has not already done so. Furthermore, the MN also searches for any available A-MN that might help it in acquiring an IP address for the new subnet. The scenario is the same as the one depicted in Fig. 6. Fig. 7 shows the handoff time when CR is used. In particular, we show the L2, L3 and total L2+L3 handoff times over 30 handoffs. As we can see, the total L2+L3 handoff time has a maximum value of 21 ms in experiment 18. Also, we can see how, even though the L3 handoff time is higher on average than the corresponding L2 handoff time, there are situations where these two become comparable. For example, we can see in experiment 24 how the L2 and L3 handoff times are equal and in experiment 13 how the L2 handoff time exceeds the corresponding L3 handoff time. The main causes for this variance will be presented in Section IX-D.3. Fig. 7 and Table III show how, on average, with CR the total L2+L3 handoff time is less than 16 ms, which is less than half of the 50 ms requirement for assuring a seamless handoff when real-time traffic is present. Table III shows the average values of IP address acquisition time, handoff time, and packet loss during the handoff process. The time between IP REQ and IP RESP is the time needed by the A-MN to acquire a new IP address for the R-MN. This time can give a good approximation of the L3 handoff time that we would have without cooperation. As we can see, with cooperation we reduce the L3 handoff time to about 1.5% of what we would have without cooperation. Table III also shows that the packet loss experienced during a L2+L3 handoff is negligible when using CR. Fig. 8 shows the average delay over 30 handoffs of L2, L3 and L2+L3 handoff times for CR and for the legacy 802.11 handoff mechanism. With CR, the total L2+L3 handoff time is only a small fraction of the one measured with the legacy mechanism.
2) L2 and L3 Roaming with Authentication: Here we show the handoff time when IEEE 802.11i is used together with EAP-TLS and PEAP/MSCHAPv2. Fig. 9 shows the average over 30 handoffs of the delay introduced in a L2 handoff by the certificate/credentials exchange and the session key exchange. Different key lengths are also considered for the generation of the certificates. As expected, the exchange of certificates takes most of the time. This is the reason why mechanisms such as fast-reconnect [44], [45] improve L2 handoff times considerably, although still on the order of hundreds of milliseconds.
Generally speaking, any authentication mechanism can be used together with CR. Fig. 10 shows the average over 35 handoffs of the total L2, L3 and L2+L3 handoff times. In particular, we show the handoff time for EAP-TLS with 1024- and 2048-bit keys, PEAP/MSCHAPv2 with a 1024-bit key and CR. The average L2+L3 handoff times are respectively 1580 ms, 1669 ms, 1531 ms and 21 ms. By using CR, we achieve a drastic improvement in the total handoff time. As we can see, CR reduces the handoff time to 1.4% or less of the handoff time introduced by the standard 802.11 mechanism. This significant improvement is possible because at L2 with CR we bypass the whole authentication handshake by relaying packets. At L3 we are able to detect a change in subnet in a timely manner and acquire a new IP address for the new subnet while still in the old subnet. Fig. 11 shows in more detail the two main contributions to the L2 handoff time when a relay is used. We can see that, on average, the time needed for the first data packet to be transmitted after the handoff takes more than half of the total L2 handoff time. Here, with data packet we are referring to a packet sent by our UDP packet generator. By analyzing the wireless traces collected in our experiments, we found that the first data packet after the handoff is not transmitted immediately after the L2 handoff completes because the wireless driver needs to start the handshake for the authentication process. This means that the driver already has a few packets in the transmission queue that are waiting to be transmitted when our data packet enters the transmission queue. This, however, concerns only the first packet to be transmitted after the L2 handoff completes successfully. All subsequent data packets will not encounter any additional delay due to the relay.
3) Measurement Variance: We have encountered a high variance in the L2 handoff time. In particular, most of the delay is between the authentication request and authentication response, before the association request. Within all the measurements taken, such behavior appeared to be particularly prominent when moving from the Columbia AP to the Netgear AP. This behavior, together with the results shown by Mishra et al. in [2], has led us to the conclusion that such variance is caused by the cheap hardware used in the low-end Netgear AP.
At L3, ideally, the handoff time should be zero as we acquire all the required L3 information while still in the old subnet. The L3 handoff time shown in Fig. 7 can be roughly divided into two main components: signaling delay and polling delay. The signaling delay is due to various signaling messages exchanged among the different entities involved in setting up the new L3 information in the kernel (wireless driver and DHCP client); the polling delay is introduced by the polling of variables in between received-signal-strength samples (received-signal-strength values are measured by the wireless card driver), done in order to start the L3 handoff process in a timely manner with respect to the L2 handoff process.
These two delays are both implementation dependent and can be reduced by further optimizing the implementation.
X. APPLICATION LAYER MOBILITY
We suggest a method for achieving seamless handoffs at the application layer using SIP and CR. Implementation and analysis of the proposed approach are reserved for future work. Generally speaking, there are two main problems with application layer mobility. One is that the SIP handshake (re-INVITE ⇒ 200 OK ⇒ ACK) takes a few hundred milliseconds to complete, exceeding the requirements of seamless handoff for real-time media. The second is that we do not know a priori in which direction the user is going to move.
In order to solve these two problems, we have to define a mechanism that allows the MN to start the application layer handoff before the L2 handoff and to do it so that the MN does not move to the wrong AP or subnet after updating the SIP session. Furthermore, the new mechanism also has to work in the event of the MN deciding not to perform the L2 handoff at all after performing the SIP session update, that is, after updating the SIP session with the new IP address.
The SIP mobility mechanism [34] and CR can be combined. In particular, we consider an extension of the relay mechanism discussed in Section V-B. Let us assume that the MN performing the handoff has already acquired all the necessary L2 and L3 information as described in Sections IV-B, IV-C and V. This means that the MN has a list of possible RNs and IP addresses to use after the L2 handoff, one for each of the various subnets it could move to next. At this point, before performing any L2 handoff, the MN needs to update its multimedia session. The up-link traffic does not cause particular problems as the MN already has a new IP address to use and can start sending packets via the RN right after the L2 handoff. The down-link traffic is more problematic since the CN will continue sending packets to the MN's old IP address as it is not aware of the change in the MN's IP address until the session has been updated.
The basic idea is to update the session so that the same media stream is sent, at the same time, to the MN and to all the RNs in the list previously built by the MN. In this way, regardless of which subnet/AP the MN will move to, the corresponding RN will be able to relay packets to it. If the MN does not change AP at all, nothing is lost as the MN is still receiving packets from the CN. After the MN has performed the L2 handoff and has connected to one of the RNs, it may send a second re-INVITE via the RN so that the CN sends packets to the current RN only, without involving the other RNs any longer. Once the authentication process successfully completes, communication via the AP can resume. At this point, one last session update is required so that the CN can send packets directly to the MN without any RN in between.
In order to send multiple copies of the same media stream to different nodes, that is, to the MN performing the handoff and its RNs, the MN can send to the CN a re-INVITE with an SDP format as described in RFC 3388 [46] and shown in Figure 12. In this particular format, multiple m lines are present with multiple c lines and grouped together by using the same Flow Identification (FID). A station receiving a re-INVITE with an SDP part as shown in Figure 12 sends an audio stream to a client with IP address 131.160.1.112 on port 30000 (if the PCM µ-law codec is used) and to a client with IP address 131.160.1.111 on port 20000. In order for the same media stream to be sent to different clients at the same time, all the clients have to support the same codec [46]. In our case, we have to remember that RNs relay traffic to MNs, they do not play such traffic. Because of this, we can safely say that each RN supports any codec during the relay process, hence a copy of the media stream can always be sent to an RN by using the SDP format described in [46].
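A hedged sketch of such an SDP body is given below, using the two destinations and ports quoted above; the origin line, the session name and which destination plays the role of the MN versus an RN are placeholders.

```python
# Sketch: SDP with two m-lines carrying the same flow, grouped via FID (RFC 3388).
def build_fid_sdp() -> str:
    return "\r\n".join([
        "v=0",
        "o=mn 2890844526 2890842807 IN IP4 131.160.1.112",  # placeholder origin line
        "s=CR session",
        "t=0 0",
        "a=group:FID 1 2",                  # both media lines belong to the same flow
        "m=audio 30000 RTP/AVP 0",          # PCM mu-law copy to 131.160.1.112:30000
        "c=IN IP4 131.160.1.112",
        "a=mid:1",
        "m=audio 20000 RTP/AVP 0",          # identical copy to 131.160.1.111:20000
        "c=IN IP4 131.160.1.111",
        "a=mid:2",
    ]) + "\r\n"
```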
It is worthwhile to notice that in the session update procedure described above, no buffering is necessary. As we have explained in Section IX-D and shown in Table III, the L2+L3 handoff time is on the order of 16 ms for open networks, which is less than the packetization interval for typical VoIP traffic. When authentication is used (see Figure 10), the total L2+L3 handoff time is on the order of 21 ms. In both cases packet loss is negligible, hence making any buffering of packets unnecessary.
XI. LOAD BALANCING
CR can also play a role in AP load balancing. Today, there are many problems with the way MNs select the AP to connect to. The AP is selected according to the link signal strength and SNR levels while other factors such as effective throughput, number of retries, number of collisions, packet loss, bit-rate or BER are not taken into account. This can cause an MN to connect to an AP with the best SNR but low throughput, a high number of collisions and packet loss because that AP is highly congested. If the MN disassociates or the AP deauthenticates it, the MN looks for a new candidate AP. Unfortunately, with a very high probability, the MN will pick the same AP because its link signal strength and SNR are still the "best" available. The information regarding the congestion of the AP is completely ignored and this bad behavior keeps repeating itself. This behavior can create situations where users all end up connecting to the "best" AP, creating the scenario depicted earlier while leaving other APs underutilized [47], [48].
CR can be very helpful in such a context. In particular, we can imagine a situation where an MN wants to gather statistics about the APs that it might move to next, that is, the APs that are present in its cache. In order to do so, the MN can ask other nodes to send statistics about those APs. Each node can collect different kinds of statistics, such as available throughput, bit-rate, packet loss and retry rate. Once these statistics have been gathered, they can be sent to the MN that has requested them. The MN, at this point, has a clear picture of which APs are more congested and which ones can support the required QoS, and can therefore make a smarter handoff decision. By using this approach we can achieve an even distribution of traffic flows among neighboring APs.
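One possible way to turn the shared statistics into a handoff decision is a simple weighted score, sketched below; the metrics and weights are arbitrary illustrations rather than values from the paper.

```python
# Sketch: rank candidate APs using statistics reported by cooperating MNs.
def rank_aps(stats):
    """stats: {bssid: {"snr": dB, "throughput": Mb/s, "loss": 0..1, "retries": 0..1}}"""
    def score(s):
        return (0.4 * s["throughput"]   # reward spare capacity
                + 0.2 * s["snr"]        # still consider link quality
                - 30.0 * s["loss"]      # penalize congestion symptoms
                - 10.0 * s["retries"])
    return sorted(stats, key=lambda b: score(stats[b]), reverse=True)

best = rank_aps({
    "AP-1": {"snr": 40, "throughput": 2.0, "loss": 0.15, "retries": 0.30},
    "AP-2": {"snr": 28, "throughput": 9.5, "loss": 0.01, "retries": 0.05},
})[0]   # -> "AP-2": weaker signal, but far less congested
```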
The details of this mechanism are reserved for future study but can be easily derived from the procedures earlier introduced for achieving fast L2 and L3 handoffs.
XII. AN ALTERNATIVE TO MULTICAST
Using IP multicast packets can become inefficient in highly congested environments with a dense distribution of MNs. In such environments a good alternative to multicast can be represented by ad-hoc networks. Switching back-and-forth between infrastructure mode and ad-hoc mode has already been used by MNs in order to share information for fault diagnosis [33]. As we pointed out in Section V-C, continuously switching between ad-hoc and infrastructure mode introduces synchronization problems and channel switching delays, making this approach unusable for real-time traffic. However, even if non-real-time traffic is present, synchronization problems could still arise when switching to ad-hoc mode while having an alive TCP connection on the infrastructure network, for example. Spending a longer time in ad-hoc mode might cause the TCP connection to time-out; on the other hand waiting too long in infrastructure mode might cause loss of data in the ad-hoc network.
In CR, MNs can exchange L2 and L3 information contained in their cache by using the mechanism used for relay as described in Section V-C. Following this approach, MNs can directly exchange information with each other without involving the AP and without having to switch their operating mode to ad-hoc. In particular, an MN can send broadcast and unicast packets such as INFOREQ and INFORESP with the To DS and From DS fields set to zero (see Section V-C). Because of this, only the MNs in the radio coverage of the first MN will be able to receive such packets. The AP will drop these packets since the To DS field is not set.
Ad-hoc multi-hop routing can also be used when needed. This may be helpful, for example, in the case of R-MNs acquiring a new IP address for a new subnet while still in the old subnet (see Section IV-C), when current AP and new AP use two different channels. In such a case, a third node on the same channel as the R-MN could route packets between the R-MN and the A-MN by switching between the two channels of the two APs, thus leaving R-MN and A-MN operations unaffected. In this case we would not have synchronization problems since the node switching between the two channels would have to switch only twice: once after receiving the IP REQ packet from the R-MN in order to send it to the A-MN, and a second time after receiving the IP RESP from the A-MN in order to send it to the R-MN.
An ad-hoc based approach, such as the relay mechanism presented in Section V-C, does not require any support on the infrastructure and it represents an effective solution in congested and densely populated environments. On the other hand, ad-hoc communication between MNs would not work very well in networks with a small population of MNs, where each MN might be able to see only a very small number of other MNs at any given time.
MNs with two wireless cards could use one card to connect to the ad-hoc network and share information with other MNs, while having the other card connected to the AP. The two cards could also operate on two different access technologies such as cellular and 802.11.
If it is possible to introduce some changes in the infrastructure, we can minimize the use of multicast packets by using the SIP presence model [49]. In such a model we introduce a new presence service in which each subnet is a presentity. Each subnet has a contact list of all the A-MNs available in that subnet for example, so that the presence information is represented by the available A-MNs in the subnet. When an R-MN subscribes to this service, it receives presence information about the new subnet, namely its contacts which are the available A-MNs in that subnet.
This approach could be more efficient in scenarios with a small number of users supporting CR. On the other hand, it would require changes in the infrastructure by introducing additional network elements. The presence and ad-hoc approaches are reserved for future study.
XIII. CONCLUSIONS AND FUTURE WORK
In this paper we have defined the Cooperative Roaming protocol. Such a protocol allows MNs to perform L2 and L3 handoffs seamlessly, with an average total L2+L3 handoff time of about 16 ms in an open network and of about 21 ms in an IEEE 802.11i network, without requiring any changes to either the protocol or the infrastructure. Each of these values is less than half of the 50 ms requirement for real-time applications such as VoIP to achieve seamless handoffs. Furthermore, we are able to provide such a fast handoff regardless of the particular authentication mechanism used while still preserving security and privacy.
Cooperation among MNs has many advantages and does not introduce any significant disadvantage since, in the worst-case scenario, MNs can fall back on the standard IEEE 802.11 mechanisms, achieving performance similar to a scenario with no cooperation.
Node cooperation can be useful in many other applications:
• In a multi-administrative-domain environment CR can help in discovering which APs are available for which domain. In this way an MN might decide to go to one AP/domain rather than some other AP/domain according to roaming agreements, billing, etc.
• In Section XI we have shown how CR can be used for load balancing. Following a very similar approach but using other metrics such as collision rate and available bandwidth, CR can also be used for admission control and call admission control.
• CR can help in propagating information about service availability. In particular, an MN might decide to perform a handoff to one particular AP because of the services that are available at that AP. A service might be a particular type of encryption, authentication, minimum guaranteed bit rate and available bandwidth or the availability of other types of networks such as Bluetooth, UWB and 3G cellular networks, for example.
• CR also provides advantages in terms of adaptation to changes in the network topology. In particular, when an MN finds stale entries in its cache, it can update its cache and communicate such changes to the other MNs. This also applies to virtual changes of the network topology (e.g., changes in the APs' power levels), which might become more common with the deployment of IEEE 802.11h equipment.
• CR can also be used by MNs to negotiate and adjust their transmission power levels so as to achieve a minimum level of interference.
• In [26] Ramani et al. describe a passive scanning algorithm according to which an MN knows the exact moment when a particular AP will send its beacon frame. In this way the MN collects the statistics for the various APs using passive scanning but without having to wait for the whole beacon interval on each channel. This algorithm, however, requires all the APs in the network to be synchronized. By using a cooperative approach, we can have the various MNs sharing information about the beacon intervals of their APs. In this way, we only need to have the MNs synchronized amongst themselves (e.g., via NTP) without any synchronization required on the network side.
• Interaction between nodes in an infrastructure network and nodes in an ad-hoc/mesh network.
1) An MN in ad-hoc mode can send information about its ad-hoc network. In this way, MNs of the infrastructure network can decide whether it is convenient for them to switch to the ad-hoc network (this would also free resources on the infrastructure network). This can happen, for example, if there is a lack of coverage or high congestion in the infrastructure network. Also, an MN might switch to an ad-hoc network if it has to recover some data available in the ad-hoc network itself (e.g., sensor networks).
2) If two parties are close to each other, they can decide to switch to the ad-hoc network discovered earlier and talk to each other without any infrastructure support. They might also create an ad-hoc network on their own using a default channel, if no other ad-hoc network is available.
As future work, we will look in more detail at application layer mobility, load balancing and call admission control. We will investigate the possibility of having some network elements such as APs support A-MN and RN functionalities; this would be useful in scenarios where only a few MNs support CR. Finally, we will look at how IEEE 802.21 [13] could integrate and extend CR.
| 10,914 |
cs0701046
|
2952231607
|
In a wireless network, mobile nodes (MNs) repeatedly perform tasks such as layer 2 (L2) handoff, layer 3 (L3) handoff and authentication. These tasks are critical, particularly for real-time applications such as VoIP. We propose a novel approach, namely Cooperative Roaming (CR), in which MNs can collaborate with each other and share useful information about the network in which they move. We show how we can achieve seamless L2 and L3 handoffs regardless of the authentication mechanism used and without any changes to either the infrastructure or the protocol. In particular, we provide a working implementation of CR and show how, with CR, MNs can achieve a total L2+L3 handoff time of less than 16 ms in an open network and of about 21 ms in an IEEE 802.11i network. We consider behaviors typical of IEEE 802.11 networks, although many of the concepts and problems addressed here apply to any kind of mobile network.
|
The authors of @cite_9 suggest an algorithm called SyncScan which does not require changes to either the protocol or the infrastructure. It does require, however, that all the APs in the network be synchronized, and it only accelerates unauthenticated L2 handoffs.
|
{
"abstract": [
"Wireless access networks scale by replicating base stations geographically and then allowing mobile clients to seamlessly \"hand off\" from one station to the next as they traverse the network. However, providing the illusion of continuous connectivity requires selecting the right moment to handoff and the right base station to transfer to. Unfortunately, 802.11-based networks only attempt a handoff when a client's service degrades to a point where connectivity is threatened. Worse, the overhead of scanning for nearby base stations is routinely over 250 ms - during which incoming packets are dropped - far longer than what can be tolerated by highly interactive applications such as voice telephony. In this paper we describe SyncScan, a low-cost technique for continuously tracking nearby base stations by synchronizing short listening periods at the client with periodic transmissions from each base station. We have implemented this SyncScan algorithm using commodity 802.11 hardware and we demonstrate that it allows better handoff decisions and over an order of magnitude improvement in handoff delay. Finally, our approach only requires trivial implementation changes, is incrementally deployable and is completely backward compatible with existing 802.11 standards."
],
"cite_N": [
"@cite_9"
],
"mid": [
"1742958260"
]
}
|
Cooperation Between Stations in Wireless Networks
|
Enabling VoIP services in wireless networks presents many challenges, including QoS, terminal mobility and congestion control. In this paper we focus on IEEE 802.11 wireless networks and address issues introduced by terminal mobility.
In general, a handoff happens when an MN moves out of the range of one Access Point (AP) and enters the range of a new one. We have two possible scenarios: 1) If the old AP and the new AP belong to the same subnet, the MN's IP address does not have to change at the new AP. The MN performs a L2 handoff. 2) If the old AP and the new AP belong to different subnets, the MN has to go through the normal L2 handoff procedure and also has to request a new IP address in the new subnet, that is, it has to perform a L3 handoff.
Fig. 1 shows the steps involved in a L2 handoff process in an open network. As we have shown in [1] and Mishra et al. have shown in [2], the time needed by an MN to perform a L2 handoff is usually on the order of a few hundred milliseconds, thus causing a noticeable interruption in any ongoing real-time multimedia session. In either open 802.11 networks or 802.11 networks with WEP enabled, the discovery phase constitutes more than 90% of the total handoff time [1], [2]. In 802.11 networks with either WPA or 802.11i enabled, the handoff delay is dominated by the authentication process that is performed after associating to the new AP. In particular, no data can be exchanged amongst MNs before the authentication process completes successfully. In the most general case, both authentication delay and scanning delay are present. These two delays are additive, so, in order to achieve seamless real-time multimedia sessions, both delays have to be addressed and, if possible, removed.
When a L3 handoff occurs, an MN has to perform a normal L2 handoff and update its IP address. We can break the L3 handoff into two logical steps: subnet change detection and new IP address acquisition via DHCP [3]. Each of these steps introduces a significant delay.
In this paper we focus on the use of station cooperation to achieve seamless L2 and L3 handoffs. We refer to this specific use of cooperation as Cooperative Roaming (CR). The basic idea behind CR is that MNs subscribe to the same multicast group, creating a new plane for exchanging information about the network, and help each other in different tasks. For example, an MN can discover surrounding APs and subnets by simply asking other MNs for this information. Similarly, an MN can ask another MN to acquire a new IP address on its behalf, so that the first MN can get an IP address for the new subnet while still in the old subnet.
For brevity and clarity's sake, in this paper we do not consider handoffs between different administrative domains and AAA-related issues although CR could be easily extended to support them. Incentives for cooperation are also not considered since they are a standard problem for any system using some form of cooperation (e.g., file sharing) and represent a separate research topic [4], [5], [6], [7], [8].
The rest of the paper is organized as follows. In Section II we show the state of the art for handoffs in wireless networks, in Section III we briefly describe how IPv4 and IPv6 multicast addressing is used in the present context, Section IV describes how, with cooperation, MNs can achieve seamless L2 and L3 handoffs. Section V introduces cooperation in the L2 authentication process to achieve seamless handoffs regardless of the particular authentication mechanism used. Section VI considers security and Section VII shows how streaming media can be supported in CR. In Section VIII we analyze CR in terms of bandwidth and energy usage, Section IX presents our experiments and results and Section X shows how we can achieve seamless application layer mobility with CR. In Section XI we apply CR to load balancing and Section XII presents an alternative to multicast. Finally, Section XIII concludes the paper.
IV. COOPERATIVE ROAMING
In this section we show how MNs can cooperate with each other in order to achieve seamless L2 and L3 handoffs.
A. Overview
In [1] we introduced a fast MAC layer handoff mechanism for achieving seamless L2 handoffs in environments such as hospitals, schools, campuses, enterprises, and other places where MNs always encounter the same APs. Each MN saves information regarding the surrounding APs in a cache. When an MN needs to perform a handoff and it has valid entries in its cache, it directly uses the information in the cache without scanning. If it does not have any valid information in its cache, the MN uses an optimized scanning procedure called selective scanning to discover new APs and build the cache. In the cache, APs are ordered according to the signal strength registered when the scanning was performed, that is, right before changing AP. APs with stronger signal strength appear first. As mentioned in Section I, in open networks the scanning process is responsible for more than 90% of the total handoff time. The cache reduces the L2 handoff time to only a few milliseconds (see Table I), and cache misses due to errors in movement prediction introduce only a few milliseconds of additional delay.
Earlier, we had extended [27] the mechanism introduced in [1] to support L3 handoffs. MNs also cache L3 information such as their own IP address, default router's IP address and subnet identifier. A subnet identifier uniquely identifies a subnet. By caching the subnet identifier, a subnet change is detected much faster and L3 handoffs are triggered every time the new AP and old AP have different subnet identifiers. Faster L3 handoffs can be achieved since the IP address and default router for the next AP and subnet are already known and can be used immediately. The approach in [27] achieves seamless handoffs in open networks only; it utilizes the default router's IP address as subnet identifier and uses a suboptimal algorithm to acquire L3 information.
Here, we consider the same caching mechanism used in [27]. In order to support multi-homed routers, however, we use the subnet address as subnet identifier. By knowing the subnet mask and the default router's IP address we can calculate the network address of a certain subnet. Fig. 2 shows the structure of the cache. Additional information such as last IP address used by the MN, lease expiration time and default router's IP address can be extracted from the DHCP client lease file, available in each MN.
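To make this concrete, the sketch below (illustrative only: the helper names are ours and the addresses are placeholders) uses Python's standard ipaddress module to derive the subnet identifier from the default router's IP address and subnet mask, and models a cache entry as the {BSSID, channel, subnet ID} triple of Fig. 2:

```python
from dataclasses import dataclass
from ipaddress import IPv4Interface

@dataclass(frozen=True)
class CacheEntry:
    bssid: str      # AP MAC address, e.g. "00:11:22:33:44:55"
    channel: int    # 802.11 channel the AP operates on
    subnet_id: str  # network address of the subnet behind the AP

def subnet_id(router_ip: str, netmask: str) -> str:
    """Return the subnet identifier (network address) for a default router."""
    return str(IPv4Interface(f"{router_ip}/{netmask}").network.network_address)

# Example: two APs whose subnet IDs differ trigger an L3 handoff.
old_ap = CacheEntry("00:11:22:33:44:55", 1, subnet_id("192.0.2.1", "255.255.255.0"))
new_ap = CacheEntry("66:77:88:99:aa:bb", 6, subnet_id("198.51.100.1", "255.255.255.0"))
if old_ap.subnet_id != new_ap.subnet_id:
    print("subnet change detected -> L3 handoff needed")
```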
In CR, an MN needs to acquire information about the network if it does not have any valid information in the cache or if it does not have L3 information available for a particular subnet. In such a case, the MN asks other MNs for the information it needs so that the MN does not have to find out about neighboring APs by scanning. In order to share information, in CR, all MNs subscribe to the same multicast group. We call an MN that needs to acquire information about its neighboring APs and subnets a requesting MN (R-MN). By using CR, an R-MN can ask other MNs if they have such information by sending an INFOREQ multicast frame. The MNs that receive such a frame check if they have the information the R-MN needs and if so, they send an INFORESP multicast frame back to the R-MN containing the information the R-MN needs.
B. L2 Cooperation Protocol
In this section, we focus on the information exchange needed by a L2 handoff.
The information exchanged in the INFOREQ and INFORESP frames is a list of {BSSID, channel, subnet ID} entries, one for each AP in the MN's cache (see Fig. 2).
When an R-MN needs information about its neighboring APs and subnets, it sends an INFOREQ multicast frame. Such a frame contains the current content of the R-MN's cache, that is, all APs and subnets known to the R-MN. When an MN receives an INFOREQ frame, it checks whether its own cache and the R-MN's cache have at least one AP in common. If the two caches have at least one AP in common and the MN's cache has some APs that are not present in the R-MN's cache, the MN sends an INFORESP multicast frame containing the cache entries for the missing APs. MNs that have APs in common with the R-MN have been in the same location as the R-MN and so have a higher probability of having the information the R-MN is looking for.
The MN sends the INFORESP frame after waiting for a random amount of time to be sure that no other MNs have already sent such information. In particular, the MN checks the information contained in INFORESP frames sent to the same R-MN by other MNs during the random waiting time. This prevents many MNs from sending the same information to the R-MN and all at the same time.
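The response logic just described (answer only when the caches overlap, delay the INFORESP by a random time, and suppress it if another MN answers first) can be modeled as follows. This is a simplified sketch rather than the actual driver implementation; the send_multicast callable and the timing constant are assumptions:

```python
import random
import threading

def handle_inforeq(my_cache, rmn_cache, send_multicast, max_wait=0.5):
    """React to an INFOREQ carrying the R-MN's cache entries.

    my_cache / rmn_cache: dict mapping BSSID -> (channel, subnet_id).
    send_multicast: callable used to transmit an INFORESP frame.
    """
    common = my_cache.keys() & rmn_cache.keys()
    missing = {b: my_cache[b] for b in my_cache.keys() - rmn_cache.keys()}
    if not common or not missing:
        return None  # nothing useful to contribute

    pending = {"entries": missing, "cancelled": False}

    def fire():
        # Send only the entries nobody else has announced in the meantime.
        if not pending["cancelled"] and pending["entries"]:
            send_multicast({"type": "INFORESP", "entries": pending["entries"]})

    threading.Timer(random.uniform(0, max_wait), fire).start()
    return pending  # caller updates this when overhearing other INFORESPs

def overheard_inforesp(pending, entries):
    """Suppress duplicates when another MN answers the same INFOREQ first."""
    for bssid in entries:
        pending["entries"].pop(bssid, None)
    if not pending["entries"]:
        pending["cancelled"] = True
```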
When an MN other than the R-MN receives an INFORESP multicast frame, it performs two tasks. First, it checks whether someone is lying by providing wrong information and, if so, it tries to fix it (see Section VI-A); second, it records the cache information provided by such a frame in its own cache, even though it did not request the information. By collecting unsolicited information, each MN can build a bigger cache in less time and in a more efficient manner, requiring fewer frame exchanges. This is very similar to what happens in software such as BitTorrent, where the client downloads different parts of a file from different peers. Here, we collect different cache chunks from different MNs.
In order to improve efficiency and further minimize frame exchange, MNs can also decide to collect information contained in the INFOREQ frames.
C. L3 Cooperation Protocol
In a L3 handoff an MN has to detect a change in subnet and also has to acquire a new IP address. When a L2 handoff occurs, the MN compares the cached subnet identifiers for the old and new AP. If the two identifiers are different, then the subnet has changed. When a change in subnet is detected, the MN needs to acquire a new IP address for the new subnet. The new IP address is usually acquired by using the DHCP infrastructure. Unfortunately, the typical DHCP procedure can take up to one second [27].
CR can help MNs acquire a new IP address for the new subnet while still in the old subnet. When an R-MN needs to perform a L3 handoff, it needs to find out which other MNs in the new subnet can help. We call such MNs Assisting MNs (A-MNs). Once the R-MN knows the A-MNs for the new subnet, it asks one of them to acquire a new IP address on its behalf. At this point, the selected A-MN acquires the new IP address via DHCP and sends it to the R-MN which is then able to update its multimedia session before the actual L2 handoff and can start using the new IP address right after the L2 handoff, hence not incurring any additional delay (see Section X).
We now show how A-MNs can be discovered and explain in detail how they can request an IP address on behalf of other MNs in a different subnet.
1) A-MNs Discovery: By using IP multicast, an MN can directly talk to different MNs in different subnets. In particular, the R-MN sends an AMN DISCOVER multicast packet containing the new subnet ID. Other MNs receiving such a packet check the subnet ID to see if they are in the subnet specified in the AMN DISCOVER. If so, they reply with an AMN RESP unicast packet. This packet contains the A-MN's default router IP address, the A-MN's MAC and IP addresses. This information is then used by the R-MN to build a list of available A-MNs for that particular subnet.
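The A-MN discovery handshake can be sketched as a pair of small message handlers. The dictionary-shaped packets and the send_multicast/send_unicast transports below are assumptions made for illustration; in the paper the logic lives in the modified driver and cooperation manager:

```python
def send_amn_discover(send_multicast, target_subnet_id):
    # R-MN side: ask, over the multicast group, who is currently in the target subnet.
    send_multicast({"type": "AMN_DISCOVER", "subnet_id": target_subnet_id})

def handle_amn_discover(pkt, my_subnet_id, my_mac, my_ip, my_router_ip,
                        send_unicast, rmn_addr):
    # Candidate A-MN side: reply only if we are in the requested subnet.
    if pkt["type"] == "AMN_DISCOVER" and pkt["subnet_id"] == my_subnet_id:
        send_unicast(rmn_addr, {"type": "AMN_RESP",
                                "router_ip": my_router_ip,
                                "amn_mac": my_mac,
                                "amn_ip": my_ip})

def handle_amn_resp(pkt, amn_list):
    # R-MN side: build the per-subnet list of available A-MNs.
    if pkt["type"] == "AMN_RESP":
        amn_list.append((pkt["amn_mac"], pkt["amn_ip"], pkt["router_ip"]))
```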
Once the MN knows which A-MNs are available in the new subnet, it can cooperate with them in order to acquire the L3 information it needs (e.g., new IP address, router information), as described below.
2) Address Acquisition: When an R-MN needs to acquire a new IP address for a particular subnet, it sends a unicast IP REQ packet to one of the available A-MNs for that subnet. Such a packet contains the R-MN's MAC address. When an A-MN receives an IP REQ packet, it extracts the R-MN's MAC address from the packet and starts the DHCP process by inserting the R-MN's MAC address in the CHaddr field of the DHCP packets. The A-MN also has to set the broadcast bit in the DHCP packets in order for it to receive DHCP packets with a MAC address other than its own in the CHaddr field. All of this allows the A-MN to acquire a new IP address on behalf of the R-MN. This procedure is completely transparent to the DHCP server. Once the DHCP process has been completed, the A-MN sends an IP RESP multicast packet containing the default router's IP address for the new subnet, the R-MN's MAC address and the new IP address for the R-MN. The R-MN checks the MAC address in the IP RESP packet to be sure that the packet is not for a different R-MN. Once it has verified that the IP RESP is for itself, the R-MN saves the new IP address together with the new default router's IP address.
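The key part of this trick, placing the R-MN's MAC address in CHaddr and setting the broadcast flag so that the A-MN still receives the server's replies, can be illustrated by building the DHCPDISCOVER message by hand. The sketch below uses only Python's standard library to pack a minimal BOOTP/DHCP message; it illustrates the packet layout and is not the modified ISC DHCP client described in Section IX-B:

```python
import os
import struct

def build_dhcp_discover(rmn_mac: bytes) -> bytes:
    """Minimal DHCPDISCOVER with a foreign CHaddr and the broadcast flag set."""
    assert len(rmn_mac) == 6
    xid = os.urandom(4)                      # random transaction ID
    flags = 0x8000                           # broadcast bit: server replies go to ff:ff:ff:ff:ff:ff
    header = struct.pack("!BBBB4sHH4s4s4s4s16s64s128s",
                         1,                  # op: BOOTREQUEST
                         1, 6, 0,            # htype=Ethernet, hlen=6, hops=0
                         xid, 0, flags,
                         b"\x00" * 4,        # ciaddr
                         b"\x00" * 4,        # yiaddr
                         b"\x00" * 4,        # siaddr
                         b"\x00" * 4,        # giaddr
                         rmn_mac + b"\x00" * 10,  # chaddr: the R-MN's MAC, not the A-MN's
                         b"\x00" * 64,       # sname
                         b"\x00" * 128)      # file
    options = (b"\x63\x82\x53\x63"           # DHCP magic cookie
               b"\x35\x01\x01"               # option 53: DHCPDISCOVER
               b"\xff")                      # end option
    return header + options

# The A-MN would broadcast this to UDP port 67 from port 68, e.g.:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# sock.bind(("", 68)); sock.sendto(build_dhcp_discover(rmn_mac), ("255.255.255.255", 67))
```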
If the R-MN has more than one possible subnet to move to, it follows the same procedure for each subnet. In this way the R-MN builds a list of {router, new IP address} pairs, one pair for each of the possible next subnets. After moving to the new subnet, the R-MN renews the lease for the new IP address. The R-MN can start this process at any time before the L2 handoff, keeping in mind that the whole process might take one second or more to complete and that the acquired address is only valid for the duration of its DHCP lease (the current lease time can be read from the DHCP client lease file). By reserving IP addresses before moving to the new subnet, we could waste IP addresses and exhaust the available IP pool. Usually, however, the lease time in a mobile environment is short enough to guarantee a sufficient re-use of IP addresses.
Acquiring an IP address from a different subnet other than the one the IP is for could also be achieved by introducing a new DHCP option. Using this option, the MN could ask the DHCP server for an IP address for a specific subnet. This would however, require changes to the DHCP protocol.
V. COOPERATIVE AUTHENTICATION
In this section we propose a cooperative approach for authentication in wireless networks. The proposed approach is independent of the particular authentication mechanism used. It can be used for VPN, IPsec, 802.1x or any other kind of authentication. We focus on the 802.1x framework used in Wi-Fi Protected Access (WPA) and IEEE 802.11i [29].
A. IEEE 802.1x Overview
The IEEE 802.1x standard defines a way to perform access control and authentication in IEEE 802 LANs, and in particular in IEEE 802.11 wireless LANs, using three main entities: supplicant, authenticator and authentication server (the authentication server is not required in all authentication mechanisms). The supplicant is the client that has to perform the authentication in order to gain access to the network; the authenticator, among other things, relays packets between supplicant and authentication server; the authentication server, typically a RADIUS server [30], performs the authentication process with the supplicant by exchanging and validating the supplicant's credentials. The critical point, in terms of handoff time in the 802.1x architecture, is that during the authentication process the authenticator allows only EAP Over LAN (EAPOL) traffic to be exchanged with the supplicant. No other kind of traffic is allowed.
B. Cooperation in the Authentication Process
A well-known property of the wireless medium in IEEE 802.11 networks is that the medium is shared, and therefore every MN can hear packets that other stations (STAs) send and receive. This is true when the MN and the STAs are connected to the same AP, that is, when they are on the same channel. In [14] Liu et al. make use of this particular characteristic and show how MNs can cooperate with each other by relaying each other's packets so as to achieve the optimum bit-rate. In this section we show how a similar approach can be used for authentication purposes.
For simplicity, in the following discussion we suppose that one authenticator manages one whole subnet, so that authentication is required after each L3 handoff. In such a scenario and in this context, we also refer to a subnet as an Authentication Domain (AD). In general, an MN can share information about ADs in the same way it shares information about subnets. In doing so, an MN knows whether the next AP belongs to the same AD as the current AP or not. In a L2 or L3 handoff we have an MN which performs handoff and authentication, a Correspondent Node (CN) which has an established multimedia session with the MN, and a Relay Node (RN) which relays packets to and from the MN. Available RNs for a particular AD can be discovered following a procedure similar to the one described earlier for the discovery of A-MNs (see Section IV-C.1). The difference here is that the RN and the MN have to be connected to the same AP after the handoff. In this scenario, we assume that RNs are a subset of the available A-MNs. The basic idea is that while the MN is authenticating in the new AD, it can still communicate with the CN via the RN, which relays packets to and from the MN (see Fig. 3).
Let us look at this mechanism in more detail. Before the MN changes AD/AP, it selects an RN from the list of available RNs for the new AD/AP and sends a RELAY REQ multicast frame to the multicast group. The RELAY REQ frame contains the MN's MAC and IP addresses, the CN's IP address and the selected RN's MAC and IP addresses. The RELAY REQ will be received by all the STAs subscribed to the multicast group and, in particular, it will be received by both the CN and the RN. The RN will relay packets for the MN identified by the MAC address received in the RELAY REQ frame. After performing the handoff, the MN needs to authenticate before it can resume any communication via the AP. However, because of the shared nature of the medium, the MN will start sending packets to the RN as if it were already authenticated. The authenticator will drop these packets, but the RN can hear them on the medium and relay them to the CN using its own encryption keys, that is, using its secure connection with the AP. The CN is aware of the relaying because of the RELAY REQ, and so it will start sending packets for the MN to the RN as well. While the RN is relaying packets to and from the MN, the MN will perform its authentication via 802.1x or any other mechanism. Once the authentication process is over and the MN has access to the infrastructure, it can stop the relaying and resume normal communication via the AP. When this happens and the CN starts receiving packets from the MN via the AP, it will stop sending packets to the RN and will resume normal communication with the MN. The RN will detect that it no longer needs to relay any packets for the MN and will return to normal operation.
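A highly simplified model of the RN side of this exchange is given below. Frames are represented as dictionaries, the send_via_ap and send_direct callables are assumptions, and the timeout anticipates the countermeasure discussed in Section VI-B; the sketch captures the logic only, not the actual driver code:

```python
import time

class RelayNode:
    RELAY_TIMEOUT = 10.0  # seconds; long enough for 802.1x to complete

    def __init__(self, my_mac, send_via_ap, send_direct):
        self.mac = my_mac
        self.send_via_ap = send_via_ap    # encrypted path through our own association
        self.send_direct = send_direct    # unencrypted MN<->RN frames (To DS = From DS = 0)
        self.sessions = {}                # MN MAC -> (CN address, expiry time)

    def handle_relay_req(self, req):
        # Accept only requests explicitly addressed to us, and start the relay timer.
        if req["rn_mac"] == self.mac:
            self.sessions[req["mn_mac"]] = (req["cn_ip"], time.time() + self.RELAY_TIMEOUT)

    def handle_frame_from_mn(self, mn_mac, payload):
        session = self.sessions.get(mn_mac)
        if session and time.time() < session[1]:
            self.send_via_ap(session[0], payload)   # forward to the CN using our keys
        # otherwise drop: no session, or the MN should have finished authenticating

    def handle_packet_for_mn(self, mn_mac, payload):
        session = self.sessions.get(mn_mac)
        if session and time.time() < session[1]:
            self.send_direct(mn_mac, payload)       # hand it to the still-authenticating MN
```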
In order for this relaying mechanism to work with WPA and 802.11i, MN and RN have to exchange unencrypted L2 data packets for the duration of the relay process. These packets are then encrypted by the RN by using its own encryption keys and are sent to the AP. By responding to an RN discovery, RNs implicitly agree to providing relay for such frames. Such an exchange of unencrypted L2 frames does not represent a security concern since packets can still be encrypted at higher layers and since the relaying happens for a very limited amount of time (see Section VI-B).
One last thing worth mentioning is that by using a relay, we remove the bridging delay in the L2 handoff [1], [2]. Usually, after an MN changes AP, the switch continues sending packets for the MN to the old AP until it updates the information regarding the new AP on its ports. The bridging delay is the amount of time needed by the switch to update this information on its ports. When we use a relay node in the new AP, this relay node is already registered to the correct port on the switch, therefore no update is required on the switch side and the MN can immediately receive packets via the RN.
C. Relay Process
In the previous section we have shown how an MN can perform authentication while having data packets relayed by the RN. In this section we explain in more detail how relaying is performed. Fig. 4 shows the format of a general IEEE 802.11 MAC layer frame. Among the many fields we can identify a Frame Control field and four Address fields. For the relay process we are interested in the four Address fields and in the To DS and From DS one-bit fields that are part of the Frame Control field. The To DS bit is set to one in data frames that are sent to the Distribution System (DS), that is, the system that interconnects BSSs and LANs to create an ESS [31]. The From DS bit is set to one in data frames exiting the DS. The possible values of the four Address fields, listed in Table II, are: Destination Address (DA), Source Address (SA), BSSID, Receiver Address (RA) and Transmitter Address (TA). In infrastructure mode, when an MN sends a packet, this packet is always sent first to the AP, even if both source and destination are associated with the same AP. For such packets the MN sets the To DS bit. Other MNs on the same channel can hear the packet but discard it because, as the To DS field and Address fields suggest, such a packet is meant for the AP. When the AP has to send a packet to an MN, it sets the From DS bit. All MNs that can hear this packet discard it, except for the MN the packet is for.
When both fields, To DS and From DS, have a value of one, the packet is sent on the wireless medium from one AP to another AP. In ad-hoc mode, both fields have a value of zero and the frames are directly exchanged between MNs with the same Independent Basic Service Set (IBSS).
In [32] Chandra et al. present an optimal way to continuously switch a wireless card between two or more infrastructure networks or between infrastructure and ad-hoc networks, so that the user has the perception of being connected to multiple networks at the same time while using a single wireless card. This approach works well if no real-time traffic is present. When we consider real-time traffic and its delay constraints, continuously switching between different networks and, in particular, between infrastructure and ad-hoc mode is no longer a feasible solution. Although optimal algorithms have been proposed for this [32], the continuous switching of the channel and/or operating mode takes a non-negligible amount of time, which becomes particularly significant if any form of L2 authentication is present in the network. In such cases, the time needed by the wireless card to continuously switch between networks can introduce significant delay and packet loss.
The approach we propose is based on the idea that ad-hoc mode and infrastructure mode do not have to be mutually exclusive, but rather can complement each other. In particular, MNs can send ad-hoc packets while in infrastructure mode so that other MNs on the shared medium, that is, on the same channel, can receive such packets without involving the AP. Such packets use the 802.11 ad-hoc MAC addresses as specified in [31]. That is, both fields To DS and From DS have a value of zero and the Address fields are set accordingly as specified in Table II. In doing so, MNs can directly send and receive packets to and from other MNs without involving the AP and without having to switch to ad-hoc mode.
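For concreteness, the sketch below packs the MAC header of such a frame: a data frame with both To DS and From DS set to zero and the address fields laid out as Addr1 = DA, Addr2 = SA, Addr3 = BSSID. The byte layout follows the 802.11 standard, but the helper itself is ours; actually transmitting such frames requires driver support, as in the modified HostAP driver described in Section IX-B:

```python
import struct

def dot11_direct_data_header(dst_mac: bytes, src_mac: bytes, bssid: bytes, seq: int = 0) -> bytes:
    """802.11 data-frame MAC header with To DS = From DS = 0 (direct MN-to-MN)."""
    frame_control = 0x0008          # version 0, type = Data, subtype = Data, all flag bits zero
    duration = 0
    seq_ctrl = (seq & 0x0FFF) << 4  # sequence number in the upper 12 bits, fragment number 0
    return struct.pack("<HH6s6s6sH",
                       frame_control, duration,
                       dst_mac,     # Addr1 = DA (the peer MN, or broadcast)
                       src_mac,     # Addr2 = SA (this MN)
                       bssid,       # Addr3 = BSSID
                       seq_ctrl)

hdr = dot11_direct_data_header(b"\xff" * 6,                  # broadcast, e.g. an INFOREQ
                               bytes.fromhex("020000000001"),
                               bytes.fromhex("020000000002"))
assert len(hdr) == 24  # standard 3-address 802.11 data header
```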
This mechanism allows an RN to relay packets to and from an R-MN without significantly affecting any ongoing multimedia session that the RN might have via the AP. Such an approach can be useful in all those scenarios where an MN in infrastructure mode needs to communicate with other MNs in infrastructure or ad-hoc mode [33] and a continuous change between infrastructure mode and ad-hoc mode is either not possible or not convenient.
VI. SECURITY
Security is a major concern in wireless environments. In this section we address some of the problems encountered in a cooperative environment, focusing on CR.
A. Roaming Security Issues
In this particular context, a malicious user might try to propagate false information among the cooperating MNs. In particular, we have to worry about three main vulnerabilities:
1) A malicious user might want to re-direct STAs to fake APs where their traffic can be sniffed and private information can be compromised. 2) A malicious user might try to perform DoS attacks by redirecting STAs to far or non-existing APs. This would cause the STAs to fail the association to the next AP during the handoff process. The STA would then have to rely on the legacy scanning process to re-establish network connectivity. 3) At L3, a malicious user might behave as an A-MN and try to disrupt a STA' service by providing invalid IP addresses. In general, we have to remember that the cooperative mechanism described here works on top of any other security mechanism that has been deployed in the wireless network (e.g., 802.11i, WPA). In order for a malicious user to send and receive packets from and to the multicast group, it has to have, first of all, access to the network and thus be authenticated. In such a scenario, a malicious user is a STA with legal access to the network. This means that MAC spoofing attacks are not possible as a change in MAC address would require a new authentication handshake with the network. This also means that once the malicious user has been identified, it can be isolated.
How can we attempt to isolate a malicious node? Since the INFORESP frame is multicast, each MN that has the same information as the one contained in such a frame can check that the information in the frame is correct and that no one is lying. If it finds out that the INFORESP frame contains wrong information, it immediately sends an INFOALERT multicast frame. Such a frame contains the MAC address of the suspicious STA. This frame is also sent by an R-MN that has received a wrong IP address, in which case it contains the MAC address of the A-MN that provided that IP address. If more than one alert for the same suspicious node is triggered by different nodes, the suspicious node is considered malicious and the information it provides is ignored. Let us look at this last point in more detail.
One single INFOALERT does not trigger anything. In order for an MN to be categorized as bad, there has to be a certain number of INFOALERT multicast frames sent by different nodes, all regarding the same suspicious MN. This certain number can be configured according to how paranoid someone is about security but, regardless, it has to be more than one. Let us assume this number to be five. If a node receives five INFOALERT multicast frames from five different nodes regarding the same MN, then it marks such an MN as bad. This mechanism could be compromised if either a malicious user can spoof five different MAC addresses (and this is not likely for the reasons we have explained earlier) or if there are five different malicious users that are correctly authenticated in the wireless network and that can coordinate their attacks. If this last situation occurs, then there are bigger problems in the network to worry about than handoff policies. Choosing the number of INFOALERT frames required to mark a node as malicious to be very large would have advantages and disadvantages. It would give more protection against the exploitation of this mechanism for DoS attacks as the number of malicious users trying to exploit INFOALERT frames would have to be high. On the other hand, it would also make the mechanism less sensitive to detect a malicious node as the number of INFOALERT frames required to mark the node as bad might never be reached or it might take too long to reach. So, there is clearly a trade-off.
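The rule above (mark a node as bad only after several distinct nodes complain about it) maps onto a small bookkeeping structure, sketched below with a threshold of five as in the example. The class and names are illustrative, not part of the protocol specification:

```python
from collections import defaultdict

class AlertTracker:
    def __init__(self, threshold: int = 5):
        self.threshold = threshold            # distinct reporters needed (must be > 1)
        self.reports = defaultdict(set)       # suspect MAC -> set of reporter MACs
        self.blacklist = set()

    def handle_infoalert(self, reporter_mac: str, suspect_mac: str) -> bool:
        """Record one INFOALERT; return True if the suspect is now considered malicious."""
        self.reports[suspect_mac].add(reporter_mac)  # repeats from one reporter don't count
        if len(self.reports[suspect_mac]) >= self.threshold:
            self.blacklist.add(suspect_mac)
        return suspect_mac in self.blacklist

    def is_trusted(self, mac: str) -> bool:
        return mac not in self.blacklist

tracker = AlertTracker()
for reporter in ("mn1", "mn2", "mn3", "mn4", "mn5"):
    tracker.handle_infoalert(reporter, "aa:bb:cc:dd:ee:ff")
assert not tracker.is_trusted("aa:bb:cc:dd:ee:ff")
```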
Regardless, in either one of the three situations described at the beginning of this section, the MN targeted by the malicious user would be able to easily recover from an attack by using legacy mechanisms such as active scanning and DHCP address acquisition, typically used in non-cooperative environments.
B. Cooperative Authentication and Security
In order to improve security in the relay process, we introduce some countermeasures that nodes can use to prevent exploitation of the relay mechanism. The main concern in having a STA relay packets for an unauthenticated MN is that such an MN might try to repeatedly use the relay mechanism and never authenticate to the network. In order to prevent this, we introduce the following countermeasures:
1) Each RELAY REQ frame allows an RN to relay packets for a limited amount of time. After this time has passed, the relaying stops. The relaying of packets is required only for the time needed by the MN to perform the normal authentication process.
2) An RN relays packets only for those nodes which have sent a RELAY REQ packet to it while still connected to their previous AP.
3) RELAY REQ packets are multicast. All the nodes in the multicast group can help in detecting bad behaviors such as one node repeatedly sending RELAY REQ frames.
All of the above countermeasures work if we can be sure of the identity of a node and, in general, this is not always the case as malicious users can perform MAC spoofing attacks, for example. However, as we have explained in Section VI-A, MAC spoofing attacks are not possible in the present framework.
This said, we have to remember that before an RN can relay packets for an MN, it has to receive the proper RELAY REQ packet from the MN. Such a packet has to be sent by the MN while still connected to the old AP. This means that the MN has to be authenticated with the previous AP in order to send such a packet. Furthermore, once the relaying timeout has expired, the RN will stop relaying packets for that MN. At this point, even if the MN can change its MAC address, it would not be able to send a new RELAY REQ as it would first have to authenticate again with the network (e.g., using 802.11i), and therefore no relaying would take place. In the special case in which the old AP belongs to an open network (under normal conditions this is very unlikely, but it might happen for handoffs between different administrative domains, for example), a malicious node could perform MAC spoofing and exploit the relay mechanism in order to gain access to the secure network. In this case, securing the multicast group by performing authentication and encryption at the multicast group level could prevent this kind of attack, although it may require infrastructure support.
In conclusion, we can consider the three countermeasures introduced at the beginning of this section to be more than adequate in avoiding exploitation of the relaying mechanism.
VII. STREAMING MEDIA SUPPORT
SIP can be used, among other things, to update new and ongoing media sessions. In particular, the IP address of one or more of the participants in the media session can be updated. In general, after an MN performs a L3 handoff, a media session update is required to inform the various parties about the MN's new IP address [34].
If the CN does not support cooperation, the relay mechanism as described in Section V-B does not work and the CN keeps sending packets to the MN's old IP address, being unaware of the relay process. This is the case, for example, of an MN establishing a streaming video session with a streaming media server. In this particular case, assuming that the media server supports SIP, a SIP session update is performed to inform the media server that the MN's IP address has changed. The MN sends a re-INVITE to the media server updating its IP address to the RN's IP address. In this way, the media server starts sending packets to the RN and relay can take place as described earlier.
Once the relaying is over, if the MN's authentication was successful, the MN sends a second re-INVITE including its new IP address; otherwise, once the timeout for relaying expires, the relaying process stops and the RN terminates the media session with the media server.
SIP and media session updates will be discussed further in Section X.
VIII. BANDWIDTH AND ENERGY USAGE
By sharing information, the MNs in the network do not have to perform individual tasks such as scanning, which would normally consume a considerable amount of bandwidth and energy. This means that sharing data among MNs is usually more energy- and bandwidth-efficient than having each MN perform the corresponding individual task. We discuss the impact of CR on energy and bandwidth below.
In CR, bandwidth usage and energy expended are mainly determined by the number of multicast packets that each client has to send for acquiring the information it needs. The number of multicast packets is directly proportional to the number of clients supporting the protocol that are present in the network. In general, more clients introduce more requests and more responses. However, having more clients that support the protocol ensures that each client can collect more information with each request, which means that overall each client will need to send fewer packets. Furthermore, by having the INFORESP frames as multicast frames, many MNs will benefit from each response and not just the MN that sent the request. This will minimize the number of packets exchanged, in particular the number of INFOREQ sent.
To summarize, with an increasing number of clients, multicast suppression takes place, so the number of packets sent remains constant.
In general, sending a few long packets is more efficient than sending many short ones. As explained in Section IV-B, for each AP the information included in an INFOREQ or INFORESP packet is a cache entry (see Fig. 2), that is, a triple {BSSID, Channel, Subnet ID} with a total size of 6+4+4 = 14 bytes. Considering that the MTU size is 1500 bytes, that each cache entry takes 14 bytes, and that the IP and UDP headers together take 28 bytes, each INFOREQ and INFORESP packet can carry information about no more than 105 APs, for a maximum of 1472 bytes of payload.
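The capacity figure quoted above follows directly from the stated numbers, as the short calculation below shows (1500-byte MTU, 20-byte IPv4 header plus 8-byte UDP header, 14 bytes per cache entry):

```python
MTU = 1500
IP_UDP_HEADERS = 20 + 8          # IPv4 + UDP
ENTRY_SIZE = 6 + 4 + 4           # BSSID + channel + subnet ID

payload_budget = MTU - IP_UDP_HEADERS          # 1472 bytes available for cache entries
max_entries = payload_budget // ENTRY_SIZE     # 105 APs per INFOREQ/INFORESP packet
print(payload_budget, max_entries)             # -> 1472 105
```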
In [35] Henderson et al. analyze the behavior of wireless users in a campus-wide wireless network over a period of seventeen weeks. They found that:
• Users spend almost all of their time at their home location. The home location is defined as the AP where they spend most of the time and all the APs within 50 meters of this one.
• The median number of APs visited by a user is 12, but the median differs for each device type, with 17 for laptops, 9 for PDAs and 61 for VoIP devices such as VoIP phones.
This shows that most devices will spend most of their time at their home location, which means that they will mostly deal with a small number of APs. However, even if we consider the median number of APs that clients use throughout the trace period of seventeen weeks, we can see that when using laptops and PDAs each MN would have to know about the nearest 9-17 APs. For VoIP devices that are always on, the median number of APs throughout the trace period is 61. In our implementation each INFOREQ and INFORESP packet carries information about 105 APs at most, so regardless of the device type a single packet is typically sufficient to carry the needed cache entries.
The relay mechanism introduced in Section V for cooperative authentication introduces some bandwidth overhead. This is because each packet that has to be sent by the MN to the CN, and vice-versa, occupies the medium twice: once when being transmitted between MN and RN and once when being transmitted between RN and AP. This, however, happens only for the few seconds needed by the MN to authenticate. Furthermore, both the MN-RN and RN-AP links are maximum bit-rate links, so the time on air for each data packet is small.
IX. EXPERIMENTS
In the present section we describe implementation details and measurement results for CR.
A. Environment
All the experiments were conducted at Columbia University on the 7th floor of the Schapiro building. We used four IBM Thinkpad laptops: three IBM T42 laptops using Intel Centrino Mobile technology with a 1.7 GHz Pentium processor and 1GB RAM and one IBM laptop with an 800 MHz Pentium III processor and 384 MB RAM. Linux kernel version 2.4.20 was installed on all the laptops. All the laptops were equipped with a Linksys PCMCIA Prism2 wireless card. Two of them were used as wireless sniffers, one of them was used as roaming client and one was used as "helper" to the roaming client, that is, it replied to INFOREQ frames and behaved as an A-MN. For cooperative authentication the A-MN was also used as RN. Two Dell Dimension 2400 desktops were used, one as CN and the other as RADIUS server [30]. The APs used for the experiments were a Cisco AP1231G which is an enterprise AP and a Netgear WG602 which is a SOHO/home AP.
B. Implementation Details
In order to implement the cooperation protocol we modified the wireless card driver and the DHCP client. Furthermore, a cooperation manager was created in order to preserve state information and coordinate the wireless driver and the DHCP client. For cooperative authentication, the WPA supplicant was also slightly modified to allow relay of unencrypted frames. The HostAP [36] wireless driver, an open-source WPA supplicant [37], and the ISC DHCP client [38] were chosen for the implementation. The different modules involved and their interaction are depicted in Fig. 5. (Fig. 6, referenced in Section IX-C, depicts the L3 handoff environment.) A UDP packet generator was also used to generate small packets with a packetization interval of 20 ms in order to simulate voice traffic. For the authentication measurements, we used FreeRADIUS [39] as RADIUS server.
C. Experimental Setup
For the experiments we used the Columbia University 802.11b wireless network which is organized as one single subnet. In order to test L3 handoff, we introduced another AP connected to a different subnet (Fig. 6). The two APs operated on two different non-overlapping channels.
The experiments were conducted by moving the roaming client between two APs belonging to different subnets, thus having the client perform L2 and L3 handoffs in either direction.
Packet exchanges and handoff events were recorded using the two wireless sniffers (kismet [40]), one per channel. The trace files generated by the wireless sniffer were later analyzed using Ethereal [41].
In the experimental set-up we do not consider a large presence of other MNs under the same AP, since air-link congestion is not relevant to the handoff measurements. Delays due to collisions, backoff, propagation delay and AP queuing delay are irrelevant since they are usually on the order of microseconds under normal conditions. However, even if we consider these delays to be very high because of a high level of congestion, the MN should worry about not being able to make or continue a call, as the AP has reached its maximum capacity; handoff delay would, at this point, become a second-order problem. Furthermore, in this last scenario, the MN should avoid performing a handoff to a very congested AP in the first place as part of a good handoff policy (see Section XI). Updating information at the Home Agent or SIP Registrar is trivial and does not have the same stringent delay requirements that mid-call mobility has, and therefore it will not be considered.
D. Results
In this section we show the results obtained in our experiments. In Section IX-D.1, we consider an open network with no authentication in order to show the gain of CR in an open network. In Section IX-D.2, authentication is added and, in particular, we consider a wireless network with IEEE 802.11i enabled.
We define L2 handoff time as scanning time + open authentication and association time + IEEE 802.11i authentication time. The last contribution to the L2 handoff time is not present in open networks. Similarly, we define the L3 handoff time as subnet discovery time + IP address acquisition time.
In the following experiments we show the drastic improvement achieved by CR in terms of handoff time. At L2 such an improvement is possible because, as we have explained in Section IV-A, MNs build a cache of neighbor APs so that scanning for new APs is not required and the delay introduced by the scanning procedure during the L2 handoff is removed. Furthermore, by using relays (see Section V), an MN can send and receive data packets during the authentication process, thus eliminating the 802.11i authentication delay. At L3, MNs cache information about which AP belongs to which subnet, hence immediately detecting a change in subnet by comparing the subnet IDs of the old and new APs. This provides a way to detect a subnet change and at the same time makes the subnet discovery delay insignificant. Furthermore, with CR, the IP address acquisition delay is completely removed since each node can acquire a new IP address for the new subnet while still in the old subnet (see Section IV-C).
It is important to notice that in current networks there is no standard way to detect a change in subnet in a timely manner. (Within the IETF, the DNA working group is standardizing the detection of network attachments for IPv6 networks only [42]; router advertisements are typically broadcast only every few minutes.) Recently, DNA for IPv4 (DNAv4) [43] was standardized by the DHC working group within the IETF in order to detect a subnet change in IPv4 networks. This mechanism, however, works only for previously visited subnets for which the MN still has a valid IP address, and it can take up to hundreds of milliseconds to complete. Furthermore, if L2 authentication is used, a change in subnet can be detected only after the authentication process completes successfully. Because of this, in the handoff time measurements for the standard IEEE 802.11 handoff procedure, the delay introduced by subnet change discovery is not considered.
To summarize, in theory by using CR the only contribution to the L2 handoff time is given by open authentication and association and there is no contribution to the L3 handoff time whatsoever, that is, the L3 handoff time is zero. In practice, this is not exactly true. Some other sources of delay have to be taken into consideration as we show in more detail in Section IX-D.3.
1) L2 and L3 Roaming:
We show the handoff time when an MN is performing a L2 and L3 handoff without any form of authentication, that is, the MN is moving in an open network. In such a scenario, before the L2 handoff occurs, the MN tries to build its L2 cache if it has not already done so. Furthermore, the MN also searches for any available A-MN that might help it in acquiring an IP address for the new subnet. The scenario is the same as the one depicted in Fig. 6.
Fig. 7 shows the handoff time when CR is used. In particular, we show the L2, L3 and total L2+L3 handoff times over 30 handoffs. As we can see, the total L2+L3 handoff time has a maximum value of 21 ms in experiment 18. Also, we can see how, even though the L3 handoff time is higher on average than the corresponding L2 handoff time, there are situations where these two become comparable. For example, we can see in experiment 24 how the L2 and L3 handoff times are equal and in experiment 13 how the L2 handoff time exceeds the corresponding L3 handoff time. The main causes for this variance will be presented in Section IX-D.3.
Fig. 7 and Table III show how, on average, with CR the total L2+L3 handoff time is less than 16 ms, which is less than half of the 50 ms requirement for assuring a seamless handoff when real-time traffic is present. Table III shows the average values of IP address acquisition time, handoff time, and packet loss during the handoff process. The time between IP REQ and IP RESP is the time needed by the A-MN to acquire a new IP address for the R-MN. This time can give a good approximation of the L3 handoff time that we would have without cooperation. As we can see, with cooperation we reduce the L3 handoff time to about 1.5% of what we would have without cooperation. Table III also shows that the packet loss experienced during a L2+L3 handoff is negligible when using CR. Fig. 8 shows the average delay over 30 handoffs of the L2, L3 and L2+L3 handoff times for CR and for the legacy 802.11 handoff mechanism; the total L2+L3 handoff time with CR is a small fraction of that of the legacy mechanism.
2) L2 and L3 Roaming with Authentication: Here we show the handoff time when IEEE 802.11i is used together with EAP-TLS and PEAP/MSCHAPv2. Fig. 9 shows the average over 30 handoffs of the delay introduced in a L2 handoff by the certificate/credentials exchange and the session key exchange. Different key lengths are also considered for the generation of the certificates. As expected, the exchange of certificates takes most of the time. This is the reason why mechanisms such as fast-reconnect [44], [45] improve L2 handoff times considerably, although still on the order of hundreds of milliseconds.
Generally speaking, any authentication mechanism can be used together with CR. Fig. 10 shows the average over 35 handoffs of the total L2, L3 and L2+L3 handoff times. In particular, we show the handoff time for EAP-TLS with 1024 and 2048 bit keys, PEAP/MSCHAPv2 with a 1024 bit key, and CR. The average L2+L3 handoff times are, respectively, 1580 ms, 1669 ms, 1531 ms and 21 ms. By using CR, we achieve a drastic improvement in the total handoff time. As we can see, CR reduces the handoff time to 1.4% or less of the handoff time introduced by the standard 802.11 mechanism. This significant improvement is possible because at L2 with CR we bypass the whole authentication handshake by relaying packets. At L3 we are able to detect a change in subnet in a timely manner and acquire a new IP address for the new subnet while still in the old subnet.
Fig. 11 shows in more detail the two main contributions to the L2 handoff time in IEEE 802.11i networks when a relay is used. We can see that, on average, the time needed for the first data packet to be transmitted after the handoff takes more than half of the total L2 handoff time. Here, with data packet we are referring to a packet sent by our UDP packet generator. By analyzing the wireless traces collected in our experiments, we found that the first data packet after the handoff is not transmitted immediately after the L2 handoff completes because the wireless driver needs to start the handshake for the authentication process. This means that the driver already has a few packets in the transmission queue that are waiting to be transmitted when our data packet enters the transmission queue. This, however, concerns only the first packet to be transmitted after the L2 handoff completes successfully. All subsequent data packets will not encounter any additional delay due to relay.
3) Measurement Variance: We have encountered a high variance in the L2 handoff time. In particular, most of the delay is between the authentication request and the authentication response, before the association request. Among all the measurements taken, such behavior appeared to be particularly prominent when moving from the Columbia AP to the Netgear AP. This behavior, together with the results shown by Mishra et al. in [2], has led us to the conclusion that such variance is caused by the cheap hardware used in the low-end Netgear AP.
At L3, ideally, the handoff time should be zero as we acquire all the required L3 information while still in the old subnet. The L3 handoff time shown in Fig. 7 can be roughly divided into two main components: signaling delay and polling delay. The signaling delay is due to the various signaling messages exchanged among the different entities involved in setting up the new L3 information in the kernel (wireless driver and DHCP client); the polling delay is introduced by the polling of variables in between received-signal-strength samples (received-signal-strength values are measured by the wireless card driver), done in order to start the L3 handoff process in a timely manner with respect to the L2 handoff process.
These two delays are both implementation dependent and can be reduced by further optimizing the implementation.
X. APPLICATION LAYER MOBILITY
We suggest a method for achieving seamless handoffs at the application layer using SIP and CR. Implementation and analysis of the proposed approach are reserved for future work.
Generally speaking, there are two main problems with application layer mobility. One is that the SIP handshake (re-INVITE ⇒ 200 OK ⇒ ACK) takes a few hundred milliseconds to complete, exceeding the requirements of seamless handoff for real-time media. The second is that we do not know a priori in which direction the user is going to move.
In order to solve these two problems, we have to define a mechanism that allows the MN to start the application layer handoff before the L2 handoff and to do it so that the MN does not move to the wrong AP or subnet after updating the SIP session. Furthermore, the new mechanism also has to work in the event of the MN deciding not to perform the L2 handoff at all after performing the SIP session update, that is, after updating the SIP session with the new IP address.
The SIP mobility mechanism [34] and CR can be combined. In particular, we consider an extension of the relay mechanism discussed in Section V-B. Let us assume that the MN performing the handoff has already acquired all the necessary L2 and L3 information as described in Sections IV-B, IV-C and V. This means that the MN has a list of possible RNs and IP addresses to use after the L2 handoff, one for each of the various subnets it could move to next. At this point, before performing any L2 handoff, the MN needs to update its multimedia session. The up-link traffic does not cause particular problems as the MN already has a new IP address to use and can start sending packets via the RN right after the L2 handoff. The down-link traffic is more problematic since the CN will continue sending packets to the MN's old IP address as it is not aware of the change in the MN's IP address until the session has been updated.
The basic idea is to update the session so that the same media stream is sent, at the same time, to the MN and to all the RNs in the list previously built by the MN. In this way, regardless of which subnet/AP the MN will move to, the corresponding RN will be able to relay packets to it. If the MN does not change AP at all, nothing is lost as the MN is still receiving packets from the CN. After the MN has performed the L2 handoff and has connected to one of the RNs, it may send a second re-INVITE via the RN so that the CN sends packets to the current RN only, without involving the other RNs any longer. Once the authentication process successfully completes, communication via the AP can resume. At this point, one last session update is required so that the CN can send packets directly to the MN without any RN in between.
In order to send multiple copies of the same media stream to different nodes, that is, to the MN performing the handoff and its RNs, the MN can send to the CN a re-INVITE with an SDP body in the format described in RFC 3388 [46] and shown in Figure 12. In this particular format, multiple m lines are present, with multiple c lines, grouped together by using the same Flow Identification (FID). A station receiving a re-INVITE with an SDP part as shown in Figure 12 sends an audio stream to a client with IP address 131.160.1.112 on port 30000 (if the PCM µ-law codec is used) and to a client with IP address 131.160.1.111 on port 20000. In order for the same media stream to be sent to different clients at the same time, all the clients have to support the same codec [46]. In our case, we have to remember that RNs relay traffic to MNs; they do not play such traffic. Because of this, we can safely say that each RN supports any codec during the relay process, hence a copy of the media stream can always be sent to an RN by using the SDP format described in [46].
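Figure 12 itself is not reproduced here, but an SDP body of the kind described, with one audio stream duplicated to several destinations and grouped by FID as in RFC 3388, would look roughly like the one assembled below. The helper is a sketch using the illustrative addresses and ports mentioned in the text; it is not the exact figure from the paper:

```python
def build_fid_sdp(dests, codec="0 PCMU/8000"):
    """Build an RFC 3388-style SDP body that sends one audio stream to several destinations.

    dests: list of (ip, port) pairs, e.g. the MN itself plus each candidate RN.
    """
    payload_type = codec.split()[0]
    mids = " ".join(str(i + 1) for i in range(len(dests)))
    lines = ["v=0",
             "o=mn 2890844730 2890844731 IN IP4 131.160.1.112",
             "s=CR session update",
             "t=0 0",
             f"a=group:FID {mids}"]
    for i, (ip, port) in enumerate(dests, start=1):
        lines += [f"m=audio {port} RTP/AVP {payload_type}",
                  f"c=IN IP4 {ip}",
                  f"a=rtpmap:{codec}",
                  f"a=mid:{i}"]
    return "\r\n".join(lines) + "\r\n"

print(build_fid_sdp([("131.160.1.112", 30000),    # the MN itself
                     ("131.160.1.111", 20000)]))  # one candidate RN
```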
It is worthwhile to notice that in the session update procedure described above, no buffering is necessary. As we have explained in Section IX-D and shown in Table III, the L2+L3 handoff time is on the order of 16 ms for open networks, which is less than the packetization interval for typical VoIP traffic. When authentication is used (see Figure 10), the total L2+L3 handoff time is on the order of 21 ms. In both cases packet loss is negligible, hence making any buffering of packets unnecessary.
XI. LOAD BALANCING
CR can also play a role in AP load balancing. Today, there are many problems with the way MNs select the AP to connect to. The AP is selected according to the link signal strength and SNR levels, while other factors such as effective throughput, number of retries, number of collisions, packet loss, bit-rate or BER are not taken into account. This can cause an MN to connect to an AP with the best SNR but with low throughput and a high number of collisions and packet losses, because that AP is highly congested. If the MN disassociates or the AP deauthenticates it, the MN looks for a new candidate AP. Unfortunately, with very high probability, the MN will pick the same AP because its link signal strength and SNR are still the "best" available. The information regarding the congestion of the AP is completely ignored, and this bad behavior keeps repeating itself. This behavior can create situations where all users end up connecting to the "best" AP, creating the scenario depicted earlier while leaving other APs underutilized [47], [48].
CR can be very helpful in such a context. In particular, we can imagine a situation where an MN wants to gather statistics about the APs that it might move to next, that is, the APs that are present in its cache. In order to do so, the MN can ask other nodes to send statistics about those APs. Each node can collect different kinds of statistics, such as available throughput, bit-rate, packet loss and retry rate. Once these statistics have been gathered, they can be sent to the MN that requested them. At this point the MN has a clear picture of which APs are more congested and which can support the required QoS, and it can therefore make a smarter handoff decision. By using this approach we can achieve an even distribution of traffic flows among neighboring APs.
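As an illustration of how the gathered statistics might be combined, the sketch below ranks cached APs with a weighted score; the metric names, weights and sample values are assumptions, not part of CR:

```python
# Possible ranking of candidate APs once congestion statistics have been
# collected from cooperating nodes. Weights are illustrative only.
def rank_aps(candidates):
    # candidates: {bssid: {"snr": dB, "throughput": Mb/s, "loss": 0..1, "retry_rate": 0..1}}
    def score(stats):
        return (0.3 * stats["snr"]
                + 0.5 * stats["throughput"]
                - 20.0 * stats["loss"]
                - 10.0 * stats["retry_rate"])
    return sorted(candidates, key=lambda bssid: score(candidates[bssid]), reverse=True)

aps = {
    "00:11:22:33:44:55": {"snr": 35, "throughput": 2.0, "loss": 0.15, "retry_rate": 0.30},  # strong but congested
    "66:77:88:99:aa:bb": {"snr": 28, "throughput": 9.0, "loss": 0.01, "retry_rate": 0.05},  # weaker signal, lightly loaded
}
print(rank_aps(aps))  # the lightly loaded AP wins despite the lower SNR
```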
The details of this mechanism are reserved for future study but can be easily derived from the procedures earlier introduced for achieving fast L2 and L3 handoffs.
XII. AN ALTERNATIVE TO MULTICAST
Using IP multicast packets can become inefficient in highly congested environments with a dense distribution of MNs. In such environments, ad-hoc networks can be a good alternative to multicast. Switching back and forth between infrastructure mode and ad-hoc mode has already been used by MNs in order to share information for fault diagnosis [33]. As we pointed out in Section V-C, continuously switching between ad-hoc and infrastructure mode introduces synchronization problems and channel-switching delays, making this approach unusable for real-time traffic. However, even if only non-real-time traffic is present, synchronization problems could still arise, for example when switching to ad-hoc mode while an active TCP connection is open on the infrastructure network. Spending too long in ad-hoc mode might cause the TCP connection to time out; on the other hand, waiting too long in infrastructure mode might cause loss of data in the ad-hoc network.
In CR, MNs can exchange L2 and L3 information contained in their cache by using the mechanism used for relay as described in Section V-C. Following this approach, MNs can directly exchange information with each other without involving the AP and without having to switch their operating mode to ad-hoc. In particular, an MN can send broadcast and unicast packets such as INFOREQ and INFORESP with the To DS and From DS fields set to zero (see Section V-C). Because of this, only the MNs in the radio coverage of the first MN will be able to receive such packets. The AP will drop these packets since the To DS field is not set.
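For reference, the To DS and From DS flags live in the second octet of the IEEE 802.11 Frame Control field. The rough sketch below (not a full frame builder) shows the two settings relevant here:

```python
# Rough sketch of the 802.11 Frame Control field for a data frame exchanged
# directly between two MNs: both "To DS" and "From DS" are cleared, so the AP
# drops the frame rather than forwarding it, and only stations in radio range
# of the sender receive it.
FC_TYPE_DATA = 0b10

def frame_control(to_ds=False, from_ds=False, subtype=0):
    b0 = (subtype << 4) | (FC_TYPE_DATA << 2)       # protocol version 0, type = data
    b1 = (1 if to_ds else 0) | ((1 if from_ds else 0) << 1)
    return bytes([b0, b1])

print(frame_control().hex())                 # '0800' -> STA-to-STA (INFOREQ/INFORESP relay case)
print(frame_control(to_ds=True).hex())       # '0801' -> normal upstream traffic toward the AP
```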
Ad-hoc multi-hop routing can also be used when needed. This may be helpful, for example, in the case of R-MNs acquiring a new IP address for a new subnet while still in the old subnet (see Section IV-C), when the current AP and the new AP use two different channels. In such a case, a third node on the same channel as the R-MN could route packets between the R-MN and the A-MN by switching between the two channels of the two APs, thus leaving R-MN and A-MN operations unaffected. In this case we would not have synchronization problems, since the node switching between the two channels would have to switch only twice: once after receiving the IP REQ packet from the R-MN in order to send it to the A-MN, and a second time after receiving the IP RESP from the A-MN in order to send it to the R-MN.
An ad-hoc based approach, such as the relay mechanism presented in Section V-C, does not require any support from the infrastructure and represents an effective solution in congested and densely populated environments. On the other hand, ad-hoc communication between MNs would not work very well in networks with a small population of MNs, where each MN might be able to see only a very small number of other MNs at any given time.
MNs with two wireless cards could use one card to connect to the ad-hoc network and share information with other MNs, while having the other card connected to the AP. The two cards could also operate on two different access technologies such as cellular and 802.11.
If it is possible to introduce some changes in the infrastructure, we can minimize the use of multicast packets by using the SIP presence model [49]. In such a model we introduce a new presence service in which each subnet is a presentity. Each subnet has, for example, a contact list of all the A-MNs available in that subnet, so that the presence information is represented by the available A-MNs in the subnet. When an R-MN subscribes to this service, it receives presence information about the new subnet, namely its contacts, which are the available A-MNs in that subnet.
This approach could be more efficient in scenarios with a small number of users supporting CR. On the other hand, it would require changes in the infrastructure by introducing additional network elements. The presence and ad-hoc approaches are reserved for future study.
XIII. CONCLUSIONS AND FUTURE WORK
In this paper we have defined the Cooperative Roaming protocol. Such a protocol allows MNs to perform L2 and L3 handoffs seamlessly, with an average total L2+L3 handoff time of about 16 ms in an open network and of about 21 ms in an IEEE 802.11i network without requiring any changes to either the protocol or the infrastructure. Each of these values is less than half of the 50 ms requirement for realtime applications such as VoIP to achieve seamless handoffs. Furthermore, we are able to provide such a fast handoff regardless of the particular authentication mechanisms used while still preserving security and privacy.
MN cooperation has many advantages and does not introduce any significant disadvantage: in the worst-case scenario, MNs can fall back on the standard IEEE 802.11 mechanisms, achieving performance similar to a scenario with no cooperation.
Node cooperation can be useful in many other applications:
• In a multi-administrative-domain environment CR can help in discovering which APs are available for which domain. In this way an MN might decide to go to one AP/domain rather than some other AP/domain according to roaming agreements, billing, etc.
• In Section XI we have shown how CR can be used for load balancing. Following a very similar approach but using other metrics such as collision rate and available bandwidth, CR can also be used for admission control and call admission control.
• CR can help in propagating information about service availability. In particular, an MN might decide to perform a handoff to one particular AP because of the services that are available at that AP. A service might be a particular type of encryption, authentication, minimum guaranteed bit rate and available bandwidth or the availability of other types of networks such as Bluetooth, UWB and 3G cellular networks, for example.
• CR also provides advantages in terms of adaptation to changes in the network topology. In particular, when an MN finds some stale entries in its cache, it can update its cache and communicate such changes to the other MNs. This also applies to virtual changes of the network topology (e.g., changes in the APs' power levels), which might become more common with the deployment of IEEE 802.11h equipment.
• CR can also be used by MNs to negotiate and adjust their transmission power levels so as to achieve a minimum level of interference.
• In [26] Ramani et al. describe a passive scanning algorithm according to which an MN knows the exact moment when a particular AP will send its beacon frame. In this way the MN collects the statistics for the various APs using passive scanning but without having to wait for the whole beacon interval on each channel. This algorithm, however, requires all the APs in the network to be synchronized. By using a cooperative approach, we can have the various MNs sharing information about the beacon intervals of their APs. In this way, we only need to have the MNs synchronized amongst themselves (e.g., via NTP) without any synchronization required on the network side.
• Interaction between nodes in an infrastructure network and nodes in an ad-hoc/mesh network.
1) An MN in ad-hoc mode can send information about its ad-hoc network. In this way MNs of the infrastructure network can decide if it is convenient for them to switch to the ad-hoc network (this would also free resources on the infrastructure network). This, for example, can happen if there is a lack of coverage or if there is high congestion in the infrastructure network. Also, an MN might switch to an ad-hoc network if it has to recover some data available in the ad-hoc network itself (e.g., sensor networks).
2) If two parties are close to each other, they can decide to switch to the ad-hoc network discovered earlier and talk to each other without any infrastructure support. They might also create an ad-hoc network on their own using a default channel, if no other ad-hoc network is available.
As future work, we will look in more detail at application-layer mobility, load balancing and call admission control. We will investigate the possibility of having some network elements such as APs support A-MN and RN functionalities; this would be useful in scenarios where only few MNs support CR. Finally, we will look at how IEEE 802.21 [13] could integrate and extend CR.
| 10,914 |
cs0701107
|
2949624117
|
This paper presents a logic based approach to debugging Java programs. In contrast with traditional debugging we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of object-oriented programs debugging.
|
@cite_5 proposed a query-based debugger to understand object relationships. Their query language is expressed in the target object-oriented language itself (Self), and thus a programmer does not need to learn a new language. Queries consist of a search domain and a constraint. Lencevicius' query-based debugger provides incremental delivery of results, a feature that is useful for queries that take considerable time to find all answers.
|
{
"abstract": [
"Object relationships in modem software systems are becoming increasingly numerous and complex. Programmers who try to find violations of such relationships need new tools that allow them to explore objects in a large system more efficiently. Many existing debuggers present only a low-level, one-object-at-a-time view of objects and their relationships. We propose a new solution to overcome these problems: query-based debugging. The implementation of the query-based debugger described here offers programmers an effective query tool that allows efficient searching of large object spaces and quick verification of complex relationships. Even for programs that have large numbers of objects, the debugger achieves interactive response times for common queries by using a combination of fast searching primitives, query optimization, and incremental result delivery."
],
"cite_N": [
"@cite_5"
],
"mid": [
"1982328732"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
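A minimal sketch of this incremental recording idea, using invented Python tuples in place of the Prolog facts JavaTA actually stores, is:

```python
# Each assignment is logged as (event_id, variable, new_value); the value of a
# variable at some earlier event id is reconstructed by replaying the log up
# to that point. The trace content below is invented for illustration.
trace = [
    (3, "balance", 100),
    (7, "balance", 80),
    (12, "owner", "alice"),
    (15, "balance", None),   # the suspicious assignment
]

def value_at(trace, variable, event_id):
    value = "<unassigned>"
    for eid, var, new_value in trace:
        if eid > event_id:
            break
        if var == variable:
            value = new_value      # keep only the most recent assignment seen so far
    return value

print(value_at(trace, "balance", 10))   # -> 80
print(value_at(trace, "balance", 20))   # -> None
```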
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and the history of execution; and (3) a prototype for trace analysis of object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log (JEL) language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in JEL, a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event has been recorded due to the invocation of a method called main. The term l('Example.java ', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis, JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First, the user asks about the environment in which the exception is thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The question now is where the null pointer originated. Q2 asks for the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial call to method main and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2, it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. When looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in Section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in Section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program, it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending the constructed goals to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems described in [6,7] and JyLog [11] have implemented similar recording techniques based on XML logging; JEL, in contrast, describes the program trace as a set of Prolog facts. JEL can easily be extended to include a more sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and a thread, in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance that this method was invoked on, the method name, and the method arguments.
2. Method exit event records similar information as the method call event, in addition to the id of the corresponding method entry event and the returned value instead of the arguments.
3. Set Field event records the source location where the field was set to a new value, the instance or the class where this field is declared, and the new value.
4. Data Structure event is recorded after a method entry event, method exit event, or set field event if the type of the field being assigned a new value is a data structure. The data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure.
5. Step event describes the source code location, in addition to the names and values of the visible local variables at each step. Each step corresponds to the execution of a source line.
6. Exception event records the source code location, the exception instance, the exception message, and the catch location if it is caught or the uncaught keyword otherwise.
7. Thread Start and Thread Death events record the starting or the ending of a thread. The thread group is also recorded.
8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ), name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ), name , value ')'
set-field ::= setfield '(' location , ( instance | class ), name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
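As a concrete illustration of the grammar, the following hypothetical helper renders a method-call event in the JEL syntax, reproducing the shape of the example fact discussed in Section 2:

```python
# Hypothetical renderer for a JEL method-call fact: location as l/2, the owning
# class as c/1 and objects as o/2 terms, matching the example in the text.
def jel_method_call(event_id, thread, src_file, line, owner, name, args):
    arg_terms = ", ".join(args)
    return (f"event({event_id}, {thread}, "
            f"methodcall(l('{src_file}', {line}), {owner}, {name}, [{arg_terms}])).")

print(jel_method_call(1, "main", "Example.java", 20,
                      "c('Example')", "main", ["o('java.lang.String[]', 641)"]))
# -> event(1, main, methodcall(l('Example.java', 20), c('Example'), main, [o('java.lang.String[]', 641)])).
```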
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer by a process of manually stepping through the code, setting breakpoints, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history; they are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries regarding the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message can have no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to determine the execution path leading to a specific event, or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call chain rule in Prolog.
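The rule itself is given in Prolog in the paper's Fig. 4, which is not reproduced here. Purely as an illustration, an equivalent computation can be sketched in Python over an invented event list that mirrors the m1 → m2 example discussed next:

```python
# Sketch of the enclosing-method/call-chain computation over a plain Python
# list of events instead of a Prolog database. An event is (id, kind, payload);
# method exits carry the id of the matching call. The ids are illustrative.
events = [
    (4, "call", {"method": "m1"}),
    (6, "call", {"method": "m2"}),
    (9, "step", {"line": 14}),             # the event of interest, e
    (11, "exit", {"call_id": 6}),          # m2 returns
    (13, "exit", {"call_id": 4}),          # m1 returns
]

def call_chain(events, event_id):
    exits = {p["call_id"]: eid for eid, kind, p in events if kind == "exit"}
    chain = []
    for eid, kind, payload in events:
        if kind == "call" and eid < event_id:
            exit_id = exits.get(eid)
            if exit_id is None or exit_id > event_id:   # still on the stack when e ran
                chain.append((eid, payload["method"]))
    return chain                            # outermost call first

print(call_chain(events, 9))   # -> [(4, 'm1'), (6, 'm2')]
```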
The rule any enclosing method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, inside which event e was executed. Let id_c(m1), id_e(m1), id_c(m2), id_e(m2) and id(e) be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program has terminated normally. Note that id_c(m1) < id_c(m2) < id(e) < id_e(m2) < id_e(m1). Event e is enclosed in method m2, which is in turn enclosed in method m1; therefore, both m1 and m2 are considered enclosing methods. According to the any enclosing method rule, either CallId = id_c(m1), ExitId = id_e(m1) or CallId = id_c(m2), ExitId = id_e(m2). Therefore, the call chain leading to the execution of a given event consists of all the enclosing methods for that event. According to the call chain rule, OutList = [id_c(m1), id_c(m2)]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment for an event: the instance or the class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method for the event. Fig. 5 shows the rule that locates where an exception is thrown for a given thread. Once an exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The where rule specifies that the enclosing environment for a given event is the first call in the reversed call chain produced by the full detail call chain rule, which reverses the list of ids obtained from the call chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and of the public and protected member fields of its super classes. The rule object state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation is recorded as a method call to init. The domain of the object state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. The member fields of a class are recorded as a memberfields event. The object state helper rule selects, for each field, the value contributing to the desired state as the last value in the field history between id S and id E. The rule instance field history is discussed in the next section.
For example, let id_init and id_end be the boundaries of the search domain and f1, f2 the fields of the desired object. Suppose that the histories of fields f1 and f2 are {(id_i, f1, v_i), ..., (id_j, f1, v_j)} and {(id_n, f2, v_n), ..., (id_m, f2, v_m)} respectively, where v_k and id_k stand for the value of the field and the event id at which it was assigned. Note that id_init < id_i, id_j, id_n, id_m <= id_end, id_i < id_j, and id_n < id_m. Then the object's state is {(id_j, f1, v_j), (id_m, f2, v_m)}. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, or specific requirements, in order to be able to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes this idea so that the requirement can be imposed on any execution event, not only on method calls as in DBC. The following three factors can affect the execution of a given event within the enclosing method: (1) argument values; (2) the returned values of all method calls preceding the given event within the same enclosing method (Fig. 7 shows the pre event called methods rule); and (3) the values of local variables before the execution of the event. Thus these three factors are considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of the execution of an event on the enclosing method can appear in the following three areas: (1) the returned value of the enclosing method, (2) the methods that have been called after the execution of the event within the same enclosing method, and (3) the values of local variables after the execution of the event. DBC is not capable of specifying directly that certain other methods need to be called before or after a given method. Having recorded the execution history, it is possible to inspect whether certain methods have been called before or after a given event.
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which an erroneous behavior is suspected to occur. Such a feature is useful in dealing with large program traces, since it allows the programmer to filter out irrelevant data. Gathering Data. In his study of how bugs were found in 51 cases gathered from professional programmers, Eisenstadt [5] found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data regarding the execution of the program. JavaTA can gather data automatically regarding the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance field history. The rule specifies a segment of the history between id S and id E for an instance field F of object OName whose unique id is OId. The rule instance field value specifies that a value of a given field can be obtained from a set field event provided that its id is between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. Method calls that are involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike method calls in a call chain, in which the called method depends on the caller.
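As an illustration of the instance field history and object state queries described above, the following sketch keeps, for each field, the last recorded set-field value between S and E; the object id, field names and values are invented:

```python
# Illustrative sketch (not JavaTA's Prolog rules): set-field events are stored
# as (event_id, object_id, field, value) tuples; the field history is the slice
# of those events between S and E, and the object state keeps the last value
# recorded for each field in that slice.
set_field_events = [
    (5,  "obj42", "f1", 10),
    (8,  "obj42", "f2", "init"),
    (14, "obj42", "f1", 99),
    (21, "obj42", "f2", "done"),   # outside the queried interval, must be ignored
]

def field_history(events, obj_id, field, start_id, end_id):
    return [(eid, value) for eid, oid, f, value in events
            if oid == obj_id and f == field and start_id <= eid <= end_id]

def object_state(events, obj_id, start_id, end_id):
    state = {}
    for eid, oid, field, value in events:
        if oid == obj_id and start_id <= eid <= end_id:
            state[field] = (eid, value)    # later events overwrite earlier ones
    return state

print(field_history(set_field_events, "obj42", "f1", 1, 18))   # -> [(5, 10), (14, 99)]
print(object_state(set_field_events, "obj42", 1, 18))          # -> {'f1': (14, 99), 'f2': (8, 'init')}
```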
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
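Two of these yes/no queries, written as simple scans over an invented event list of the same shape used earlier (the event kinds and payload keys are assumptions, not JavaTA's actual schema), might look like:

```python
# Boolean trace queries: did a given method get called, and was a field ever
# assigned a particular value? Both are simple scans over the event list.
def was_method_called(events, method):
    return any(kind == "call" and p.get("method") == method for _, kind, p in events)

def was_field_set_to(events, field, value):
    return any(kind == "setfield" and p.get("field") == field and p.get("value") == value
               for _, kind, p in events)

events = [
    (4, "call", {"method": "m1"}),
    (6, "setfield", {"field": "result", "value": None}),
]
print(was_method_called(events, "m2"))            # False
print(was_field_set_to(events, "result", None))   # True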
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to that behind the Emacs system, which lets the user add macros dynamically to extend the system's functionality. Composed queries guarantee the flexibility and extendibility of our framework. Allowing the user to add queries dynamically results in a general-purpose static analyzer for program traces. However, we do not yet have experimental data to support this claim, especially for large program traces or more complicated analyses.
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view for a specific task as a finite automaton. The debugger allows the programmer to inspect the progress of the task's execution. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that the analysis used in JavaTA is post-mortem, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique to locate errors. This technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the differences between data structure contents, call chains and much more. Comparative queries can also be applied to isolating errors related to software maintenance, by posing a query on two runs obtained from two versions and comparing the query results.
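A small sketch of this "Dump & Diff" adaptation, comparing the textual dumps of two query results with Python's difflib (the result lists are invented):

```python
# Dump the answers of the same query over two runs (or of two queries over one
# run) as text lines and diff them to highlight where the behaviors diverge.
import difflib

run1 = ["call m1(arg=5)", "call m2(arg=5)", "return m2 -> 25"]
run2 = ["call m1(arg=5)", "call m2(arg=None)", "exception NullPointerException"]

for line in difflib.unified_diff(run1, run2, fromfile="run1", tofile="run2", lineterm=""):
    print(line)
```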
Save Query Answers. Evaluating a query on a large program history is costly and time-consuming. In many debugging scenarios the programmer may go back to examine the results of previous queries or may want to compare them. Re-computing a query on such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] allows for data provisioning to ease the debugging process; JavaTA adapts this technique because of the cost associated with evaluating queries on large program traces.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work on JavaTA is still in progress. Currently we are working on a programmable tool interface to JavaTA's features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations. We plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics, in terms of both the space and the time needed for various types of queries, and we are interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic based approach to debugging Java programs. In contrast with traditional debugging we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of object-oriented programs debugging.
|
Recently, PQL (Program Query Language) was developed by @cite_17 to query over source code and program traces for finding errors and security flaws in programs. Queries may formulate application-specific code patterns that can result in vulnerabilities at run-time. Queries are translated to Datalog (which is essentially declarative Prolog without function symbols), and provide the ability to take an action once a match is found. A combination of static and dynamic analyses is performed to answer queries. The PQL compiler generates code that is woven into the target application and matched against a history of relevant events at execution time. A number of interesting security violations were found by this technique.
|
{
"abstract": [
"A number of effective error detection tools have been built in recent years to check if a program conforms to certain design rules. An important class of design rules deals with sequences of events asso-ciated with a set of related objects. This paper presents a language called PQL (Program Query Language) that allows programmers to express such questions easily in an application-specific context. A query looks like a code excerpt corresponding to the shortest amount of code that would violate a design rule. Details of the tar-get application's precise implementation are abstracted away. The programmer may also specify actions to perform when a match is found, such as recording relevant information or even correcting an erroneous execution on the fly.We have developed both static and dynamic techniques to find solutions to PQL queries. Our static analyzer finds all potential matches conservatively using a context-sensitive, flow-insensitive, inclusion-based pointer alias analysis. Static results are also use-ful in reducing the number of instrumentation points for dynamic analysis. Our dynamic analyzer instruments the source program to catch all violations precisely as the program runs and to optionally perform user-specified actions.We have implemented the techniques described in this paper and found 206 errors in 6 large real-world open-source Java applica-tions containing a total of nearly 60,000 classes. These errors are important security flaws, resource leaks, and violations of consis-tency invariants. The combination of static and dynamic analysis proves effective at addressing a wide range of debugging and pro-gram comprehension queries. We have found that dynamic analysis is especially suitable for preventing errors such as security vulner-abilities at runtime."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2134429122"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and the history of execution; and (3) a prototype for trace analysis of object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log (JEL) language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in JEL, a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event has been recorded due to the invocation of a method called main. The term l('Example.java ', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis, JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First, the user asks about the environment in which the exception is thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The question now is where the null pointer originated. Q2 asks for the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial call to method main and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2, it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. When looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in Section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in Section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program, it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending the constructed goals to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems described in [6,7] and JyLog [11] have implemented similar recording techniques based on XML logging; JEL, in contrast, describes the program trace as a set of Prolog facts. JEL can easily be extended to include a more sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and a thread, in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance that this method was invoked on, the method name, and the method arguments.
2. Method exit event records similar information as the method call event, in addition to the id of the corresponding method entry event and the returned value instead of the arguments.
3. Set Field event records the source location where the field was set to a new value, the instance or the class where this field is declared, and the new value.
4. Data Structure event is recorded after a method entry event, method exit event, or set field event if the type of the field being assigned a new value is a data structure. The data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure.
5. Step event describes the source code location, in addition to the names and values of the visible local variables at each step. Each step corresponds to the execution of a source line.
6. Exception event records the source code location, the exception instance, the exception message, and the catch location if it is caught or the uncaught keyword otherwise.
7. Thread Start and Thread Death events record the starting or the ending of a thread. The thread group is also recorded.
8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ), name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ), name , value ')'
set-field ::= setfield '(' location , ( instance | class ), name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer by a process of manually stepping through the code, setting breakpoints, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history; they are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries regarding the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message can have no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to determine the execution path leading to a specific event, or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call chain rule in Prolog.
The rule any enclosing method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, inside which event e was executed. Let id_c(m1), id_e(m1), id_c(m2), id_e(m2) and id(e) be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program has terminated normally. Note that id_c(m1) < id_c(m2) < id(e) < id_e(m2) < id_e(m1). Event e is enclosed in method m2, which is in turn enclosed in method m1; therefore, both m1 and m2 are considered enclosing methods. According to the any enclosing method rule, either CallId = id_c(m1), ExitId = id_e(m1) or CallId = id_c(m2), ExitId = id_e(m2). Therefore, the call chain leading to the execution of a given event consists of all the enclosing methods for that event. According to the call chain rule, OutList = [id_c(m1), id_c(m2)]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment for an event: the instance or the class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method for the event. Fig. 5 shows the rule that locates where an exception is thrown for a given thread. Once an exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The where rule specifies that the enclosing environment for a given event is the first call in the reversed call chain produced by the full detail call chain rule, which reverses the list of ids obtained from the call chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and of the public and protected member fields of its super classes. The rule object state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation is recorded as a method call to init. The domain of the object state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. The member fields of a class are recorded as a memberfields event. The object state helper rule selects, for each field, the value contributing to the desired state as the last value in the field history between id S and id E. The rule instance field history is discussed in the next section.
For example, let id_init and id_end be the boundaries of the search domain and f1, f2 the fields of the desired object. Suppose that the histories of fields f1 and f2 are {(id_i, f1, v_i), ..., (id_j, f1, v_j)} and {(id_n, f2, v_n), ..., (id_m, f2, v_m)} respectively, where v_k and id_k stand for the value of the field and the event id at which it was assigned. Note that id_init < id_i, id_j, id_n, id_m <= id_end, id_i < id_j, and id_n < id_m. Then the object's state is {(id_j, f1, v_j), (id_m, f2, v_m)}. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, or specific requirements, in order to be able to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes this idea so that the requirement can be imposed on any execution event, not only on method calls as in DBC. The following three factors can affect the execution of a given event within the enclosing method: (1) argument values; (2) the returned values of all method calls preceding the given event within the same enclosing method (Fig. 7 shows the pre event called methods rule); and (3) the values of local variables before the execution of the event. Thus these three factors are considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of the execution of an event on the enclosing method can appear in three areas: (1) the returned value of the enclosing method, (2) the methods that have been called after the execution of the event within the same enclosing method, and (3) the values of local variables after the execution of the event. DBC cannot directly specify that certain other methods must be called before or after a given method. Having recorded the execution history, it is possible to inspect whether a certain method has been called before or after a given event.
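Figures 4 through 6 are not reproduced in this text. Purely as a hedged illustration of the style of rule discussed in this subsection, the sketch below shows how enclosing-method, call-chain, and where rules might be written over a JEL-style event(Id, Thread, Payload) database; the predicate and argument names are illustrative assumptions, not JavaTA's actual rules, and standard list predicates such as last/2 are assumed to be available.
% Assumed JEL facts: event(Id, Thread, Payload), e.g.
% event(3, main, methodcall(l('Example.java', 5), o('A', 7), m2, [null])).
any_enclosing_method(EventId, CallId, ExitId) :-
    event(EventId, Thread, _),
    event(CallId, Thread, methodcall(_Loc, _Target, _Name, _Args)),
    event(ExitId, Thread, methodexit(CallId, _, _, _, _)),
    CallId < EventId,
    EventId < ExitId.

% The call chain of an event: the sorted ids of all enclosing calls,
% outermost call first.
call_chain(EventId, OutList) :-
    findall(CallId, any_enclosing_method(EventId, CallId, _), Ids),
    sort(Ids, OutList).

% The enclosing environment of an event: the innermost enclosing call.
where(EventId, env(InstanceOrClass, Name, Args)) :-
    call_chain(EventId, Chain),
    last(Chain, CallId),
    event(CallId, _, methodcall(_, InstanceOrClass, Name, Args)).
A goal such as where(E, Env), posed for the id of an uncaught exception event, would report the innermost enclosing call and its arguments, in the spirit of Q1/A1 of Table 1.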
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which an erroneous behavior is suspected to occur. Such a feature is useful in dealing with large program traces because it allows the programmer to filter out irrelevant data. Gathering Data. Eisenstadt [5], in his study of how bugs were found in 51 cases gathered from professional programmers, found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data about the execution of the program. JavaTA can automatically gather data regarding the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; and (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance field history. The rule specifies a segment of the history between id S and id E for an instance field F of object OName whose unique id is OId. The rule instance field value specifies that a value of a given field can be obtained from a set field event provided that its id is between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike method calls in a call chain, in which the called method depends on the caller.
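As a rough sketch of the call tree idea just described, and reusing an any_enclosing_method/3 helper of the kind sketched earlier over assumed event/3 facts, a dynamic call tree could be built as follows; this is an illustration, not JavaTA's actual rule.
% Sketch: ChildCallId is a direct callee of ParentCallId if the parent
% is its innermost enclosing call.
direct_callee(ParentCallId, ChildCallId) :-
    event(ChildCallId, _, methodcall(_, _, _, _)),
    any_enclosing_method(ChildCallId, ParentCallId, _),
    \+ (any_enclosing_method(ChildCallId, Mid, _), Mid > ParentCallId).

% The call tree rooted at a method call, as a nested term.
call_tree(CallId, tree(CallId, SubTrees)) :-
    findall(T, (direct_callee(CallId, C), call_tree(C, T)), SubTrees).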
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
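Several of these yes/no queries reduce to simple existence checks over the event database. The following hedged sketches, with assumed predicate names over JEL-style facts, illustrate three of them.
% Sketch: (2) was a given method called?
was_method_called(Name) :-
    event(_, _, methodcall(_, _, Name, _)), !.

% Sketch: (5) was a specific exception caught?
was_exception_caught(ExClass) :-
    event(_, _, exception(_, o(ExClass, _), _, CatchLoc)),
    CatchLoc \== uncaught, !.

% Sketch: (7) has a given thread exited?
has_thread_exited(Thread) :-
    event(_, Thread, threaddeath(_)), !.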
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to the one behind the Emacs system, which allows the user to add macros dynamically in order to extend the system's functionality. Composed queries guarantee the flexibility and extensibility of our framework. Allowing the user to add queries dynamically results in a general-purpose analyzer for program traces. However, we do not have experimental data to support this claim, especially for large program traces or for more complicated analyses.
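As a minimal sketch of what composing and saving a query could look like on the Prolog side (the actual JavaTA interface may differ, and the query and predicate names here are hypothetical), a new rule can be registered at run time with assertz/1.
% Sketch: register a composed query at run time.
add_composed_query(Head, Body) :-
    assertz((Head :- Body)).

% Example: a hypothetical composed query for calls that receive a null argument.
% ?- add_composed_query(null_arg_call(Id),
%        (event(Id, _, methodcall(_, _, _, Args)), member(null, Args))).
% ?- null_arg_call(Id).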
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view for a specific task as a finite automaton. The debugger then allows the programmer to inspect the progress of the task's execution. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that JavaTA uses postmortem analysis, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique to locate errors. This technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the difference between data structure contents, call chains, and much more. Comparative queries can also be applied to isolating errors related to software maintenance, by posing a query on two runs obtained from two versions of the program and comparing the query results.
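In this adapted, query-level form of Dump & Diff, comparing two answers amounts to a set difference over the two result lists. A minimal sketch using the standard subtract/3 list predicate is shown below; the predicate names, and the call_chain/2 helper used in the usage comment, are assumptions rather than JavaTA's built-ins.
% Sketch: answers present in one result but not the other.
diff_answers(Answers1, Answers2, OnlyIn1, OnlyIn2) :-
    subtract(Answers1, Answers2, OnlyIn1),
    subtract(Answers2, Answers1, OnlyIn2).

% e.g. comparing two call chains obtained from two queries or two runs:
% ?- call_chain(14, C1), call_chain(23, C2), diff_answers(C1, C2, D1, D2).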
Save Query Answers. Computing a query over a large program history is costly and time consuming. In many debugging scenarios the programmer may go back to examine the results of previous queries or may wish to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] allows for data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with query evaluation on large program traces.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. Currently we are working on a programmable tool interface to JavaTA features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations. We plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics in terms of both the space and the time needed for various types of queries. We are also interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic-based approach to debugging Java programs. In contrast with traditional debugging, we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of debugging object-oriented programs.
|
@cite_15 proposed PTQL (Program Trace Query Language), a relational query language designed for querying program traces. Similar in goals to PQL, PTQL employs an SQL-like query language. Partiqle compiles PTQL queries into instrumentation inserted into a given Java program. PTQL queries can be used to specify what is to be recorded during program execution, and hence this technique can be effective with programs that generate many irrelevant events.
|
{
"abstract": [
"Instrumenting programs with code to monitor runtime behavior is a common technique for profiling and debugging. In practice, instrumentation is either inserted manually by programmers, or automatically by specialized tools that monitor particular properties. We propose Program Trace Query Language (PTQL), a language based on relational queries over program traces, in which programmers can write expressive, declarative queries about program behavior. We also describe our compiler, Partiqle . Given a PTQL query and a Java program, Partiqle instruments the program to execute the query on-line. We apply several PTQL queries to a set of benchmark programs, including the Apache Tomcat Web server. Our queries reveal significant performance bugs in the jack SpecJVM98 benchmark, in Tomcat, and in the IBM Java class library, as well as some correct though uncomfortably subtle code in the Xerces XML parser. We present performance measurements demonstrating that our prototype system has usable performance."
],
"cite_N": [
"@cite_15"
],
"mid": [
"2162126440"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system also has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and over the history of execution; and (3) a prototype for trace analysis of object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event was recorded due to the invocation of a method called main. The term l('Example.java ', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First, the user asks about the environment where the exception was thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The next question is where the null pointer originated. Q2 requests the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial call to method main and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2 it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. Looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in section 4.
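The concrete query syntax is not shown in this text. Purely as an illustration, goals in the spirit of Q1 and Q2 might be posed at a Prolog top level as follows; the goal names are hypothetical and are not necessarily the built-in primitives that JavaTA exposes.
% Q1: in which environment was the uncaught exception of thread main thrown?
?- where_exception(main, Env).

% Q2: the full-detail call chain leading to event id 14.
?- full_detail_call_chain(14, Chain).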
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending the constructed goals to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems of [6,7] and JyLog [11] have implemented similar recording techniques based on logging in XML; JEL, however, describes the program trace as a set of Prolog facts. JEL can be easily extended to include a sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and thread in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance on which the method was invoked, the method name, and the method arguments. 2. Method exit event records information similar to the method call event, in addition to the id of the corresponding method entry event, and the returned value instead of the arguments. 3. Set Field event records the source location where the field was set to a new value, the instance or the class where the field is declared, and the new value. 4. Data Structure event is recorded after a method entry event, a method exit event, or a set field event if the type of the field being assigned a new value is a data structure. The data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure. 5.
Step event describes the source code location in addition to the names and values of the visible local variables at each step. Each step corresponds to the execution of a source line. 6. Exception event records the source code location, the exception instance, the exception message, and the catch location if the exception is caught, or the uncaught keyword otherwise. 7. Thread Start and Thread Death events record the starting or ending of a thread. The thread group is also recorded. 8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of the JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ) , name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ) , name , value ')'
set-field ::= setfield '(' location , ( instance | class ) , name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
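To make the grammar concrete, the following hand-written facts have the shape JEL prescribes. The first two loosely follow the description of the example trace in Section 2; the remaining ids, locations, and values are invented purely for illustration.
event(0, main, threadstart(main)).
event(1, main, methodcall(l('Example.java', 20), c('Example'), main,
                          [o('java.lang.String[]', 641)])).
event(2, main, setfield(l('Example.java', 21), c('Example'), count, 0)).
event(3, main, methodexit(1, l('Example.java', 24), c('Example'), main, void)).
event(4, main, threaddeath(main)).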
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer through a process of manually stepping through the code, setting breakpoints, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events, which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on the execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history, which are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries regarding the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message has no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to identify the execution path leading to a specific event or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call chain rule in Prolog.
The rule any enclosing method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, in which event e was executed. Let $id^{c}_{m1}$, $id^{e}_{m1}$, $id^{c}_{m2}$, $id^{e}_{m2}$, and $id_{e}$ be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program terminated normally. Note that $id^{c}_{m1} < id^{c}_{m2} < id_{e} < id^{e}_{m2} < id^{e}_{m1}$. Event e is enclosed in method m2, which is enclosed in method m1; therefore, both m1 and m2 are considered enclosing methods. According to the rule any enclosing method, either CallId = $id^{c}_{m1}$ and ExitId = $id^{e}_{m1}$, or CallId = $id^{c}_{m2}$ and ExitId = $id^{e}_{m2}$. The call chain leading to the execution of a given event therefore consists of all the enclosing methods for that event; according to the call chain rule, OutList = [$id^{c}_{m1}$, $id^{c}_{m2}$]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment for the event; the instance or class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method for the event. Fig. 5 shows the rule where exception is thrown for a given thread. Once an uncaught exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The rule where specifies that the enclosing environment for a given event is the first call in the reversed call chain produced by the full detail call chain rule, which reverses the list of ids obtained from the call chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and the public and protected member fields of its superclasses. The rule object state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation event is recorded as a method call to init. The domain of the object state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. Member fields of a class are recorded as a memberfields event. The object state helper rule specifies each field value contributing to the desired state as the last value in the field history between id S and id E. The rule instance field history is discussed in the next section.
For example, let $id_{init}$ and $id_{end}$ be the boundaries of the search domain, and let $f_1$ and $f_2$ be fields of the desired object. Suppose that the histories of fields $f_1$ and $f_2$ are $\{\{id_i, f_1, v_i\}, \ldots, \{id_j, f_1, v_j\}\}$ and $\{\{id_n, f_2, v_n\}, \ldots, \{id_m, f_2, v_m\}\}$ respectively, where $v_k$ and $id_k$ stand for the value of the field and the id of the event at which it was assigned. Note that $id_{init} < id_i, id_j, id_n, id_m \le id_{end}$, with $id_i < id_j$ and $id_n < id_m$. Then the object's state is $\{\{id_j, f_1, v_j\}, \{id_m, f_2, v_m\}\}$. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, or specific requirements, in order to be able to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes such requirements so that they can be imposed on any execution event and not only on method calls as in DBC. The following three factors can affect the execution of a given event within its enclosing method: (1) argument values; (2) the returned values of all method calls preceding the given event within the same enclosing method (Fig. 7 shows the pre event called methods rule); and (3) the values of local variables before the execution of the event. These three factors are therefore considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of the execution of an event on the enclosing method can appear in three areas: (1) the returned value of the enclosing method, (2) the methods that have been called after the execution of the event within the same enclosing method, and (3) the values of local variables after the execution of the event. DBC cannot directly specify that certain other methods must be called before or after a given method. Having recorded the execution history, it is possible to inspect whether a certain method has been called before or after a given event.
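The object state rule of Fig. 6, discussed earlier in this subsection, is not reproduced in this text. The following is only a hedged sketch of how such a reconstruction could be written over JEL-style event/3 facts; the predicate names, the use of init for instantiation, and the shape of the memberfields payload are assumptions, not JavaTA's actual code.
% Sketch: reconstruct the state of object OName/OId at event id E.
object_state(OName, OId, E, State) :-
    event(S, _, methodcall(_, o(OName, OId), init, _)),  % instantiation event
    S =< E,
    event(_, _, memberfields(c(OName), Fields)),
    findall(F-V,
            (member(F, Fields), last_field_value(OName, OId, F, S, E, V)),
            State).

% The contributing value of a field is the last one set between S and E.
last_field_value(OName, OId, F, S, E, V) :-
    findall(Id-Val,
            (event(Id, _, setfield(_, o(OName, OId), F, Val)),
             S =< Id, Id =< E),
            Pairs),
    last(Pairs, _-V).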
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which an erroneous behavior is suspected to occur. Such a feature is useful in dealing with large program traces because it allows the programmer to filter out irrelevant data. Gathering Data. Eisenstadt [5], in his study of how bugs were found in 51 cases gathered from professional programmers, found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data about the execution of the program. JavaTA can automatically gather data regarding the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; and (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance field history. The rule specifies a segment of the history between id S and id E for an instance field F of object OName whose unique id is OId. The rule instance field value specifies that a value of a given field can be obtained from a set field event provided that its id is between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike method calls in a call chain, in which the called method depends on the caller.
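The rule of Fig. 8 itself is not included here. As a hedged sketch over JEL-style event/3 facts, with assumed predicate names, an instance field history can be collected as follows.
% Sketch: the history of field F of object OName/OId between ids S and E.
instance_field_history(OName, OId, F, S, E, History) :-
    findall(Id-Value,
            (event(Id, _, setfield(_, o(OName, OId), F, Value)),
             S =< Id, Id =< E),
            History).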
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to the one behind the Emacs system, which allows the user to add macros dynamically in order to extend the system's functionality. Composed queries guarantee the flexibility and extensibility of our framework. Allowing the user to add queries dynamically results in a general-purpose analyzer for program traces. However, we do not have experimental data to support this claim, especially for large program traces or for more complicated analyses.
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view for a specific task as a finite automaton. The debugger then allows the programmer to inspect the progress of the task's execution. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that JavaTA uses postmortem analysis, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique to locate errors. This technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the difference between data structure contents, call chains, and much more. Comparative queries can also be applied to isolating errors related to software maintenance, by posing a query on two runs obtained from two versions of the program and comparing the query results.
Save Query Answers. Computing a query over a large program history is costly and time consuming. In many debugging scenarios the programmer may go back to examine the results of previous queries or may wish to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] allows for data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with query evaluation on large program traces.
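A minimal sketch of saving query answers on the Prolog side is shown below, assuming queries are represented as goals missing their final answer argument; this illustrates the idea rather than JavaTA's implementation, and the call_chain/2 goal in the usage comment is an assumed helper.
:- dynamic saved_answer/2.

% Sketch: answer a query from the cache, or compute and remember it.
query_with_cache(Query, Answer) :-
    (   saved_answer(Query, Answer)
    ->  true
    ;   call(Query, Answer),
        assertz(saved_answer(Query, Answer))
    ).

% e.g. ?- query_with_cache(call_chain(14), Chain).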
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. Currently we are working on a programmable tool interface to JavaTA features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations. We plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics in terms of both the space and the time needed for various types of queries. We are also interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic-based approach to debugging Java programs. In contrast with traditional debugging, we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of debugging object-oriented programs.
|
Hy @math @cite_12 is a visual debugger for distributed programs. The system works as follows: the program is instrumented to obtain a trace, which is used to build a database implemented in CORAL. Programmers can specify debugging queries and visualizations using a visual declarative query language called GraphLog. Once formulated, these visual queries can be saved and applied to other programs, since the queries are application independent. This technique allows the programmer to visualize a specific program behavior pattern and filter out irrelevant events. Hy @math performs static trace analysis and offers a simple form of postmortem dynamic trace analysis by animating the program trace.
|
{
"abstract": [
"A programmer attempting to understand and debug a distributed program deals with large volumes of trace data that describe the program's behaviour. Visualization is widely believed to help in this and similar tasks. We contend that visualization is indeed useful, but only if accompanied of powerful data management facilities to support abstraction and filtering. The Hy+ visualization system and GraphLog query language provide these facilities. They support not just a fixed way of visualizing data, but visualizations that can be specified and manipulated through declarative queries, like data are manipulated in a database. In this paper we show how the Hy+ GraphLog system can be used by distributed program debuggers to meet their information manipulation and visualization goals."
],
"cite_N": [
"@cite_12"
],
"mid": [
"1522718855"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system also has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and over the history of execution; and (3) a prototype for trace analysis of object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event was recorded due to the invocation of a method called main. The term l('Example.java ', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First, the user asks about the environment where the exception was thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The next question is where the null pointer originated. Q2 requests the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial call to method main and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2 it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. Looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending the constructed goals to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems of [6,7] and JyLog [11] have implemented similar recording techniques based on logging in XML; JEL, however, describes the program trace as a set of Prolog facts. JEL can be easily extended to include a sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and thread in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance on which the method was invoked, the method name, and the method arguments. 2. Method exit event records information similar to the method call event, in addition to the id of the corresponding method entry event, and the returned value instead of the arguments. 3. Set Field event records the source location where the field was set to a new value, the instance or the class where the field is declared, and the new value. 4. Data Structure event is recorded after a method entry event, a method exit event, or a set field event if the type of the field being assigned a new value is a data structure. The data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure. 5.
Step event describes the source code location in addition to the names and values of the visible local variables at each step. Each step corresponds to the execution of a source line. 6. Exception event records the source code location, the exception instance, the exception message, and the catch location if the exception is caught, or the uncaught keyword otherwise. 7. Thread Start and Thread Death events record the starting or ending of a thread. The thread group is also recorded. 8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of the JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ) , name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ) , name , value ')'
set-field ::= setfield '(' location , ( instance | class ) , name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer through a process of manually stepping through the code, setting breakpoints, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events, which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on the execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history, which are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries regarding the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message has no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to identify the execution path leading to a specific event or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call chain rule in Prolog.
The rule any enclosing method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, in which event e was executed. Let $id^{c}_{m1}$, $id^{e}_{m1}$, $id^{c}_{m2}$, $id^{e}_{m2}$, and $id_{e}$ be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program terminated normally. Note that $id^{c}_{m1} < id^{c}_{m2} < id_{e} < id^{e}_{m2} < id^{e}_{m1}$. Event e is enclosed in method m2, which is enclosed in method m1; therefore, both m1 and m2 are considered enclosing methods. According to the rule any enclosing method, either CallId = $id^{c}_{m1}$ and ExitId = $id^{e}_{m1}$, or CallId = $id^{c}_{m2}$ and ExitId = $id^{e}_{m2}$. The call chain leading to the execution of a given event therefore consists of all the enclosing methods for that event; according to the call chain rule, OutList = [$id^{c}_{m1}$, $id^{c}_{m2}$]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment for the event; the instance or class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method for the event. Fig. 5 shows the rule where exception is thrown for a given thread. Once an uncaught exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The rule where specifies that the enclosing environment for a given event is the first call in the reversed call chain produced by the full detail call chain rule, which reverses the list of ids obtained from the call chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and the public and protected member fields of its superclasses. The rule object state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation event is recorded as a method call to init. The domain of the object state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. Member fields of a class are recorded as a memberfields event. The object state helper rule specifies each field value contributing to the desired state as the last value in the field history between id S and id E. The rule instance field history is discussed in the next section.
For example, let $id_{init}$ and $id_{end}$ be the boundaries of the search domain, and let $f_1$ and $f_2$ be fields of the desired object. Suppose that the histories of fields $f_1$ and $f_2$ are $\{\{id_i, f_1, v_i\}, \ldots, \{id_j, f_1, v_j\}\}$ and $\{\{id_n, f_2, v_n\}, \ldots, \{id_m, f_2, v_m\}\}$ respectively, where $v_k$ and $id_k$ stand for the value of the field and the id of the event at which it was assigned. Note that $id_{init} < id_i, id_j, id_n, id_m \le id_{end}$, with $id_i < id_j$ and $id_n < id_m$. Then the object's state is $\{\{id_j, f_1, v_j\}, \{id_m, f_2, v_m\}\}$. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, or specific requirements, in order to be able to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes such requirements so that they can be imposed on any execution event and not only on method calls as in DBC. The following three factors can affect the execution of a given event within its enclosing method: (1) argument values; (2) the returned values of all method calls preceding the given event within the same enclosing method (Fig. 7 shows the pre event called methods rule); and (3) the values of local variables before the execution of the event. These three factors are therefore considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of the execution of an event on the enclosing method can appear in three areas: (1) the returned value of the enclosing method, (2) the methods that have been called after the execution of the event within the same enclosing method, and (3) the values of local variables after the execution of the event. DBC cannot directly specify that certain other methods must be called before or after a given method. Having recorded the execution history, it is possible to inspect whether a certain method has been called before or after a given event.
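The pre event called methods rule of Fig. 7 is not reproduced here. A hedged sketch of such a check over JEL-style event/3 facts, reusing an any_enclosing_method/3 helper of the kind sketched earlier, might look as follows; the names are assumptions, not JavaTA's actual rule.
% Sketch: calls (with their return values) completed before EventId
% inside the same (innermost) enclosing method.
pre_event_called_methods(EventId, Calls) :-
    any_enclosing_method(EventId, EnclCall, _),
    \+ (any_enclosing_method(EventId, C, _), C > EnclCall),  % innermost one
    findall(Name-Ret,
            (event(CallId, _, methodcall(_, _, Name, _)),
             event(ExitId, _, methodexit(CallId, _, _, _, Ret)),
             EnclCall < CallId, ExitId < EventId),
            Calls).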
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which an erroneous behavior is suspected to occur. Such a feature is useful in dealing with large program traces because it allows the programmer to filter out irrelevant data. Gathering Data. Eisenstadt [5], in his study of how bugs were found in 51 cases gathered from professional programmers, found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data about the execution of the program. JavaTA can automatically gather data regarding the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; and (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance field history. The rule specifies a segment of the history between id S and id E for an instance field F of object OName whose unique id is OId. The rule instance field value specifies that a value of a given field can be obtained from a set field event provided that its id is between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike method calls in a call chain, in which the called method depends on the caller.
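The execution history subset capability described at the start of this subsection can be sketched as a simple filter over event ids; the predicate name below is assumed, and the fact shape follows JEL.
% Sketch: the sub-history between ids S and E, inclusive.
events_between(S, E, Events) :-
    findall(event(Id, Thread, Payload),
            (event(Id, Thread, Payload), S =< Id, Id =< E),
            Events).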
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to that behind the Emacs system, which allows the user to add macros dynamically in order to extend the system's functionality. Composed queries ensure the flexibility and extendibility of our framework. Allowing the user to add queries dynamically results in a general-purpose static analyzer for program traces. However, we do not yet have experimental data to support this claim, especially for large program traces or for more complicated analyses.
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view of a specific task as a finite automaton. The debugger allows the programmer to inspect the task's execution progress. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that JavaTA uses postmortem analysis, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique for locating errors. The technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the differences between data structure contents, call chains, and more. Comparative queries can also be applied to isolating errors introduced during software maintenance by posing the same query on two runs obtained from two versions and comparing the results.
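The composed rule of Fig. 9 is not reproduced in this text; as a rough, hypothetical sketch of how such a scenario rule could be written over JEL facts, suppose the three login steps are implemented by methods named getUserName, getPassword and verify, and that verify's recorded return value is true on success (these names and conventions are assumptions for illustration only).

% login(Status): success if the three steps completed in order on the same
% thread and verify returned true; failure otherwise.
login(success) :-
    event(I1, T, methodexit(_, _, _, getUserName, _)),
    event(I2, T, methodexit(_, _, _, getPassword, _)),
    event(I3, T, methodexit(_, _, _, verify, true)),
    I1 < I2, I2 < I3, !.
login(failure).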
Save Query Answers. Computing a query over a large program history is costly and time-consuming. In many debugging scenarios the programmer may want to go back and examine the results of previous queries, or to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] provides data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with evaluating queries over a large program trace.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. We are currently working on a programmable tool interface to JavaTA's features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations. We plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics, in terms of both space and time, of the various types of queries, and we are interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic based approach to debugging Java programs. In contrast with traditional debugging we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of object-oriented programs debugging.
|
JIVE's @cite_1 @cite_9 (Java Interactive Visualization Engine) design is based on the following seven criteria: (1) depict objects as environments of method execution; (2) display object states at different levels of granularity; (3) provide a sequence diagram to capture the history of execution; (4) support forwards and backwards execution of programs; (5) support queries on the runtime state; (6) produce clear and legible drawings; (7) use existing Java technologies. JIVE interacts with the JPDA to extract the program trace. On-line dynamic trace analysis is applied while the program runs for the first time in the forwards direction, and postmortem trace analysis is applied in the backwards direction, or in the forwards direction once the program terminates.
|
{
"abstract": [
"A novel approach to the runtime visualization and analysis of object-oriented programs is presented and illustrated through a prototype system called JIVE: Java Interactive Visualization Environment. The main contributions of JIVE are: multiple concurrent representations of program state and execution history; support for forward and reverse execution; and graphical queries over program execution. This model facilitates program understanding and interactive debugging. Our visualization of runtime states clarifies the important point that objects are environments of execution. The history of object interaction is displayed via sequence diagrams, and in this way we help close the loop between design-time and run-time representations. Interactive execution is made possible by maintaining a runtime history database, which may be queried for information on variable behavior, method executions, and object interactions. We illustrate the capabilities of this system through examples. JIVE is implemented using the Java Platform Debugger Architecture and supports the Java language and libraries, including multithreaded and GUI applications.",
"We describe a novel approach to runtime visualization of object-oriented programs. Our approach features: visualizations of execution state and history; forward and reverse execution; interactive queries during program execution; and advanced drawing capabilities involving a combination of compile-time and runtime-analysis. Our methodology is realized in a software tool called JIVE, for Java Interactive Visualization Environment."
],
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2002691587",
"2033526593"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system also has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and over the history of execution; (3) a prototype trace analysis for object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event was recorded due to the invocation of a method called main. The term l('Example.java', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
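Putting these pieces together, the event just described would look roughly like the following JEL fact; this is a reconstruction from the prose and from the JEL grammar in Table 2, not a verbatim excerpt of Fig. 2.

% Event 1 on the main thread: the call to Example.main(String[]),
% whose body starts at line 20 of Example.java.
event(1, main, methodcall(l('Example.java', 20),
                          c('Example'),
                          main,
                          [o('java.lang.String[]', 641)])).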
To facilitate trace analysis JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First the user asks about the environment where the exception was thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The next question is where the null pointer originated. Q2 requests the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial calls to the main method and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2 it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. When looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending them to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems described in [6,7] and JyLog [11] implement similar recording techniques based on logging in XML; JEL, in contrast, describes the program trace as a set of Prolog facts. JEL can easily be extended to include a sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and a thread, in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance on which the method was invoked, the method name, and the method arguments. 2. Method exit event records similar information to the method call event, in addition to the id of the corresponding method entry event and the returned value instead of the arguments. 3. Set Field event records the source location where the field was set to a new value, the instance or the class where the field is declared, and the new value. 4. Data Structure event is recorded after a method entry event, a method exit event, or a set field event if the type of the field being assigned a new value is a data structure; the data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure. 5. Step event describes the source code location, together with the names and values of the visible local variables, at each step; each step corresponds to the execution of a source line. 6. Exception event records the source code location, the exception instance, the exception message, and the catch location if the exception is caught, or the uncaught keyword otherwise. 7. Thread Start and Thread Death events record the starting or ending of a thread; the thread group is also recorded. 8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of the JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ) , name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ) , name , value ')'
set-field ::= setfield '(' location , ( instance | class ) , name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer by a process of manually stepping through the code, setting break points, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history and are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries about the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message can have no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to determine the execution path leading to the execution of a specific event, or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call_chain rule in Prolog.
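Fig. 4 itself is not reproduced in this text; the following is a minimal sketch of what rules in the spirit of any_enclosing_method and call_chain might look like over the JEL facts. The event/3, methodcall/4 and methodexit/5 shapes follow Table 2; the bodies are our own reconstruction (assuming normal termination and ignoring thread interleaving), so the actual rules of Fig. 4 may differ.

% An enclosing method of event E is any call whose call/exit ids bracket E.
any_enclosing_method(E, CallId, ExitId) :-
    event(CallId, _, methodcall(_, _, _, _)),
    event(ExitId, _, methodexit(CallId, _, _, _, _)),
    CallId < E, E < ExitId.

% The call chain is the list of enclosing call ids, ordered from the
% outermost (smallest id) to the innermost.
call_chain(E, OutList) :-
    findall(CallId, any_enclosing_method(E, CallId, _), Ids),
    msort(Ids, OutList).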
The rule any_enclosing_method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, in which event e was executed. Let id_c(m1), id_e(m1), id_c(m2), id_e(m2), and id(e) be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program terminated normally. Note that id_c(m1) < id_c(m2) < id(e) < id_e(m2) < id_e(m1). Event e is enclosed in method m2, which is enclosed in method m1; therefore, methods m1 and m2 are both enclosing methods. According to the rule any_enclosing_method, CallId = id_c(m1), ExitId = id_e(m1) or CallId = id_c(m2), ExitId = id_e(m2). The call chain leading to the execution of a given event therefore consists of all the enclosing methods of that event. According to the call_chain rule, OutList = [id_c(m1), id_c(m2)]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment of an event. The instance or the class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method of the event. Fig. 5 shows the rule where_exception_is_thrown for a given thread. Once an exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The rule where specifies that the enclosing environment of a given event is the first call in the reversed call chain produced by the full_detail_call_chain rule, which reverses the list of ids obtained from the call_chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and the public and protected member fields of its super classes. The rule object_state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation event is recorded as a method call to <init>. The domain of the object_state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. The member fields of a class are recorded as a memberfields event. The object_state_helper rule specifies the field value contributing to the desired state as the last value in the field's history between ids S and E. The rule instance_field_history is discussed in the next section.
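The object_state reconstruction just described can be sketched as a self-contained Prolog rule as follows. The '<init>' constructor name, the shape of the memberfields fact, and the helper names are assumptions made for illustration; fields that are never assigned between S and E are simply omitted in this sketch.

% State of object o(OName, OId) at event id E: for every member field of its
% class, the last value set between the instantiation event S and E.
object_state(OName, OId, E, State) :-
    event(S, _, methodcall(_, o(OName, OId), '<init>', _)),
    S =< E,
    event(_, _, memberfields(c(OName), Fields)),
    findall(F-V,
            ( member(F, Fields),
              field_last_value(S, E, OName, OId, F, V) ),
            State).

% Last value assigned to field F of the object within [S, E].
field_last_value(S, E, OName, OId, F, V) :-
    findall(Id-Val,
            ( event(Id, _, setfield(_, o(OName, OId), F, Val)),
              Id >= S, Id =< E ),
            History),
    History \== [],
    last(History, _-V).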
For example, let id_init and id_end be the boundaries of the search domain, and let f_1 and f_2 be fields of the desired object. Suppose the histories of f_1 and f_2 are {(id_i, f_1, v_i), ..., (id_j, f_1, v_j)} and {(id_n, f_2, v_n), ..., (id_m, f_2, v_m)} respectively, where v_k is the value assigned to the field and id_k identifies the event at which it was assigned. Note that id_init < id_i, id_j, id_n, id_m <= id_end, with id_i < id_j and id_n < id_m. The object's state is then {(id_j, f_1, v_j), (id_m, f_2, v_m)}. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, i.e., specific requirements, in order to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes this idea so that requirements can be imposed on any execution event, not only on method calls as in DBC. Three factors can affect the execution of a given event within its enclosing method: (1) the values of the arguments; (2) the values returned by the method calls that precede the event within the same enclosing method (Fig. 7 shows the pre_event_called_methods rule); and (3) the values of local variables before the execution of the event. These three factors are therefore considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of executing an event on the enclosing method can appear in three areas: (1) the value returned by the enclosing method; (2) the methods that were called after the execution of the event within the same enclosing method; and (3) the values of local variables after the execution of the event. DBC cannot directly specify that some other method needs to be called before or after a given method. Having recorded the execution history, it is possible to inspect whether certain methods were called before or after a given event.
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which erroneous behavior is suspected to occur. Such a feature is useful when dealing with a large program trace because it allows the programmer to filter out irrelevant data. Gathering Data. Eisenstadt [5], in his study of how bugs were found in 51 cases gathered from professional programmers, found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data about the execution of the program. JavaTA can automatically gather data about the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; and (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance_field_history. The rule specifies the segment of the history between ids S and E for an instance field F of object OName whose unique id is OId. The rule instance_field_value specifies that a value of a given field can be obtained from a set field event provided that its id lies between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike the method calls in a call chain, in which the called method depends on the caller.
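The instance_field_history rule of Fig. 8 could be sketched along the following lines; the setfield/4 shape follows the JEL grammar, while the rule bodies are our own reading of the prose rather than the actual figure.

% A single history entry: field F of o(OName, OId) was set to Value at Id,
% with S =< Id =< E.
instance_field_value(S, E, OName, OId, F, Id-Value) :-
    event(Id, _, setfield(_, o(OName, OId), F, Value)),
    Id >= S, Id =< E.

% The full history is the list of all such entries in id order.
instance_field_history(S, E, OName, OId, F, History) :-
    findall(Entry, instance_field_value(S, E, OName, OId, F, Entry), Entries),
    msort(Entries, History).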
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to that behind the Emacs system, which allows the user to add macros dynamically in order to extend the system's functionality. Composed queries ensure the flexibility and extendibility of our framework. Allowing the user to add queries dynamically results in a general-purpose static analyzer for program traces. However, we do not yet have experimental data to support this claim, especially for large program traces or for more complicated analyses.
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view of a specific task as a finite automaton. The debugger allows the programmer to inspect the task's execution progress. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that JavaTA uses postmortem analysis, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique for locating errors. The technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the differences between data structure contents, call chains, and more. Comparative queries can also be applied to isolating errors introduced during software maintenance by posing the same query on two runs obtained from two versions and comparing the results.
Save Query Answers. Computing a query over a large program history is costly and time-consuming. In many debugging scenarios the programmer may want to go back and examine the results of previous queries, or to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] provides data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with evaluating queries over a large program trace.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. We are currently working on a programmable tool interface to JavaTA's features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations. We plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics, in terms of both space and time, of the various types of queries, and we are interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic based approach to debugging Java programs. In contrast with traditional debugging we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of object-oriented programs debugging.
|
The omniscient debugger (ODB) developed by Bil Lewis @cite_3 aims at easing the navigation of the program trace in both the forwards and backwards directions. ODB obtains the program trace by load-time instrumentation of the byte code of the debugged program. Execution events are recorded while the program runs; once it finishes, a program state display is provided. ODB uses static trace analysis and the program trace is kept in memory. Lewis proposed three techniques to reduce the size of the recorded program trace: (1) delete old events; (2) allow the programmer to exclude a set of classes and methods from instrumentation and recording; (3) allow a recording interval to be specified. The recording technique applied in the ODB is fast and efficient.
|
{
"abstract": [
"By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by going backwards in time,'' vastly simplifying the process of debugging. An implementation of this idea, the Omniscient Debugger,'' is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of going backwards in time,'' the GUI which presents a global view of the program state and has a formal notion of navigation through time,'' and the integration with an event analyzer."
],
"cite_N": [
"@cite_3"
],
"mid": [
"1673079227"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system also has the ability to filter out system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc) during the execution of a Java program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and over the history of execution; (3) a prototype trace analysis for object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event was recorded due to the invocation of a method called main. The term l('Example.java', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First the user asks about the environment where the exception was thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The next question is where the null pointer originated. Q2 requests the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial calls to the main method and the constructor are omitted for simplicity of presentation. By investigating the argument passed to m2 it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. When looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending them to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of only one component: the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component: the User Interface, which interacts with the Tools Interface and the user. The systems described in [6,7] and JyLog [11] implement similar recording techniques based on logging in XML; JEL, in contrast, describes the program trace as a set of Prolog facts. JEL can easily be extended to include a sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and a thread, in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance on which the method was invoked, the method name, and the method arguments. 2. Method exit event records similar information to the method call event, in addition to the id of the corresponding method entry event and the returned value instead of the arguments. 3. Set Field event records the source location where the field was set to a new value, the instance or the class where the field is declared, and the new value. 4. Data Structure event is recorded after a method entry event, a method exit event, or a set field event if the type of the field being assigned a new value is a data structure; the data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure. 5. Step event describes the source code location, together with the names and values of the visible local variables, at each step; each step corresponds to the execution of a source line. 6. Exception event records the source code location, the exception instance, the exception message, and the catch location if the exception is caught, or the uncaught keyword otherwise. 7. Thread Start and Thread Death events record the starting or ending of a thread; the thread group is also recorded. 8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of the JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ) , name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ) , name , value ')'
set-field ::= setfield '(' location , ( instance | class ) , name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace. In traditional debugging, data collection is performed by the programmer by a process of manually stepping through the code, setting break points, and inspecting objects. In this section, the program trace is recorded as a Prolog database. The database is populated by entries corresponding to execution events which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized under three categories: queries on specific events, queries on execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history and are illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries about the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void. A message can have no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to determine the execution path leading to the execution of a specific event, or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call_chain rule in Prolog.
The rule any_enclosing_method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, in which event e was executed. Let id_c(m1), id_e(m1), id_c(m2), id_e(m2), and id(e) be the ids of the following events: the call to m1, the exit from m1, the call to m2, the exit from m2, and the execution of event e, respectively, assuming that the program terminated normally. Note that id_c(m1) < id_c(m2) < id(e) < id_e(m2) < id_e(m1). Event e is enclosed in method m2, which is enclosed in method m1; therefore, methods m1 and m2 are both enclosing methods. According to the rule any_enclosing_method, CallId = id_c(m1), ExitId = id_e(m1) or CallId = id_c(m2), ExitId = id_e(m2). The call chain leading to the execution of a given event therefore consists of all the enclosing methods of that event. According to the call_chain rule, OutList = [id_c(m1), id_c(m2)]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment. An environment is either an instance object together with an instance method invocation, or a class together with a static method invocation. This environment represents the enclosing environment of an event. The instance or the class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method of the event. Fig. 5 shows the rule where_exception_is_thrown for a given thread. Once an exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The rule where specifies that the enclosing environment of a given event is the first call in the reversed call chain produced by the full_detail_call_chain rule, which reverses the list of ids obtained from the call_chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and the public and protected member fields of its super classes. The rule object_state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation event is recorded as a method call to <init>. The domain of the object_state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. The member fields of a class are recorded as a memberfields event. The object_state_helper rule specifies the field value contributing to the desired state as the last value in the field's history between ids S and E. The rule instance_field_history is discussed in the next section.
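As a complement to the call-chain rules, the following is a self-contained sketch of how the enclosing environment of an uncaught exception could be located. Since the enclosing methods of an uncaught exception never exit normally, the sketch looks for calls that are still open at the exception event; the predicate names and fact shapes are assumptions based on the JEL grammar, not the rules of Fig. 5.

% A call is still open at event E on Thread if it started before E and has
% not exited before E.
open_call_at(E, Thread, CallId) :-
    event(CallId, Thread, methodcall(_, _, _, _)),
    CallId < E,
    \+ ( event(X, Thread, methodexit(CallId, _, _, _, _)), X < E ).

% The enclosing environment of the (unique) uncaught exception in Thread is
% the innermost open call, i.e. the one with the largest id.
where_exception_is_thrown(Thread, CallEvent) :-
    event(E, Thread, exception(_, _, _, uncaught)),
    findall(C, open_call_at(E, Thread, C), Cs),
    max_list(Cs, Innermost),
    event(Innermost, Thread, CallEvent).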
For example, let id_init and id_end be the boundaries of the search domain, and let f_1 and f_2 be fields of the desired object. Suppose the histories of f_1 and f_2 are {(id_i, f_1, v_i), ..., (id_j, f_1, v_j)} and {(id_n, f_2, v_n), ..., (id_m, f_2, v_m)} respectively, where v_k is the value assigned to the field and id_k identifies the event at which it was assigned. Note that id_init < id_i, id_j, id_n, id_m <= id_end, with id_i < id_j and id_n < id_m. The object's state is then {(id_j, f_1, v_j), (id_m, f_2, v_m)}. Queries on Method State. In design-by-contract (DBC) [17][18][19] the client has to meet preconditions, i.e., specific requirements, in order to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes this idea so that requirements can be imposed on any execution event, not only on method calls as in DBC. Three factors can affect the execution of a given event within its enclosing method: (1) the values of the arguments; (2) the values returned by the method calls that precede the event within the same enclosing method (Fig. 7 shows the pre_event_called_methods rule); and (3) the values of local variables before the execution of the event. These three factors are therefore considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of executing an event on the enclosing method can appear in three areas: (1) the value returned by the enclosing method; (2) the methods that were called after the execution of the event within the same enclosing method; and (3) the values of local variables after the execution of the event. DBC cannot directly specify that some other method needs to be called before or after a given method. Having recorded the execution history, it is possible to inspect whether certain methods were called before or after a given event.
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which erroneous behavior is suspected to occur. Such a feature is useful when dealing with a large program trace because it allows the programmer to filter out irrelevant data. Gathering Data. Eisenstadt [5], in his study of how bugs were found in 51 cases gathered from professional programmers, found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data about the execution of the program. JavaTA can automatically gather data about the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; and (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance_field_history. The rule specifies the segment of the history between ids S and E for an instance field F of object OName whose unique id is OId. The rule instance_field_value specifies that a value of a given field can be obtained from a set field event provided that its id lies between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike the method calls in a call chain, in which the called method depends on the caller.
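For the call tree, one possible self-contained sketch is shown below. It treats a call C as a direct callee of call P when P is the innermost call enclosing C; the helpers and their names are assumptions for illustration, thread interleaving is ignored, and normal termination is assumed so that every call has a matching exit.

% enclosing_call(E, C, X): call C with matching exit X encloses event E.
enclosing_call(E, C, X) :-
    event(C, _, methodcall(_, _, _, _)),
    event(X, _, methodexit(C, _, _, _, _)),
    C < E, E < X.

% The immediate (innermost) enclosing call is the one with the largest id.
immediate_enclosing_call(E, C) :-
    findall(Ci, enclosing_call(E, Ci, _), Cs),
    max_list(Cs, C).

% call_tree(P, tree(P, SubTrees)): direct callees of P, expanded recursively.
call_tree(P, tree(P, SubTrees)) :-
    findall(T,
            ( event(C, _, methodcall(_, _, _, _)),
              immediate_enclosing_call(C, P),
              call_tree(C, T) ),
            SubTrees).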
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code. An advanced developer would insert breakpoints using a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to that behind the Emacs system, which allows the user to add macros dynamically in order to extend the system's functionality. Composed queries ensure the flexibility and extendibility of our framework. Allowing the user to add queries dynamically results in a general-purpose static analyzer for program traces. However, we do not yet have experimental data to support this claim, especially for large program traces or for more complicated analyses.
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view of a specific task as a finite automaton. The debugger allows the programmer to inspect the task's execution progress. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails the login process fails; otherwise the user is allowed to log in. One important difference is that JavaTA uses postmortem analysis, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique for locating errors. The technique works as follows. The output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the differences between data structure contents, call chains, and more. Comparative queries can also be applied to isolating errors introduced during software maintenance by posing the same query on two runs obtained from two versions and comparing the results.
Save Query Answers. Computing a query over a large program history is costly and time-consuming. In many debugging scenarios the programmer may go back to examine the results of previous queries or may want to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] provides a similar form of data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with evaluating queries over large program traces.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. Currently we are working on a programmable tool interface to JavaTA's features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations, and we plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics, in terms of both the space and the time needed for various types of queries, and we are interested in quantifying the overhead of extracting the program trace.
| 4,060 |
cs0701107
|
2949624117
|
This paper presents a logic-based approach to debugging Java programs. In contrast with traditional debugging, we propose a debugging methodology for Java programs using logical queries on individual execution states and also over the history of execution. These queries were arrived at by a systematic study of errors in object-oriented programs in our earlier research. We represent the salient events during the execution of a Java program by a logic database, and implement the queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program, such as the calling sequence that results in a certain outcome, the state of an object at a particular execution point, etc. Our system also provides the ability to compose new queries during a debugging session. We believe that logic programming offers a significant contribution to the art of debugging object-oriented programs.
|
WhyLine @cite_4 is an interrogative debugger for the Alice programming environment. It allows the user to ask why a given event did or did not occur. The WhyLine gives the answer in the form of an execution path that leads, or was supposed to lead, to the execution of the given event; the path is annotated with control-flow information. The comparison among these systems is based on four features: (1) automatic program trace extraction; (2) program trace navigation features such as forward and backward stepping, breakpoints, and conditional breakpoints; (3) query language support; and (4) built-in trace analyses, including a set of the most recurring debugging queries or abstract views of program behavior. The table shows the comparison among the 10 systems. JavaTA currently does not support trace navigation; however, it is straightforward to implement. Opium supports features similar to JavaTA's, especially the built-in trace analyses; however, these analyses are hard to compare, since the two tools target different programming paradigms, namely declarative and imperative, respectively.
|
{
"abstract": [
"Debugging is still among the most common and costly of programming activities. One reason is that current debugging tools do not directly support the inquisitive nature of the activity. Interrogative Debugging is a new debugging paradigm in which programmers can ask why did and even why didn't questions directly about their program's runtime failures. The Whyline is a prototype Interrogative Debugging interface for the Alice programming environment that visualizes answers in terms of runtime events directly relevant to a programmer's question. Comparisons of identical debugging scenarios from user tests with and without the Whyline showed that the Whyline reduced debugging time by nearly a factor of 8, and helped programmers complete 40 more tasks."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2157922094"
]
}
|
JavaTA: A Logic-based Debugger for Java
|
This paper shows some of the benefits of applying logic programming techniques in the debugging of object-oriented programs. Debugging object-oriented programs has traditionally been a procedural process in that the programmer has to proceed step-by-step and object-by-object in order to uncover the cause of an error. In this paper, we propose a logic-based approach to the debugging of object-oriented programs in which debugging data can be collected via higher level logical queries. We represent the salient events during the execution of a Java program by a logic database, and implement these queries as logic programs. Such an approach allows us to answer a number of useful and interesting queries about a Java program.
To illustrate our approach, note that a crucial aspect of program understanding is observing how variables take on different values during execution. The use of print statements is the standard procedural way of eliciting this information. This is a classic case of the need to query over execution history. Other examples include queries to find which variable has a certain value; the calling sequence that results in a certain outcome; whether a certain statement was executed; etc. We arrived at a set of queries by a study of the types of errors that arise in object-oriented programs [8].
We propose two broad categories of queries in this paper: (i) queries over individual execution states and (ii) queries over the entire history of execution, or a subset of the history. Our proposed method recognizes the need to query subhistories; such a capability is especially useful when debugging large-scale software whose program trace is composed of millions of execution events. Our system has the ability to filter system objects so that a programmer may focus on the objects explicitly instantiated from user-defined classes.
Our current implementation, called JavaTA, takes a Java program as input and builds a logic database of salient events (method call, return, assignment, object creation, etc.) during the execution of the program using the JPDA interface (Java Platform Debugger Architecture). Our approach to recording the history of changes is incremental in nature, i.e., when a variable is assigned, we save only the new value assigned to the variable. Thus, queries about previous execution states involve some state reconstruction. A textual interface allows the user to pose a number of queries, as detailed in section 4.
Thus the contributions of our paper are: (1) a logic-based approach to debugging object-oriented programs; (2) the provision of queries over individual states and over the history of execution; (3) a prototype trace-analysis tool for object-oriented programs.
The remainder of the paper is organized as follows. Section 2 presents an example, called the 'traveling null pointer', in order to illustrate our overall approach. Section 3 presents the architecture of JavaTA, along with the Java Event Log language. Section 4 outlines the principles of our debugging methodology. Section 5 surveys closely related research and compares it with our work. Section 6 presents conclusions and areas of further research.
Overview of Logic-based Debugging
This section provides an overview of our approach to logic-based debugging with an example. We present the 'traveling null pointer' example, which illustrates a bug pattern in which a method call incorrectly returns a null pointer and the client of that method propagates the null pointer through a call chain, and, finally, a null pointer exception is thrown when the client code of the last call in the chain tries to de-reference the null pointer. In other words, the code that originates the null pointer and the code that de-references that pointer are far apart spatially and temporally. Fig. 1 illustrates the traveling null pointer defect pattern in Java code. The instance method doSomeThing in FarAWayClass returns a null pointer due to erroneous conditions. When this program is executed it reports a null pointer exception at line 14.
JavaTA generates a trace for the example program. (We use the terms 'trace' and 'execution history' interchangeably in this paper.) The trace includes 17 events, shown in Fig. 2 in a Prolog-based description language for program traces. For example, the second event recorded has the unique id 1 and belongs to the main thread. The event was recorded due to the invocation of a method called main. The term l('Example.java', 20) indicates that the method is defined in the Example.java file on line 20. The term c('Example') means that the method is a class (static) method of the Example class. The main method takes an instance of an array of strings as its only argument. An instance, or object, is described by its class name and a unique id, as in the term o('java.lang.String[]', 641).
To facilitate trace analysis, JavaTA provides a set of predefined queries. Table 1 shows the three predefined queries used in the debugging session. First, the user asks about the environment where the exception is thrown, as in Q1. A1 indicates that the enclosing method is mN, whose single argument is null, and that the call to the enclosing method occurred at event id 14. The next question is where the null pointer originated. Q2 requests the full-detail call chain leading to event id 14. A2 shows that method m1 called method m2, which called method mN. The initial call to method main and to the constructor is omitted for simplicity of presentation. By inspecting the argument passed to m2, it is clear that it has a null value. Method m2 is called from m1, and m1 is called at event id 4. When looking at the source code of method m1, the programmer concludes that the local variable 'result' holds a null value, since it is passed as the argument to m2. The Prolog code for the three queries referenced in Table 1 is shown in section 4. Given that these are frequently used queries in object-oriented program debugging, and noting that the average Java programmer may be unfamiliar with Prolog, JavaTA provides these queries as built-in primitives. Several additional useful debugging queries and their Prolog implementations are also illustrated in section 4.
JavaTA Architecture
We have implemented a prototype of the JavaTA framework as a distributed system. Fig. 3 shows the main tiers and components of the framework. The architecture of JavaTA is composed of four tiers. The first tier consists of three components: the JPDA, the Prolog server, and the built-in primitives. JPDA, the Java Platform Debugger Architecture [10], is designed as a distributed system that can interface with a JVM running on the same machine or on a different machine. Prolog Beans [20] is a Prolog server that can be interfaced with Java or .Net. The client-server architecture of Prolog Beans allows the server to be a component of a distributed system. Prolog Beans was designed to handle large applications.
The second tier is composed of two components: the Logger and the Query Manager. Once the Logger receives a Java program, it starts a JVM and subscribes for the desired events with the JPDA. It is also possible (but not implemented in the current prototype) for the Logger to interact with an already running JVM. The Query Manager is responsible for constructing Prolog goals and sending them to the Prolog Beans server. Once the Query Manager receives answers, it forwards them back to the Tools Interface. The third tier is composed of a single component, the Tools Interface, which is a facade for the JavaTA framework. The fourth tier has only one component, the User Interface, which interacts with the Tools Interface and the user. The systems in [6,7] and JyLog [11] implement similar recording techniques based on logging in XML; JEL, in contrast, describes the program trace as a set of Prolog facts. JEL can easily be extended to include a more sophisticated description of static and dynamic information about a given program. Table 2 shows part of the BNF grammar of JEL. The basic construct in JEL is the event term. Each event has a unique id and a thread, in addition to other event-specific information. Objects are identified by their class and a unique id. The implemented prototype supports the description of the following nine events.
1. Method call event records the source code location of the first executable line of the method body, the class or the instance that this method was invoked on, the method name, and the method arguments.
2. Method exit event records similar information to the method call event, in addition to the id of the corresponding method entry event and the returned value instead of the arguments.
3. Set field event records the source location where the field was set to a new value, the instance or the class where this field is declared, and the new value.
4. Data structure event is recorded after a method entry event, method exit event, or set field event if the type of the field being assigned a new value is a data structure. The data structure can be an array or a Collection instance. The event describes the source code information of the event that caused the recording of the data structure.
5. Step event describes the source code location in addition to the names and values of the visible local variables at each step. Each step corresponds to the execution of a source line.
6. Exception event records the source code location, the exception instance, the exception message, and the catch location if the exception is caught, or the uncaught keyword otherwise.
7. Thread start and thread death events record the starting or the ending of a thread. The thread group is also recorded.
8. Member fields event records information regarding the member fields of a given class.
Table 2. Part of the JEL BNF
events ::= event*
event ::= event '(' id , thread , execution-event ')' '.'
execution-event ::= member-fields | method-call | method-exit | set-field | data-structure | exception | step | thread-start | thread-death
method-call ::= methodcall '(' location , ( instance | class ) , name , arguments ')'
method-exit ::= methodexit '(' id , location , ( instance | class ) , name , value ')'
set-field ::= setfield '(' location , ( instance | class ) , name , value ')'
data-structure ::= datastructure '(' location , contents ')'
exception ::= exception '(' location , instance , message , ( location | uncaught ) ')'
step ::= step '(' location , local-variable-list ')'
member-fields ::= memberfields '(' class , member-fields ')'
thread-start ::= threadstart '(' thread-group ')'
thread-death ::= threaddeath '(' thread-group ')'
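To make the JEL encoding more concrete, the sketch below mirrors a few events as Python records rather than Prolog facts; the field names and values here are illustrative assumptions, and the same record shape is reused by the query sketches later in this section.

# Illustrative mirror of JEL-style events as Python dictionaries; JavaTA's
# actual trace is a set of Prolog facts of the form event(Id, Thread, ...).
trace = [
    {'eid': 1, 'thread': 'main', 'kind': 'methodcall',
     'loc': ('Example.java', 20), 'class': 'Example', 'name': 'main',
     'args': [('java.lang.String[]', 641)]},
    {'eid': 2, 'thread': 'main', 'kind': 'setfield',
     'loc': ('Example.java', 22), 'owner': ('instance', 'Example', 640),
     'name': 'count', 'value': 0},
    {'eid': 3, 'thread': 'main', 'kind': 'methodexit',
     'call_id': 1, 'loc': ('Example.java', 25), 'class': 'Example',
     'name': 'main', 'value': None},
]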
Queries on Program Trace
The debugging process involves three phases: (i) formulating a hypothesis about the root of the error; (ii) collecting program-specific data that is pertinent to the hypothesis; and (iii) analyzing the collected data to prove or disprove the hypothesis. The difference between JavaTA and traditional debugging lies in their respective approaches to the data collection phase (ii). In JavaTA, data collection is performed by high-level queries on the trace; in traditional debugging, it is performed by the programmer through a process of manually stepping through the code, setting breakpoints, and inspecting objects. In this section, the program trace is a Prolog database populated by entries corresponding to execution events, which are specified by JEL. While it is possible to pre-process this database in order to construct auxiliary structures such as call trees, we do not resort to such optimizations here, but present a relatively straightforward implementation of the debugging primitives directly in terms of the event database. The debugging primitives, or predefined queries, provided by JavaTA can be organized into three categories: queries on specific events, queries on the execution history, and query management. Section 4.1 discusses queries on specific events. There are four kinds of queries over the execution history, illustrated in section 4.2. Query management and programmability techniques are discussed in section 4.3.
Queries on Program State
Group Method Calls According to Call Chain. Compared with the traditional procedural paradigm, the object-oriented paradigm engenders the use of many small methods and greater method interaction. Thus, posing queries regarding the interaction between objects is essential in the debugging process and in the understanding of object-oriented programs in general. A method call can be viewed as a message whose content is the passed arguments. Each message has a response, which is the returned value or void; a message has no response if it exits abnormally, i.e., throws an exception. A call chain can serve as a way to know the execution path leading to the execution of a specific event, or as a way to inspect argument values propagated through the chain of calls. Fig. 4 illustrates the call chain rule in Prolog.
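JavaTA expresses this rule in Prolog (Fig. 4); purely as an illustration, the following Python sketch computes the same information over a list of event records, assuming each methodexit record stores call_id, the id of its matching method call (as specified by JEL).

# Sketch: the enclosing methods of an event are the calls whose id precedes
# the event and whose matching exit follows it; the call chain is this list
# ordered from the outermost call inwards.
def enclosing_calls(trace, event_id):
    exits = {e['call_id']: e['eid'] for e in trace if e['kind'] == 'methodexit'}
    return sorted(cid for cid, xid in exits.items() if cid < event_id < xid)

def where(trace, event_id):
    # the innermost enclosing call, i.e. the environment of the event
    chain = enclosing_calls(trace, event_id)
    return chain[-1] if chain else None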
The rule any enclosing method specifies any enclosing method for a given event. For example, suppose method m1 called method m2, where event e was executed. Let id^c_m1, id^e_m1, id^c_m2, id^e_m2, and id_e be the ids of the following events: the call of m1, the exit of m1, the call of m2, the exit of m2, and the execution of event e, respectively, assuming that the program has terminated normally. Note that id^c_m1 < id^c_m2 < id_e < id^e_m2 < id^e_m1. Event e is enclosed in method m2, which is in turn enclosed in method m1; therefore, methods m1 and m2 are considered enclosing methods. According to the rule any enclosing method, either CallId = id^c_m1 and ExitId = id^e_m1, or CallId = id^c_m2 and ExitId = id^e_m2. Therefore, the call chain leading to the execution of a given event consists of all the enclosing methods for that event; according to the call chain rule, OutList = [id^c_m1, id^c_m2]. Query Where an Event Occurred. In object-oriented programming, execution events occur within an environment: either an instance object and an instance method invocation, or a class and a static method invocation. This environment represents the enclosing environment for the event. The instance or the class is referred to as the enclosing instance or enclosing class, and the method is referred to as the enclosing method for the event. Fig. 5 shows the rule where exception is thrown for a given thread. Once an exception is thrown, the thread in which the exception occurred is terminated; therefore, there is at most one uncaught exception per thread. The rule where specifies that the enclosing environment for a given event is the first call in the reversed call chain produced by the full detail call chain rule, which reverses the list of ids obtained from the call chain rule and extracts the associated events from the database. Query the State of an Object. Querying the state of an object is concerned with the encapsulation aspect of object-oriented programming. The state of an object is captured in the values of its member fields and the public and protected member fields of its superclasses. The rule object state in Fig. 6 illustrates how the state of the object OName whose id is OId can be reconstructed at event id E. An object instantiation event is recorded as a method call to <init>. The domain of the object state rule is the segment of the program history between id S, when the instantiation occurred, and id E, which is specified by the user. The member fields of a class are recorded as a memberfields event. The object state helper rule specifies each field value contributing to the desired state as the last value in that field's history between id S and id E. The rule instance field history is discussed in the next section.
For example, let id_init and id_end be the boundaries of the search domain, and let f_1 and f_2 be fields of the desired object. Suppose that the histories of fields f_1 and f_2 are {(id_i, f_1, v_i), ..., (id_j, f_1, v_j)} and {(id_n, f_2, v_n), ..., (id_m, f_2, v_m)} respectively, where v_k and id_k stand for the value of the field and the event id at which it was assigned. Note that id_init < id_i, id_j, id_n, id_m <= id_end, with id_i < id_j and id_n < id_m. Then the object's state is {(id_j, f_1, v_j), (id_m, f_2, v_m)}, i.e., the last recorded value of each field. Queries on Method State. In design-by-contract (DBC) [17][18][19], the client has to meet preconditions, or specific requirements, in order to be able to call a certain method. These requirements are usually constraints on the arguments and the state. Our method generalizes the requirement to be imposed on any execution event, not only on method calls as in DBC. Three factors can affect the execution of a given event within the enclosing method: (1) the argument values; (2) the return values of all method calls preceding the event within the same enclosing method (Fig. 7 shows the pre event called methods rule); and (3) the values of local variables before the execution of the event. These three factors are therefore considered candidate queries.
Analogously, the post-condition in DBC is the effect that the called method promises upon its correct completion. Our methodology generalizes this idea to all executed events. The effect of the execution of an event on the enclosing method can appear in three areas: (1) the returned value of the enclosing method; (2) the methods that have been called after the execution of the event within the same enclosing method; and (3) the values of local variables after the execution of the event. DBC is not capable of directly specifying that some other method needs to be called before or after a given method; having recorded the execution history, however, it is possible to inspect whether a certain method has been called before or after a given event.
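A hypothetical sketch of these two generalized checks, assuming the enclosing method's call and exit event ids are already known (for instance, from the call-chain query above); JavaTA implements the corresponding pre event called methods rule in Prolog (Fig. 7).

# Sketch: method calls made before/after a given event inside the same
# enclosing method, identified purely by event-id ordering.
def pre_event_called_methods(trace, event_id, encl_call_id, encl_exit_id):
    return [e for e in trace if e['kind'] == 'methodcall'
            and encl_call_id < e['eid'] < event_id < encl_exit_id]

def post_event_called_methods(trace, event_id, encl_call_id, encl_exit_id):
    return [e for e in trace if e['kind'] == 'methodcall'
            and encl_call_id < event_id < e['eid'] < encl_exit_id]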
Queries over Execution History
Execution History Subset. The programmer should have the ability to focus on an interval of the execution history in which an erroneous behavior is suspected to occur. Such a feature is useful in dealing with large program traces, since it allows the programmer to filter out irrelevant data. Gathering Data. In his study of how bugs were found in 51 cases gathered from professional programmers, Eisenstadt [5] found that programmers used the following four techniques to locate the root of a defect: data gathering, code inspection, expert help, and controlled experiments. In 27 cases the bugs were found by gathering data regarding the execution of the program. JavaTA can automatically gather data regarding the following: (1) member field value history;
(2) local variable value history, which is important in understanding loop execution;
(3) history of arguments of method calls; (4) history of return values of method calls; (5) history of the contents of data structures; (6) all class instances and their states, which is important in understanding user-defined data structures; (7) thread status, such as running and exited threads. Fig. 8 shows the rule for instance field history. The rule specifies a segment of the history between id S and id E for an instance field F of object OName whose unique id is OId. The rule instance field value specifies that a value of a given field can be obtained from a setfield event provided that its id is between S and E. Call Tree. Grouping method calls according to a call tree is motivated by the need to depict interactions among objects. A call tree can be defined as the methods called by the method of interest. The method calls involved in a call tree collaborate in achieving one task. Those methods are not necessarily dependent on each other, unlike the method calls in a call chain, in which the called method depends on the caller.
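For instance, the member-field history of item (1) corresponds to the rule of Fig. 8; a hypothetical Python sketch of the same computation is shown below, assuming setfield records carry the owning object and the newly assigned value (JavaTA itself evaluates this as a Prolog rule).

# Sketch mirroring the instance field history rule: the (event id, value)
# pairs recorded for field F of object (OName, OId) between ids S and E.
def instance_field_history(trace, oname, oid, field, s, e):
    return [(ev['eid'], ev['value']) for ev in trace
            if ev['kind'] == 'setfield'
            and ev.get('owner') == ('instance', oname, oid)
            and ev.get('name') == field
            and s <= ev['eid'] <= e]

# The object-state query of section 4.1 then simply keeps the last pair of
# each field's history.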
Query about Statement Execution. One of the most recurring questions in the debugging process is whether a certain statement has been executed or not. Novice programmers answer such a question by inserting multiple print statements in the program's code; an advanced developer would instead set breakpoints in a traditional debugger to verify whether a given statement has been executed. The answer to this question is either yes or no. We propose the following seven queries. (1) Was a given conditional statement executed? (2) Was a given method called? (3) Was a member field assigned a given value? (4) Is there an instance of a specific class? (5) Was a specific exception caught? (6) Is a given thread still running? (7) Has a given thread exited?
Programmability and Query Management
Compose and Save Queries. The ability to compose queries provides a way to adapt queries to recurring bug patterns as well as to the individual needs of the developer. The idea is similar to the Emacs system, which lets the user add functionality dynamically by defining macros. Composed queries ensure the flexibility and extensibility of our framework: allowing the user to add queries dynamically results in a general-purpose static analyzer for program traces. However, we do not yet have experimental data to support this claim, especially on large program traces or for more complicated analyses.
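As a small illustration of such a composed query, the sketch below defines a trace filter tailored to the traveling-null-pointer pattern of section 2; in JavaTA this would be added as a Prolog rule, and the Python form (and its record fields) is only an assumed equivalent.

# Composed query sketch: which method calls ever received null as an argument?
def calls_with_null_argument(trace):
    return [(e['eid'], e.get('name')) for e in trace
            if e['kind'] == 'methodcall' and None in e.get('args', [])]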
Liang and Kai [15] developed a scenario-driven debugger. The idea is to allow the programmer to model a behavior view for a specific task as a finite automaton; the debugger then allows the programmer to inspect the task's execution progress. A similar capability can be added to JavaTA by composing a Prolog rule. Fig. 9 shows the login Prolog rule used to inspect the execution of the login task. The original example of the login task and its behavior view is illustrated in Liang and Kai's paper [15]. A standard login task is composed of (i) obtaining the user name, (ii) obtaining the password, and (iii) verifying the user name and the password. If any step fails, the login process fails; otherwise the user is allowed to log in. One important difference is that the analysis used in JavaTA is postmortem, whereas the scenario-driven debugger uses on-line analysis. Comparing Query Results. Eisenstadt [5] describes "Dump & Diff" as a technique for locating errors. It works as follows: the output of print statements is saved to two text files corresponding to two different executions; the two files are then compared using a source-compare "diff" utility, which highlights the differences between the two outputs. This technique can be adapted to query multiple execution histories and to compare the results of multiple queries over the same execution history. Comparative queries can be helpful for seeing the difference between data structure contents, call chains, and much more. They can also be applied to isolating errors related to software maintenance, by posing a query on two runs obtained from two versions and comparing the query results.
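A sketch of such a composed scenario check, run postmortem over the trace; the three method names are assumptions standing in for the steps of the login task, and JavaTA would express the same check as the Prolog rule of Fig. 9.

# Sketch: the login scenario succeeded if its three steps appear in the trace
# as method calls in the expected order (method names are placeholders).
def login_completed(trace):
    steps = ['getUserName', 'getPassword', 'verify']
    i = 0
    for e in trace:
        if e['kind'] == 'methodcall' and i < len(steps) and e['name'] == steps[i]:
            i += 1
    return i == len(steps)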
Save Query Answers. Computing a query over a large program history is costly and time-consuming. In many debugging scenarios the programmer may go back to examine the results of previous queries or may want to compare them. Re-computing a query over such an execution history is wasteful; therefore, queries and their answers should be saved. The WhyLine [12] provides a similar form of data provisioning to ease the debugging process; JavaTA adopts this technique because of the cost associated with evaluating queries over large program traces.
Conclusions and Future Work
We believe that our proposed logic programming approach is a simple and effective method for debugging object-oriented programs. The key to our approach is representing the execution history as a logic database and employing logic queries to answer questions about previous execution states. Our proposed query catalog is based upon an extensive study of errors in object-oriented programs [8].
Work is still in progress on JavaTA. Currently we are working on a programmable tool interface to JavaTA's features. We are applying our technique to larger programs in order to gain a better understanding of the methodology and its potential limitations, and we plan to make JavaTA available as a plug-in for Eclipse. We are also exploring the performance characteristics, in terms of both the space and the time needed for various types of queries, and we are interested in quantifying the overhead of extracting the program trace.
| 4,060 |
math0701481
|
1854128548
|
Chains are vector-valued signals sampling a curve. They are important to motion signal processing and to many scientific applications including location sensors. We propose a novel measure of smoothness for chains and curves by generalizing the scalar-valued concept of monotonicity. Monotonicity can be defined by the connectedness of the inverse image of balls. This definition is coordinate-invariant and can be computed efficiently over chains. Monotone curves can be discontinuous, but continuous monotone curves are differentiable a.e. Over chains, a simple sphere-preserving filter is shown to never decrease the degree of monotonicity. It outperforms moving average filters over a synthetic data set. Applications include Time Series Segmentation, chain reconstruction from unordered data points, Optical Character Recognition, and Pattern Matching.
|
A motion signal comprises two components: orientation and translation. The orientation vector indicates where the object is facing, whereas the translation component determines the object's location. Recent work has focused on smoothing the orientation vectors @cite_2 @cite_16 , whereas the results of the present paper apply equally well to orientation vectors (points on the surface of a unit sphere) and to arbitrary translation signals.
|
{
"abstract": [
"Smooth motion generation is an important issue in the computer animation and virtual reality areas. The motion of a rigid body consists of translation and orientation. The former is described by a space curve in 3-dimensional Euclidean space, while the latter is represented by a curve in the unit quaternion space. Although there exist well-known techniques for smoothing the translation data, smoothing the orientation data is yet to be explored due to the nonlinearity of the unit quaternion space. This paper presents a wavelet-based algorithm for smoothing noise-embedded motion data and the experiment shows the effectiveness of the proposed algorithm.",
"Multiresolution motion analysis has gained considerable research interest as a unified framework to facilitate a variety of motion editing tasks. Within this framework, motion data are represented as a collection of coefficients that form a coarse-to-fine hierarchy. The coefficients at the coarsest level describe the global pattern of a motion signal, while those at fine levels provide details at successively finer resolutions. Due to the inherent nonlinearity of the orientation space, the challenge is to generalize multiresolution representations for motion data that contain orientations as well as positions. Our goal is to develop a multiresolution analysis method that guarantees coordinate-invariance without singularity. To do so, we employ two novel ideas: hierarchical displacement mapping and motion filtering. Hierarchical displacement mapping provides an elegant formulation to describe positions and orientations in a coherent manner. Motion filtering enables us to separate motion details level-by-level to build a multiresolution representation in a coordinate-invariant way. Our representation facilitates multiresolution motion editing through level-wise coefficient manipulation that uniformly addresses issues raised by motion modification, blending, and stitching."
],
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"1489924732",
"2094963281"
]
}
|
Monotonicity Analysis over Chains and Curves
| 0 |
|
math0701481
|
1854128548
|
Chains are vector-valued signals sampling a curve. They are important to motion signal processing and to many scientific applications including location sensors. We propose a novel measure of smoothness for chains and curves by generalizing the scalar-valued concept of monotonicity. Monotonicity can be defined by the connectedness of the inverse image of balls. This definition is coordinate-invariant and can be computed efficiently over chains. Monotone curves can be discontinuous, but continuous monotone curves are differentiable a.e. Over chains, a simple sphere-preserving filter is shown to never decrease the degree of monotonicity. It outperforms moving average filters over a synthetic data set. Applications include Time Series Segmentation, chain reconstruction from unordered data points, Optical Character Recognition, and Pattern Matching.
|
In @cite_4 @cite_14 @cite_10 , the authors chose to define monotonicity for curves or chains with respect to an arbitrary direction vector: a curve is monotone if its projection on a line does not backtrack. While this is a sensible choice given the lack of a definition elsewhere, we argue that not all applications support an arbitrary direction that can be used to define monotonicity.
|
{
"abstract": [
"Abstract Given a set of points S = ( x 1 , y 1 ), ( x 2 , y 2 ), …, ( x N , y N ) in R 2 with x 1 x 2 x N , we want to construct a polygonal (i.e., continuous, piecewise linear) function f with a small number of corners (i.e., nondifferentiable points) which fits S well. To measure the quality of f in this regard, we employ two criteria: 1. (i) the number of corners in the graph of f , and 2. (ii) max 1≤i≤N ¦y i − f(x i )¦ (the Chebyshev error of the fit). We give efficient algorithms to construct a polygonal function f that minimizes (i) (resp. (ii)) under a maximum allowable value of (ii) (resp. (i)), whether or not the comers of f are constrained to be in the set S . A key tool used in designing these algorithms is a linear time algorithm to find the visibility polygon from an edge in a monotone polygon. A variation of one of these algorithms solves the following computational geometry problem in optimal O ( N ) time: Given N vertical segments in the plane, no two with the same abscissa, find a monotone polygonal curve with the least number of corners which intersects all the segments.",
"",
"We consider the problem of approximating a polygonal curve P under a given error criterion by another polygonal curve P? whose vertices are a subset of the vertices of P. The goal is to minimize the number of vertices of P? while ensuring that the error between P? and P is below a certain threshold. We consider two fundamentally different error measures -- Hausdorff and Frechet error measures. For both error criteria, we present near-linear time approximation algorithms that, given a parameter ? > 0, compute a simplified polygonal curve P? whose error is less than ? and size at most the size of an optimal simplified polygonal curve with error ? 2. We consider monotone curves in the case of Hausdorff error measure and arbitrary curves for the Frechet error measure. We present experimental results demonstrating that our algorithms are simple and fast, and produce close to optimal simplifications in practice."
],
"cite_N": [
"@cite_10",
"@cite_14",
"@cite_4"
],
"mid": [
"1983699268",
"",
"2121202224"
]
}
|
Monotonicity Analysis over Chains and Curves
| 0 |
|
math0701481
|
1854128548
|
Chains are vector-valued signals sampling a curve. They are important to motion signal processing and to many scientific applications including location sensors. We propose a novel measure of smoothness for chains and curves by generalizing the scalar-valued concept of monotonicity. Monotonicity can be defined by the connectedness of the inverse image of balls. This definition is coordinate-invariant and can be computed efficiently over chains. Monotone curves can be discontinuous, but continuous monotone curves are differentiable a.e. Over chains, a simple sphere-preserving filter is shown to never decrease the degree of monotonicity. It outperforms moving average filters over a synthetic data set. Applications include Time Series Segmentation, chain reconstruction from unordered data points, Optical Character Recognition, and Pattern Matching.
|
One approach to chain smoothing is to use B-splines and Bezier curves with the @math norm @cite_6 . Correspondingly, we could measure the "smoothness" of a chain by measuring how closely one can fit it to a smooth curve. Our approach differs in that we do not use polygonal approximations or curve fitting: we consider chains to be first-class citizens.
|
{
"abstract": [
"We present a new approach to the problem of matching 3-D curves. The approach has a low algorithmic complexity in the number of models, and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of (1990), where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements: - We make use of nonuniform B-spline approximations, which permits us to better retain information at high-curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy"
],
"cite_N": [
"@cite_6"
],
"mid": [
"2002667015"
]
}
|
Monotonicity Analysis over Chains and Curves
| 0 |
|
cs0701133
|
1619196380
|
It is well-known that wide-area networks face today several performance and reliability problems. In this work, we propose to solve these problems by connecting two or more local-area networks together via a Redundant Array of Internet Links (or RAIL) and by proactively replicating each packet over these links. In that sense, RAIL is for networks what RAID (Redundant Array of Inexpensive Disks) was for disks. In this paper, we describe the RAIL approach, present our prototype (called the RAILedge), and evaluate its performance. First, we demonstrate that using multiple Internet links significantly improves the end-to-end performance in terms of network-level as well as application-level metrics for Voice-over-IP and TCP. Second, we show that a delay padding mechanism is needed to complement RAIL when there is significant delay disparity between the paths. Third, we show that two paths provide most of the benefit, if carefully managed. Finally, we discuss a RAIL-network architecture, where RAILedges make use of path redundancy, route control and application-specific mechanisms, to improve WAN performance.
|
In the media streaming community, the idea of path diversity is traditionally combined with multiple-description coding: complementary streams are simultaneously sent over independent paths, to achieve resilience to loss in a bandwidth-efficient manner. @cite_6 proposed to transmit multiple-description video over independent paths; in follow-up work @cite_36 , the same authors used this idea to design a content-delivery network. @cite_28 applied the same idea to Voice-over-IP and also designed a playout scheduling algorithm to handle multi-path transmission. The same authors did a simulation study on the effect of replication and path diversity on TCP transfers @cite_18 .
|
{
"abstract": [
"We propose a system that improves the performance of streaming media CDN by exploiting the path diversity provided by existing CDN infrastructure. Path diversity is provided by the different network paths that exist between a client and its nearby edge servers; and multiple description (MD) coding is coupled with this path diversity to provide resilience to losses. In our system, MD coding is used to code a media stream into multiple complementary descriptions, which are distributed across the edge servers in the CDN. When a client requests a media stream, it is directed to multiple nearby servers which host complementary descriptions. These servers simultaneously stream these complementary descriptions to the client over different network paths. This paper provides distortion models for MDC video and conventional video. We use these models to select the optimal pair of servers with complementary descriptions for each client while accounting for path lengths and path jointness and disjointness. We also use these models to evaluate the performance of MD streaming over CDN in a number of real and generated network topologies. Our results show that distortion reduction by about 20 to 40 can be realized even when the underlying CDN is not designed with MDC streaming in mind. Also, for certain topologies, MDC requires about 50 fewer CDN servers than conventional streaming techniques to achieve the same distortion at the clients.",
"",
"In this paper, we present error-resilient Internet video transmission using path diversity and rate-distortion optimized reference picture selection. Under this scheme, the optimal packet dependency is determined adapting to network characteristics and video content, to achieve a better trade-off between coding efficiency and forming independent streams to increase error-resilience. The optimization is achieved within a rate-distortion framework, so that the expected end-to-end distortion is minimized under the given rate constraint. The expected distortion is calculated based on an accurate binary tree modeling with the effects of channel loss and error concealment taken into account. With the aid of active probing, packets are sent across multiple available paths according to a transmission policy which takes advantage of path diversity and seeks to minimize the loss rate. Experiments demonstrate that the proposed scheme provides significant diversity gain, as well as gains over video redundancy coding and the NACK mode of conventional reference picture selection.",
"Video communication over lossy packet networks such as the Internet is hampered by limited bandwidth and packet loss. This paper presents a system for providing reliable video communication over these networks, where the system is composed of two subsystems: (1) multiple state video encoder decoder and (2) a path diversity transmission system. Multiple state video coding combats the problem of error propagation at the decoder by coding the video into multiple independently decodable streams, each with its own prediction process and state. If one stream is lost the other streams can still be decoded to produce usable video, and furthermore, the correctly received streams provide bidirectional (previous and future) information that enables improved state recovery for the corrupted stream. This video coder is a form of multiple description coding (MDC), and its novelty lies in its use of information from the multiple streams to perform state recovery at the decoder. The path diversity transmission system explicitly sends different subsets of packets over different paths, as opposed to the default scenarios where the packets proceed along a single path, thereby enabling the end- to-end video application to effectively see an average path behavior. We refer to this as path diversity. Generally, seeing this average path behavior provides better performance than seeing the behavior of any individual random path. For example, the probability that all of the multiple paths are simultaneously congested is much less than the probability that a single path is congested. The resulting path diversity provides the multiple state video decoder with an appropriate virtual channel to assist in recovering from lost packets, and can also simplify system design, e.g. FEC design. We propose two architectures for achieving path diversity, and examine the effectiveness of path diversity in communicating video over a lossy packet network."
],
"cite_N": [
"@cite_36",
"@cite_18",
"@cite_28",
"@cite_6"
],
"mid": [
"2133168612",
"1539362726",
"2033837429",
"2039682623"
]
}
|
The Case for Redundant Arrays of Internet Links (RAIL)
|
The Internet is gradually becoming the unified network infrastructure for all our communication and business needs. Large enterprises, in particular, rely increasingly on Internet-based Virtual Private Networks (VPNs) that typically interconnect several, possibly remote, sites via a wide-area network (WAN). Depending on the company, the VPNs may have various uses, including carrying Voice-over-IP (VoIP) to drive down communication expenses, sharing geographically distributed company resources, providing real-time services, etc.
However, it is well known that wide-area networks today face several problems, including congestion, failures of various network elements, and protocol misconfigurations. These may result in periods of degraded quality-of-service, or even lack of connectivity, perceived by the end-user. To deal with these problems, several measures can be taken at the endpoints, at the edge, or inside the network.
One approach is to use redundant communication paths to improve end-to-end performance. (Performance problems span a range: at one extreme, packets may sporadically get dropped or delayed, which is typically referred to as a QoS problem; at the other extreme, a failure may lead to a long-lasting loss of connectivity, which is typically referred to as a reliability problem; in the middle, several packets may get mistreated in a short time period, which is also typically considered a QoS problem. To cover the entire range of cases, we often refer to quality-of-service and reliability together as "performance".) This idea is not new.
The Resilient Overlay Network (RON) architecture [1] proposed that participating nodes maintain multiple paths to each other, in order to preserve their connectivity in the face of Internet failures. The more practical alternative to resilient overlays, multi-homing [2,3], advocates that each edge network connect to the Internet over multiple Internet Service Providers (ISPs), in order to increase the probability of finding an available path to any destination. Both approaches essentially suggest to establish and intelligently use redundant communication paths. Several vendors have already developed products along these lines [4,5,6]. A significant body of research has also investigated the performance of such approaches and algorithms for monitoring, dynamic path switching and other aspects [1,2,3,7,8,9,10,12,11,13,14].
We too are looking at how to use control at the edge and utilize redundant communication paths to improve end-to-end performance. What we bring to the table is a mechanism for proactively leveraging several paths at the same time. We propose to replicate and transmit packets over several redundant independent paths, which are carefully selected. The goal is to increase the probability that at least one copy will be received correctly and on time. In other words, we propose to combine proactive replication over a set of redundant links with the traditional reactive dynamic switching among (sets of) links.
Our approach is inspired by the Redundant Array of Inexpensive Disks (RAID) [15]. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives that yields better performance than that of a Single Large Expensive Drive (SLED), and appears to the computer as a single logical storage unit or drive. Furthermore, disk arrays were made fault-tolerant by redundantly storing information in various ways. Our approach is analogous to "disk mirroring", or RAID-1, which duplicates all content on a backup disk; so our approach would be called RAIL-1 according to RAID terminology.
Similarly to RAID, we propose to replicate packets over multiple, relatively inexpensive, independent paths, i.e., to create a Redundant Array of Internet Links (RAIL), which appears to the application as a single "superior" link. To evaluate RAIL performance, we have built a prototype called RAILedge. We show that using RAIL yields better performance (both quality-of-service and reliability) than using any of the underlying paths alone. In addition, we evaluate the performance of applications, such as VoIP and TCP, over RAIL, and we seek to optimize relevant application-level metrics. In particular, we propose an additional mechanism, called delay padding, which complements RAIL when there is a significant disparity between the underlying paths.
There are several issues that need to be investigated. How much is the performance benefit from RAIL and how does it depend on the characteristics of the underlying paths? What is the tradeoff between performance benefit and the bandwidth cost of replicating every packet over multiple connections? How does RAIL interact with higher layers, such as TCP and VoIP applications? Does RAIL introduce reordering? How should one choose the links that constitute the RAIL, in a way that they complement each other and optimize application performance? In this paper, we address these questions.
With regard to the bandwidth cost, we argue that it is worthwhile and that RAIL is a simple, cost-efficient approach for achieving good quality-of-service over redundant paths. The first argument is from a cost point of view. As bandwidth gets cheaper and cheaper, combining multiple inexpensive links becomes competitive with buying a single, more expensive, private line. Furthermore, we show that two paths are sufficient to get most of the benefit. In addition, the cost of a connection is fixed rather than usage-based: once one pays the initial cost to get an additional connection to a second ISP (which companies using multi-homing have already done), there is no reason not to fully utilize it. The second argument is from a performance point of view, which may be a strict requirement for critical applications. RAIL-ing traffic over n paths provides more robustness to short-term "glitches" than dynamic path switching between the same n paths. This is because there are limits on how fast path-switching mechanisms can (i) confidently detect glitches and (ii) react to them without causing instability in the network. For example, if a few VoIP packets are sporadically dropped, a path-switching system should probably not react, while RAIL can still successfully deliver the copies of the lost packets arriving from the redundant paths.
Our findings can be summarized as follows.
• First, we demonstrate that proactively replicating packets over a Redundant Array of Internet Links (RAIL) significantly improves the end-to-end performance. We quantify the improvement in terms of network-level as well as application-level metrics. In this process, we use and derive analytical models for the performance of VoIP-over-RAIL and TCP-over-RAIL. We also use a working prototype of RAILedge.
• Second, we design and evaluate a delay padding mechanism to complement RAIL when there is a significant delay disparity among the underlying paths. This is useful both for VoIP (where it plays a proxy-playout role) and for TCP (where it may remove re-ordering).
• Third, we show that two paths provide most of the benefit, while additional paths bring decreasing benefits. The two preferred paths should be carefully selected based on their quality, similarity/disparity and correlation.
The structure of the rest of the paper is as follows. Section 2 discusses related work. Section 3 describes the RAILedge design, some implementation details, and the experimental setup. Section 4 evaluates the performance improvement brought by RAIL in terms of general network-level metrics (subsection 4.1), VoIP quality (subsection 4.2), and TCP throughput (subsection 4.3); we also study the sensitivity to the characteristics of the underlying paths. In this evaluation, we used analysis, Matlab simulation, actual packet traces collected over Internet backbones, and testbed experiments. Section 5 discusses the bigger picture, including possible extensions and open questions. Section 6 concludes the paper.
System Design
RAIL Mechanisms Overview
RAIL improves the packet delivery between two remote local area networks (LANs), by connecting them through multiple wide-area paths. The paths are chosen to be as independent as possible, e.g. belonging to different Internet Service Providers. Fig.1 shows an example of two disjoint paths: Link 1 goes through ISP-A and ISP-C, Link 2 goes through ISP-B and ISP-D. (The simplest configuration would be to have both LANs connected to the same two ISPs.) For simplicity, we describe the system using two paths only; the same ideas apply to n > 2 paths.
A RAILedge device is required to connect each LAN to the wide-area paths. Each packet that transitions from the LAN to the WAN, via the RAILedge, is replicated at the RAILedge and sent out on both WAN links. Copies of the same packet travel in parallel through the different WAN links and eventually arrive at the receiving RAILedge. There are three possibilities: both copies arrive, one copy arrives, or no copy arrives. The receiving RAILedge examines every packet coming in from the WAN and suppresses any duplicates; i.e., it forwards the first copy of each packet toward its destination but discards any copies arriving later.
The result is clear: the probability of both copies being lost is reduced compared to using a single path, and the delay experienced is the minimum of the delay on each path. Overall, the application perceives a virtual RAIL link that is better than the underlying physical links.
In summary, the RAILedge performs three basic operations: (i) packet duplication, (ii) forwarding over all redundant Internet links, and (iii) duplicate suppression. RAILedge-to-RAILedge communication happens over VPN tunnels, to ensure that every RAIL-ed packet is received by the intended RAILedge. We implement tunneling with a simple encapsulation/decapsulation scheme; our header includes the ID of the sending RAILedge and a sequence number, which is used to suppress duplicates at the receiving RAILedge. All RAILedge operations are transparent to the end-user. The components of a RAILedge device are shown in Fig. 2, and the steps taken upon reception of a packet are summarized in Fig. 3.
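A minimal sketch of the receiving RAILedge's duplicate suppression, assuming the encapsulation header carries (sender id, sequence number) as described above; the bounded window used to forget old sequence numbers is an implementation assumption, not a detail taken from the prototype.

# Sketch: forward only the first copy of each (sender, seq); remember recent
# sequence numbers in a bounded window so state does not grow without limit.
class DuplicateSuppressor:
    def __init__(self, window=4096):
        self.window = window
        self.highest = {}   # sender -> highest sequence number seen
        self.seen = {}      # sender -> recently seen sequence numbers

    def should_forward(self, sender, seq):
        seen = self.seen.setdefault(sender, set())
        high = self.highest.get(sender, -1)
        if seq <= high - self.window or seq in seen:
            return False    # duplicate (or too old to track): suppress
        seen.add(seq)
        self.highest[sender] = max(high, seq)
        # drop entries that fell out of the window
        self.seen[sender] = {s for s in seen if s > self.highest[sender] - self.window}
        return True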
There is a component of the RAILedge that we are not going to examine in this paper: link monitoring and selection. This module is responsible for monitoring the performance of every physical path, computing appropriate quality metrics, and choosing the best subset of paths to constitute the RAIL, over which packets should be replicated. Link monitoring and dynamic selection is a research problem in itself, with extensive and growing literature. In this paper, we do not study dynamic path switching. Instead, we focus on (i) evaluating the replication of packets over all paths that constitute the RAIL under study and (ii) giving recommendations on how to statically select these paths. This is still useful for a typical use of RAIL: initially, the user compares different ISPs and decides which is the best set to subscribe to; after subscription, the user replicates packets over all ISPs.
Delay Padding
Delay padding is a mechanism that complements the basic RAIL mechanism when there is delay disparity between the paths. The idea is the following. The default behavior of the receiving RAILedge is to forward the first copy and discard all copies that arrive later. However, this may not always be the best choice when there is significant delay disparity between the two paths. In such cases, one can construct pathological scenarios where the default RAIL policy results in patterns of delay jitter that adversely affect the application. One example is VoIP: the playout buffering algorithm at the receiver tries to estimate the delay jitter and adapt to it. This playout algorithm is unknown to us and out of our control; even worse, it is most likely designed to react to delays caused by real single paths, not by virtual RAIL paths. For example, when path 1 is much faster than path 2, most of the time RAIL will forward the copies arriving from path 1. The playout buffer may adapt and closely match it, by choosing a playout deadline slightly above the delay of path 1. When packets are lost on the fast path, the copies arriving from the slow path will arrive too late to be played out and will be useless. In this scenario, a better use of the two paths would be to "equalize" the delay of the two paths by artificially delaying the packets arriving from the fast path, hence the name "delay padding". Essentially, delay padding acts as a proxy for playout, located at the RAILedge, and presents the receiver with the illusion of a roughly constant one-way delay. The main difference from a playout algorithm at the end-host is that delay padding does not drop packets that arrive late for playout. Fig. 4 demonstrates the main idea of delay padding for packets in the same VoIP flow. The goal is to minimize jitter, i.e., to make all packets experience the same, roughly constant, one-way delay D, shown as a straight line. For every packet i, two copies arrive: the first one is marked with a circle, the second with a diamond. The actual time at which RAIL forwards the packet is marked with an "X". Without padding, RAIL would normally forward the first copy, which incurred one-way delay n_RAIL = min{delay_1, delay_2}. With padding, we compare n_RAIL to the target one-way delay D.
• In cases 1 and 2 (d_RAIL < D): we wait for an additional "padding" time D − d_RAIL before forwarding the packet.
• In case 3 (d_RAIL > D): we forward the packet immediately, without further delay. (A playout algorithm at the receiver would instead simply drop such late packets.)
The target one-way delay D is chosen so as to maximize the overall voice quality (MOS): D = argmax MOS(D_one-way). D should be chosen taking into account the statistics of the two paths and the delay budget, and adaptation of this value should happen only on much larger time scales. We discuss the choice of D to optimize MOS, as well as the performance improvement from delay padding, in the section on VoIP evaluation (4.2.1).
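A minimal Python sketch of the padding rule above; the function names and the use of sender timestamps are illustrative assumptions (in practice the RAILedge would work with relative rather than absolute one-way delays, as noted in Section 3.2).

def release_time(arrival_time: float, send_time: float, target_delay: float) -> float:
    # d_RAIL is the delay of the first copy to arrive (the minimum over the paths).
    d_rail = arrival_time - send_time
    padding = max(0.0, target_delay - d_rail)   # cases 1 and 2: pad; case 3: padding = 0
    return arrival_time + padding               # never drop a late packet, just forward it

def choose_target_delay(candidates, mos_of_delay):
    # Pick D = argmax MOS(D) over candidate one-way delays, using a (hypothetical)
    # MOS estimate built from recent statistics of the two paths.
    return max(candidates, key=mos_of_delay)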
Delay padding may prove a useful mechanism for TCP as well. For example, it could be used to remove reordering, caused by RAIL for certain combinations of paths. This is discussed further in the section on reordering (4.1.4) and in the section on the effect of reordering on TCP in particular (4.3.2).
A practical implementation of delay padding for VoIP would require (i) the ability to identify voice packets and keep per-flow state and (ii) timing calculations in terms of relative instead of absolute one-way delay. An implementation of reordering removal for TCP would not necessarily require per-flow state; it could simply use the sequence numbers on the aggregate flow between the two RAILedges.
RAIL Prototype and Experimental Setup
In order to evaluate RAIL performance, we developed a RAILedge prototype that implements the functionality described in Section 3.1. Our prototype runs on Linux and consists of a control-plane and a data-plane agent, both running in user space. All routing and forwarding functionality is provided by the Linux kernel. The control plane is responsible for configuring the kernel with static routes and network interfaces. The data plane is responsible for the packet processing, i.e. encapsulation/decapsulation, duplication, duplicate suppression and delay padding. In particular, the kernel forwards each received packet to the data-plane agent, which processes it appropriately and forwards it back to the kernel for regular IP forwarding, see Fig.2.
Our user-space prototype is sufficient for a network connected to the Internet through a T1 or T3 line (without considering duplicate packets, a RAILedge running on a 1.9 ...). We used Netem [21] on interfaces eth2 and eth3 to emulate the properties of wide-area networks in a controlled way. The current version of Netem emulates variable delay, loss, duplication and re-ordering, and is enabled in the Linux kernel. We also emulated WAN links of various bandwidths, using the rate-limiting functionality in Linux (iproute2/tc).
Performance evaluation
In section 4.1, we show that RAIL outperforms any of the underlying physical paths in terms of network-level metrics, i.e. it reduces loss, delay/jitter, it improves availability and it does not make reordering any worse than it already is in the underlying paths. In sections 4.2 and 4.3 we look at the improvement in terms of application-level metrics for VoIP (MOS) and TCP (throughput); we also look at how this improvement varies with the characteristics, combinations and number of underlying paths.
RAIL improves network-level metrics
RAIL statistically dominates any of the underlying paths, i.e. it presents the end-systems with a virtual path with better statistics in terms of network-level metrics (loss, delay, jitter and availability). This is intuitively expected: at the very least, RAIL could use just one of the paths and ignore the other, so having more options should only improve things. A natural consequence is that any application performance metric calculated using these statistics (e.g. loss rate, average delay, jitter percentiles) should also be improved by RAIL; we found this to be indeed the case when computing metrics for VoIP and TCP. In addition to the statistics, we also looked at pathological sample paths, e.g. cases where reordering or special patterns of jitter may arise; we show that RAIL does not make things worse than they already are and that delay padding is able to handle these cases.
Figure 6: The effect of shared loss. Two paths share a segment with loss rate p_shared and have independent segments, each with loss rate p_1 = p_2 = p; the plot shows the end-to-end p_RAIL and p_single vs. p, for various values of p_shared.
Loss
Clearly, RAIL decreases the average packet loss rate from p_1, p_2 to p = p_1 p_2, for independent paths. One can derive some useful rules of thumb based on this simple fact.
Number of paths. Given that actual loss rates are really small in practice (p_i << 0.1), every new independent path reduces the loss p = p_1 p_2 ... p_n by at least an order of magnitude. For similar paths (p_1 = ... = p_n = p), it is easy to see that the loss probability P_RAIL(k) = p^k is a decreasing and convex function of the number of paths k. Therefore, most of the benefit comes from adding the 2nd path, and additional paths bring only decreasing returns. However, adding a second path with a significantly different (smaller) loss rate dominates the product and makes a big difference.
Correlation. In practice, the physical paths underlying RAIL may overlap. E.g. consider two paths that share a segment with loss rate p_shared, and also have independent segments with loss rates p_1 = p_2 = p. A packet is lost on a single path w.p. p_single = 1 − (1 − p)(1 − p_shared). A packet is lost over RAIL w.p. p_RAIL = 1 − (1 − p^2)(1 − p_shared). Fig. 6 plots p_RAIL vs. p for various values of p_shared. Clearly, p_RAIL increases in both p and p_shared. The lossier the shared part p_shared, compared to the independent part p, the less improvement we get by using RAIL (the curves for p_RAIL and p_single get closer and closer). Therefore, one should not only look at the end-to-end behavior of candidate paths, but also at the quality of their shared part, and choose a combination of paths that yields the lowest overall p_RAIL.
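These rules of thumb are easy to tabulate. The short sketch below (with arbitrarily chosen example loss rates) computes p_single and p_RAIL for a shared segment plus n independent segments, following the expressions above.

def p_single(p_indep: float, p_shared: float) -> float:
    # Lost on a single path if the shared segment drops it or its own segment drops it.
    return 1.0 - (1.0 - p_indep) * (1.0 - p_shared)

def p_rail(p_indep: float, p_shared: float, n_paths: int = 2) -> float:
    # Lost over RAIL if the shared segment drops it, or all n independent segments drop it.
    return 1.0 - (1.0 - p_indep ** n_paths) * (1.0 - p_shared)

for p_sh in (0.0, 0.01, 0.05):
    for p in (0.01, 0.05, 0.10):
        print(f"p={p:.2f} p_shared={p_sh:.2f} "
              f"single={p_single(p, p_sh):.4f} rail={p_rail(p, p_sh):.4f}")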
RAIL also decreases the burstiness in loss. Due to lack of space, we omit the analysis and refer the reader to section 4.2.3, for testbed experiments that demonstrate this fact.
Availability
The simplest way to view a "failure" is as a long lasting period of loss, and we can talk about the percentage of time a path spends in failure. Then, the arguments we made for loss in the previous section apply here as well. E.g. for RAIL to fail, both paths must fail; the downtime reduces fast with the number and quality of paths. Note that RAIL not only reduces the time we spend in a "bad period", but also improves the user experience from "bad" to "medium" during that period. We demonstrate this in detail in the VoIP section (in particular see Table 2).
Delay and Jitter
When a packet i is RAIL-ed over two independent paths, the two copies experience one-way delays d_1(i) and d_2(i), and the packet forwarded by RAIL (the copy that arrived first) experiences d(i) = min{d_1(i), d_2(i)}. If the cumulative distribution function (CDF) for d_j, j = 1, 2, is F_j(t) = Pr[d_j ≤ t], then the delay CDF for RAIL is:
F(t) = \Pr[d \le t] = \Pr[\min\{d_1, d_2\} \le t] = 1 - \Pr[d_1 > t \text{ and } d_2 > t] = 1 - (1 - F_1(t))(1 - F_2(t)) \qquad (1)
It is easy to see that RAIL statistically dominates either of the two paths. Indeed, the fraction of packets experiencing delay more than t over RAIL is
1 - F(t) = (1 - F_1(t))(1 - F_2(t)),
which is smaller than the fraction of packets exceeding t on either of the two links, 1 − F_i(t). This means that the entire delay CDF is shifted up and to the left, thus F dominates F_1 and F_2. Any quality metric calculated based on these statistics (e.g. the average delay, percentiles, etc.) will be better for RAIL than for either of the two paths. Rather than plotting arbitrary distributions at this point, we choose to demonstrate the delay and jitter improvement in some practical scenarios considered in the VoIP section (4.2).
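Eq. (1) is straightforward to check numerically. The sketch below draws delays for two hypothetical paths (the gamma-jitter models are arbitrary assumptions, not taken from the traces) and compares the measured RAIL CDF with 1 − (1 − F_1(t))(1 − F_2(t)).

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Hypothetical one-way delays (ms): base propagation delay plus gamma-distributed jitter.
d1 = 40 + rng.gamma(shape=2.0, scale=5.0, size=n)
d2 = 60 + rng.gamma(shape=2.0, scale=15.0, size=n)
d_rail = np.minimum(d1, d2)                      # RAIL forwards whichever copy arrives first

for t in (60.0, 80.0, 100.0):
    F1, F2 = np.mean(d1 <= t), np.mean(d2 <= t)
    measured = np.mean(d_rail <= t)
    predicted = 1 - (1 - F1) * (1 - F2)          # Eq. (1), independent paths
    print(f"t={t:.0f}ms  measured={measured:.4f}  predicted={predicted:.4f}")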
Reordering
An interesting question is whether RAIL introduces reordering, which may be harmful for TCP performance. In this section we argue that it does not, as long as the underlying paths behave well.
Proposition 1. RAIL does not reorder packets if neither of the underlying paths drops or reorders them.
Proof sketch. Fig. 7(a) shows an example out-of-order sequence forwarded by the receiving RAILedge: (3, 5, 4); the same argument holds for any sequence (i, k, j) with i < j < k. Packets 3 and 5 must have arrived through different paths (otherwise one of the paths would have dropped packet 4 or reordered it). Say 3 arrives from the top path and 5 from the bottom path. Then the copy of 3 sent on the bottom path must have arrived between 3 and 5 (otherwise RAIL would have forwarded the bottom copy of 3 first). What happened to packet 4 sent on the bottom path? If it arrived between 3 and 5, there would be no out-of-order delivery at RAIL; if it arrived after 5, then the bottom path would have reordered 4 and 5, which we assumed is not the case; and we assumed that 4 is not dropped either. We have reached a contradiction, which means that RAIL cannot reorder packets if both paths are well behaved to start with.
Proposition 2. RAIL may translate loss on the faster path into late arrivals from the slower path. If the inter-packet spacing at the sender is smaller than the delay difference of the two paths, then the packets arrive out of order.
Example. In Fig. 7(b), we consider paths 1 and 2, with one-way delays d_1 < d_2. Two packets n and m are sent with spacing dt between them. If packet n is lost on the fast path and dt ≤ d_2 − d_1, then n will arrive at the RAILedge after m, and the RAILedge will forward them out of order. The larger the delay difference d_2 − d_1 and the smaller the spacing dt between packets, the larger the reordering gap.
Fact 3. Better late than never.
Discussion. For VoIP, it does not hurt to receive packets late, as opposed to not receiving them at all. However, out-of-order packets may potentially hurt TCP performance. Testbed experiments, in section 4.3.2, show that TCP performs better when x% of packets arrive out of order than when x% of packets are lost. Furthermore, the delay padding component is designed to handle the timely delivery of packets. We will revisit this fact in section 4.3.2.
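Proposition 2 is easy to reproduce with a toy simulation. The sketch below uses deterministic path delays (an assumption made purely for illustration) and shows how losing one fast-path copy yields an out-of-order forwarding sequence when dt ≤ d_2 − d_1.

def rail_forwarding_order(n_packets, dt, d1, d2, lost_on_fast):
    # Collect both copies' arrival times (the fast-path copy is missing for lost packets).
    events = []
    for i in range(n_packets):
        send = i * dt
        if i not in lost_on_fast:
            events.append((send + d1, i))   # copy over the fast path
        events.append((send + d2, i))       # copy over the slow path
    events.sort()
    forwarded, seen = [], set()
    for _, pkt in events:
        if pkt not in seen:                 # duplicate suppression: keep the first copy only
            seen.add(pkt)
            forwarded.append(pkt)
    return forwarded

# 10 ms spacing, d2 - d1 = 25 ms > dt: losing packet 3 on the fast path makes its slow-path
# copy arrive after packets 4 and 5, so RAIL forwards ... 2, 4, 5, 3, 6 ...
print(rail_forwarding_order(8, dt=10, d1=20, d2=45, lost_on_fast={3}))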
RAIL improves VoIP performance
Voice-over-IP Quality
A subjective measure used to assess Voice-over-IP quality is the Mean Opinion Score (MOS), which is a rating on a scale from 1 (worst) to 5 (best) [22]. Another, equivalent metric is the I rating, defined in the E-model [23]. [23] also provides a translation between I and MOS; in this paper, we convert and present voice quality on the MOS scale only, even when we do some calculations on the I scale.
VoIP quality has two aspects. The first is speech quality, which depends primarily on how many and which packets are dropped in the network and/or at the playout buffer. [23, 24] express the speech quality as a function of the packet loss rate, MOS_speech(loss rate), for various codecs. The second aspect of VoIP quality is interactivity, i.e. the ability to comfortably carry on an interactive conversation; [?] expresses this aspect as a function of the average one-way delay, MOS_interactivity(avg delay), for various conversation types. These two aspects can be added together (in the appropriate I scale defined in [23]) to give an overall MOS rating: MOS = MOS_speech + MOS_interactivity. This is the metric we will use throughout this section.
We do not present the details of these formulas in this submission, due to lack of space. The interested reader is referred to the ITU-T standards [23, 24, 25] or to comprehensive tutorials on the subject [26, 27]. What the reader needs to keep in mind is that there are either formulas or tables for MOS_speech(loss rate) and MOS_interactivity(avg delay), and that MOS = MOS_speech + MOS_interactivity. This is a commonly used methodology for assessing VoIP quality, e.g. see [26, 7]. Fig. 8 shows contours of MOS as a function of loss and delay, based on the data provided in the ITU-T standards, considering the G.711 codec and free conversation.
The effect of playout. In the assessment of VoIP, one should take into account the playout algorithm at the receiver, which determines the playout deadline D_playout: packets with one-way delay exceeding D_playout are dropped. As D_playout increases, the one-way delay increases (thus making interactivity worse), but fewer packets are dropped due to late arrival for playout (thus making speech quality better). Therefore, there is a tradeoff in choosing D_playout, and one should choose D_opt = argmax MOS(D_playout). This tradeoff is depicted in Fig. 8 and is also responsible for the shape of the MOS(D_one-way) curves of Fig. 10, which clearly have a maximum at D_opt. The value D_opt depends on the loss, delay and jitter of the underlying paths, as well as on the delay budget consumed in components other than the playout. Recall that D_playout is only a part of the total D_one-way = D_end-systems + D_network + D_playout, and that packets arriving late contribute to the total loss (packet loss = network loss + Pr[d > D_playout]).
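The structure of this computation (not the actual ITU-T curves, which are only referenced in the text) can be sketched as follows. Here mos_speech and mos_interactivity are placeholder callables standing in for the E-model tables, and in the real methodology the two terms are added on the I scale before converting back to MOS.

import numpy as np

def mos_estimate(target_one_way_ms, network_delays_ms, network_loss,
                 mos_speech, mos_interactivity, end_system_ms=70.0):
    # Budget left for the network plus the playout buffer, after the end-system delay.
    deadline = target_one_way_ms - end_system_ms
    late = np.mean(np.asarray(network_delays_ms) > deadline)   # dropped at playout
    total_loss = network_loss + (1.0 - network_loss) * late
    return mos_speech(total_loss) + mos_interactivity(target_one_way_ms)

def best_one_way_delay(candidates_ms, **kw):
    # D_opt = argmax MOS(D_one-way) over candidate playout settings.
    return max(candidates_ms, key=lambda d: mos_estimate(d, **kw))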
The effect of RAIL. In the previous section, we saw that RAIL decreases (i) the loss rate, (ii) the average delay and (iii) the percentage of late packets. Therefore, it also improves the MOS, which is a function of these three statistics.
Railing VoIP over representative Internet Paths
In this section, we use realistic packet traces to simulate the behavior of WAN links. In particular, we use the packet traces provided in [28], which were collected over the backbone networks of major ISPs by sending probes that emulate G.711 traffic. Fig. 9(a) and (b) show the delay experienced on two paths between San Jose, CA and Ashburn, VA. The two paths belong to two different ISPs and experience different delay patterns. Fig. 9(c) shows the one-way delay experienced by packets RAIL-ed over these two paths. Packets were sent every 10ms.
Although there is no network loss in these example paths, packets may still be dropped if they arrive after their playout deadline. Because the action of playout is out of the control of the RAILedge, we consider the entire range of fixed one-way playout deadlines (out of which 70ms are considered consumed at the end-systems). The resulting MOS is shown in Fig. 10 as a function of D_one-way. (The curve MOS(D_one-way) has a maximum, which corresponds to the playout setting that optimizes the loss-delay tradeoff in the overall MOS.) Notice that the MOS curve for RAIL is higher than both curves corresponding to the individual links, for the entire range of delays considered.
In general, RAIL always improves VoIP quality because it presents the application with a better virtual path in terms of loss, delay and jitter. However, the relative improvement of RAIL vs. a single path depends (i) on the behavior of the two paths and (ii) on the playout algorithm.
This was just an illustrative example of RAIL over two specific paths. We now consider additional representative traces and their combinations using RAIL. We consider six packet traces from [28], shown in Fig. 11. We call the traces "good", "medium" and "bad", to roughly describe the VoIP performance they yield. We then considered pairs of paths for all combinations of good/medium/bad quality, by choosing one trace from the left and the second trace from the right of Fig. 11. Table 2 shows the MOS for each of the 6 paths, as well as for these 9 combinations using RAIL. One can see that the combined (RAIL) link provides one "class" better quality than any of the individual links, i.e. there is one class of service improvement. This is intuitively expected, because RAIL multiplexes and uses the best of both paths. In addition, we did in-house informal listening tests: we simulated the transmission of actual speech samples over these traces and had people listen to the reconstructed sound. It was clear that the RAIL-ed sample sounded much better.
Figure 11: Six representative packet traces, collected over wide-area paths of Internet backbones [28]. We plot one-way delay vs. packet sequence number; when a packet is lost we give it a 0 value.
Table 2: Voice quality (in terms of MOS score) for the 6 representative paths, and for their 9 combinations using RAIL.
Notice that this quality improvement is in addition to the availability improvement in Table 1: RAIL not only reduces the time spent in "bad/medium" periods, but it also improves the experience of the user during those periods, from "bad" to "medium" and from "medium" to "good".
Testbed experiments for VoIP-over-RAIL
In this section, we use our testbed to demonstrate the improvement that RAIL brings to VoIP quality over the entire range of path conditions. We used Netem to control the loss and delay parameters of each path, and sent probes to emulate the transmission of voice traffic. First, we looked at the loss rate. We applied uniform loss with the same loss rate p on both paths, from 1 to 20%, which is quite high but may happen during short periods of bursty loss. As expected, the voice stream experiences loss rate p^2 when transmitted over RAIL, and p over a single link. Indeed, in Fig. 12(a), the measured 45-degree red line (for a single link) agrees with p, and the measured blue line (for RAIL) agrees with the theoretical p^2 dashed purple line. This loss reduction results in a speech quality improvement of up to 1.5 units of MOS. Fig. 12(b) shows that the MOS (averaged over the entire duration) is practically constant when we use RAIL, while the MOS over a single link decreases rapidly with increasing loss rate. A side benefit is that speech quality varies less with time, which is less annoying for the user.
Second, we looked at the burstiness of loss, which is an important aspect because it can lead to the loss of entire phonemes, thus degrading speech intelligibility. To control burstiness, we controlled the "correlation" parameter in Netem. (The Netem correlation coefficient does increase loss burstiness, but it does not translate directly into burstiness parameters such as burst duration. An artifact of the implementation [21] is that increasing the correlation decreases the measured loss rate, for loss rates below 50%. This does not matter for our purposes: the point is to compare RAIL to a single path under the same loss conditions.) We tried all combinations of (loss rate, loss correlation) and measured the following metrics for bursty loss: (i) the number of packets lost in bursts, (ii) the number of bursts, (iii) the average burst size, and (iv) the maximum burst size. In Tables 3, 4 and 5, we show the numbers measured over one link in regular font, and the numbers measured over RAIL in bold. Clearly, all metrics are significantly reduced with RAIL compared to the single-path case, which demonstrates that RAIL reduces loss burstiness. This good property is intuitively expected, as it is less likely that both paths will experience a burst at the same time.
Table 5: Maximum burst size (i.e. the maximum number of consecutive packets lost) on a single path (regular font) vs. RAIL (bold font). The average burst size for RAIL is 1 in most cases.
Third, we experimented with delay jitter. We considered two paths with the same mean delay (100ms), and we used Netem to generate delay according to a paretonormal distribution. We generated delay on both paths according to the same statistics: we fixed the mean delay at 100ms for both paths, and experimented with the entire range of delay variability (standard deviation from 10ms to 100ms and delay correlation from 0% to 100%).
In the beginning, we set the delay correlation to 0 and increased the standard deviation of the delay. We observed that RAIL reduces the jitter experienced by the VoIP stream. This results in fewer packets being late for playout and thus better speech quality. The exact improvement depends (i) on the delay variability of the underlying paths (captured here by the standard deviation of delay) and (ii) on the playout at the receiver (captured here by the jitter allowed at the playout). Fig. 13 shows the improvement in speech quality (in MOS) compared to a single path, for a range of these two parameters (standard deviation 20-80ms and jitter level acceptable at playout 20-100ms). One can make several observations. First, RAIL always helps (i.e. the benefit is > 0); this is because RAIL presents the end-system with a better virtual path. Second, there is a maximum in every curve (each curve corresponds to a certain path delay variability): when the playout is intolerant to jitter, it drops most packets anyway; when the playout can absorb most of the jitter itself, the help of RAIL is not needed; therefore, RAIL provides most of its benefit in the middle, when it is needed to reduce the perceived jitter below the threshold acceptable at playout. Finally, the entire curve moves to the right and lower for paths with higher delay variability.
In addition, we experimented with delay correlation (which results in several consecutive packets arriving late and getting dropped at the playout) and we observed that RAIL decreased this correlation by multiplexing the two streams. Finally, we experimented with RAIL-ed VoIP and several non-RAILed TCP flows interfering with it. The idea was to have loss and delay caused by cross-traffic rather than artificially injected by Netem. RAIL brought improvements of the same order of magnitude as observed before.
Figure 15: The larger the delay disparity between the two paths, the more padding is needed.
Delay Padding
The delay padding algorithm, described in section 3.2, acts as a proxy playout at the receiving RAILedge: it artificially adds delay ("padding") in order to create the illusion of a constant one-way delay. In this section, we use Matlab simulation to demonstrate the effect of padding. Fig. 14 considers the case when the two paths differ in their average delay; this can be due, e.g., to differences in propagation and/or transmission delay. Notice the difference between (b), RAIL without padding, and (c), RAIL with padding. Fig. 15 shows that the larger the disparity between the two paths, the more padding is needed to smooth out the stream. Fig. 16 considers the case when two paths have the same average delay but differ significantly in delay jitter, e.g. due to different utilization. Fig. 16(a) plots the delay on the two paths on the same graph; Fig. 16(b) shows what RAIL does without padding; Fig. 16(c) and (d) show that the stream can be smoothed out by adding more padding. The appropriate amount of padding should be chosen so as to maximize the overall MOS, as discussed in section 4.2.1.
Figure 16: Padding decreases jitter for RAIL over paths with the same average delay (100ms) but different jitter (stddev = 20ms, 5ms). The more padding, the less jitter.
RAIL improves TCP performance
In Section 4.1, we saw that RAIL statistically dominates the underlying paths in terms of network-level statistics. Therefore, performance metrics computed based on these statistics, such as the average throughput, should also be improved. In section 4.3.1, we analyze the throughput of long-lived TCP flows and show that this is indeed the case. However, there may be pathological cases, e.g. when reordering falsely triggers fast retransmit; this is what we study in section 4.3.2, where we show that, for most practical cases, RAIL helps TCP as well.
Analysis of long-lived TCP-over-RAIL
Figure 17: The simple steady-state model for TCP [29].
A simple formula. Let us consider two paths with loss rates and round-trip times (p_1, RTT_1) and (p_2, RTT_2), respectively, and w.l.o.g. RTT_1 ≤ RTT_2. The simple rule of thumb from [29] predicts that the long-term TCP throughput for each path is:
T_i = \frac{1.22}{RTT_i \sqrt{p_i}}, \quad i = 1, 2.
What is the long-term TCP throughput using RAIL over these two paths? Following a reasoning similar to [29], we find that:
T = \frac{1.22}{E[RTT]\,\sqrt{p_1 p_2}}, \qquad (2)
E[RTT] = RTT_1\,\frac{1 - p_1}{1 - p_1 p_2} + RTT_2\,\frac{p_1 (1 - p_2)}{1 - p_1 p_2}. \qquad (3)
Proof. Fig. 17 shows the simple steady-state model considered in [29]. The network drops a packet when the congestion window reaches W packets. The congestion window is then cut in half (W/2) and increases by one packet per round-trip time until it reaches W packets again; at that point, the network drops a packet again and the steady-state model repeats. Let us look at a single congestion epoch.
For this simple model, the number of packets sent during a congestion epoch is \frac{W}{2} + (\frac{W}{2} + 1) + \cdots + (\frac{W}{2} + \frac{W}{2}) = \frac{3W^2}{8} + \frac{3W}{4}.
For a packet to be lost, both copies sent over the two paths must be lost. Therefore, the loss rate is p = p_1 p_2 = \frac{1}{\text{number of packets}} = \frac{1}{\frac{3W^2}{8} + \frac{3W}{4}} \simeq \frac{8}{3W^2}, and W \simeq \sqrt{8/(3 p_1 p_2)}. The only difference from [29] is that the round-trip time as perceived by TCP-over-RAIL is no longer constant, but depends on whether a packet is lost on any of the paths. Provided that the packet is received on at least one path, which happens with probability (1 − p_1 p_2), we are still in the same congestion epoch and
RTT = \begin{cases} RTT_1 & \text{w.p. } 1 - p_1 \\ RTT_2 & \text{w.p. } p_1 (1 - p_2) \end{cases} \qquad (4)
Therefore, the conditional expectation for RTT is given by Eq. (3); and the TCP throughput over RAIL is on average:
T = \frac{\text{number of packets}}{(\frac{W}{2} + 1)\,E[RTT]} \simeq \frac{1.22}{E[RTT]\,\sqrt{p_1 p_2}} \qquad (5)
Essentially, RAIL appears to the TCP flow as a virtual path with loss rate p = p_1 p_2 and round-trip time E[RTT]. Notice that there are two factors to take into account in Eq. (2): a multiplication in loss (p_1 p_2) and an averaging in delay (E[RTT]). The loss for RAIL is smaller than on either of the two links: p < p_1 and p < p_2. The same is not true for the delay, which is a weighted average: RTT_1 < E[RTT] < RTT_2.
Implications. Let us now use this simple formula to study the sensitivity of TCP-over-RAIL throughput to the characteristics of the underlying paths. First, we argue that TCP over RAIL achieves higher throughput than TCP over either of the two paths alone. Consider first RTT_1 = RTT_2 = RTT. Then the RAIL link is equivalent to a single link with p = p_1 p_2, which is better than either path by an order of magnitude. What happens when RTT_1 < RTT_2? It is easy to see that RAIL is better than the slower path (2), because RAIL has both a smaller loss rate and a shorter RTT than path 2:
\frac{T}{T_2} = \frac{1}{\sqrt{p_1}} \cdot \frac{RTT_2}{E[RTT]} > 1 \cdot 1 = 1 \qquad (6)
Is RAIL better than the faster path (1) as well? RAIL is better in terms of loss but worse in terms of delay (E[RTT] > RTT_1). It turns out that the multiplicative decrease in loss dominates the averaging in delay. In Fig. 18, we consider p_1 = p_2 = p, fix RTT_1 = 10ms, and consider the full range of p and RTT_2. We plot the ratio between the throughput for TCP-over-RAIL and for TCP over the fast link:
\frac{T}{T_1} = \frac{1}{\sqrt{p}} \cdot \frac{RTT_1}{E[RTT]}, \quad \text{where } \frac{1}{\sqrt{p}} > 1 \text{ and } \frac{RTT_1}{E[RTT]} = \frac{1 + p}{1 + p\,\frac{RTT_2}{RTT_1}} \le 1 \qquad (7)
We see that TCP does 4-10 times better over RAIL than over the fast link (1) for all practical cases: loss rates up to 10% and delay differences up to 100ms. Indeed, the difference in RTT cannot exceed some tens of milliseconds (e.g. due to propagation or transmission), and p should be really small, except for short time periods.
How many paths? For n paths with characteristics (p_i, RTT_i), i = 1..n, where RTT_1 < RTT_2 < ... < RTT_n, and following similar derivations, we find that:
T(n) = \frac{1.22}{E[RTT]\,\sqrt{p_1 p_2 \cdots p_n}}, \quad \text{where} \quad E[RTT] = \frac{\big[RTT_1 + RTT_2\,p + \cdots + RTT_n\,p^{\,n-1}\big](1 - p)}{1 - p_1 p_2 \cdots p_n} \qquad (8)
The multiplicative factor \sqrt{p_1 \cdots p_n} dominates the averaging E[RTT]; also, large RTTs have discounted contributions. For p_1 = p_2 = ... = p_n, T(n) is a convex increasing function of n, which implies that adding more paths of similar loss rate improves throughput, but with decreasing increments.
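The closed forms in Eqs. (2)-(3) and (8) are easy to evaluate. The sketch below uses the general "fastest surviving copy" weighting for E[RTT], which reduces to Eq. (3) for two paths and to Eq. (8) when all loss rates are equal; the example numbers are arbitrary and only meant to illustrate the 4-10x range discussed above.

from math import prod, sqrt

def tcp_throughput_rail(losses, rtts):
    # Long-term TCP throughput over RAIL, in packets per second (Eqs. 2/8).
    order = sorted(range(len(rtts)), key=lambda i: rtts[i])
    p = [losses[i] for i in order]
    r = [rtts[i] for i in order]
    p_all = prod(p)
    # A delivered packet's RTT is that of the fastest path whose copy survived.
    e_rtt = sum(r[i] * (1 - p[i]) * prod(p[:i]) for i in range(len(p))) / (1 - p_all)
    return 1.22 / (e_rtt * sqrt(p_all))

def tcp_throughput_single(p, rtt):
    return 1.22 / (rtt * sqrt(p))

# Two paths with p1 = p2 = 1%, RTT1 = 10 ms, RTT2 = 60 ms:
print(tcp_throughput_rail([0.01, 0.01], [0.010, 0.060]))   # roughly 11,600 pkts/s over RAIL
print(tcp_throughput_single(0.01, 0.010))                  # roughly 1,220 pkts/s over the fast path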
Testbed Experiments on Reordering and TCP
In section 4.1.4, we saw that RAIL does not introduce reordering if both paths are well behaved, but may convert loss on the fast path into late arrivals, and at the extreme even into out-of-order packets under some conditions (dt ≤ d_2 − d_1). It is well known that reordering may have an adverse effect on TCP, as it falsely triggers fast retransmit. In this section, we use testbed experiments to show that, even in cases where RAIL converts loss to reordering, this is actually beneficial for TCP. Recall that RAIL does not cause reordering; it only translates loss into reordering. Therefore, the fair question to ask is not how "TCP does with reordering vs. without reordering", but how "TCP does with x% of packets arriving out-of-order vs. x% of packets being lost".
Fact 3, revisited. Better late than never (and the earlier the better). We used the simplified testbed shown in Fig. 19 to inject a controlled amount of loss and reordering, using Netem, on a single TCP flow. Fig. 20 shows the results of the comparison. First, we introduced x% of loss, ranging from 0 to 20%; the TCP throughput is shown as a dashed line. Then we introduced x% of reordering for a range of reordering gaps/delays, i.e. the packets arrive 10-90ms later than they should; the resulting TCP throughput is shown as a separate bold line for each delay value. We see that TCP performs much better with reordering than with loss; therefore, it is indeed better to receive packets "late than never". Not surprisingly, the less the delay in delivery, the better the performance.
Furthermore, TCP today has several default options to deal with reordering, including SACK, DSACK and timestamps. We found that turning SACK on further improved the performance of TCP under reordering in Fig. 20. In summary, we expect RAIL to help TCP in all practical cases, i.e. for small loss rates and delay differences between the paths on the order of 10-50ms. As an extreme measure, one can use the delay padding mechanism not only for voice, but also as a TCP ordering buffer to completely eliminate reordering.
Future Directions
We envision a RAIL-network architecture, where RAILedges are control points that use path redundancy, route control and application-specific mechanisms, to improve WAN performance.
A first extension has to do with topology. So far, we considered two RAILedge devices connecting two remote sites via multiple redundant links. We envision that this can be generalized to a virtual multipoint network, or RAILnet, where multiple edge networks are reliably interconnected to each other, as shown in Fig. 21. Each participating edge network is located behind its own RAILedge, and each RAILedge pair communicates over at least two Internet links. The RAILnet interface represents the local point of attachment to a RAILnet and should present itself as a regular interface to a multi-access subnet.
Figure 21: RAILnet: a virtual multipoint reliable network.
Second, we are interested in combining the proactive replication of RAIL with some form of route control, in particular (i) selection of the right subset of physical paths within the same RAIL and (ii) dynamic switching among them. In this paper, we focused on the first part (i.e. we looked at combinations of paths with various characteristics, at different numbers of paths, and at paths that are similar to or different from each other) and tried to give recommendations on how to statically select among them. The second aspect is dynamic switching among sets of paths. We expect this to be less constrained than single-path switching, because (i) redundant transmission is robust to short-lived problems and (ii) physical paths tend to have consistent behavior over long time scales. Therefore, RAIL should relieve much of the urgency in dynamic path switching decisions.
One could further enhance the functionality of the RAILedge. So far, we focused on replication of packets over multiple paths. Several other functions can naturally be added to an edge network device, including monitoring and path switching, compression, quality-of-service mechanisms, and protocol-specific acceleration. For example, one could decide to RAIL part of the traffic (e.g. VoIP or critical applications) and use striping for the remaining traffic; this could correspond to RAIL-0 in the RAID taxonomy [15].
There are some additional interesting questions, we are currently pursuing as a direct extension of this work. First, we continue to study TCP over RAIL, using more accurate TCP models, and considering also short-lived connections; we are also working on a modification of our delay-padding algorithm, to remove reordering at the receiving RAILedge. Second, we are investigating the effect of RAIL on the rest of the traffic. E.g. when there is significant disparity in bandwidth, we expect RAIL-ed TCP to cause congestion on the limited-bandwidth path. Furthermore, what is the interaction between competing RAILs? Finally, it would be interesting to explore the benefit of adding additional RAILedges in the middle of the network.
The RAILnet architecture can be incrementally deployed by gradually adding more RAILedges. If widely deployed, it has the potential to fundamentally change the dynamics and economics of wide-area networks.
Conclusion
We proposed and evaluated the Redundant Array of Internet Links (RAIL), a mechanism for improving packet delivery by proactively replicating packets over multiple Internet links. We showed that RAIL significantly improves performance in terms of network- as well as application-level metrics. We studied different combinations of underlying paths and found that most of the benefit comes from two paths, if carefully managed; we also designed a delay padding algorithm to hide significant disparities among paths. RAIL can be gracefully combined with, and greatly enhance, other techniques currently used in overlay networks, such as dynamic path switching. Ultimately, it has the potential to greatly affect the dynamics and economics of wide-area networks.
| 8,166 |
cs0701133
|
1619196380
|
It is well-known that wide-area networks face today several performance and reliability problems. In this work, we propose to solve these problems by connecting two or more local-area networks together via a Redundant Array of Internet Links (or RAIL) and by proactively replicating each packet over these links. In that sense, RAIL is for networks what RAID (Redundant Array of Inexpensive Disks) was for disks. In this paper, we describe the RAIL approach, present our prototype (called the RAILedge), and evaluate its performance. First, we demonstrate that using multiple Internet links significantly improves the end-to-end performance in terms of network-level as well as application-level metrics for Voice-over-IP and TCP. Second, we show that a delay padding mechanism is needed to complement RAIL when there is significant delay disparity between the paths. Third, we show that two paths provide most of the benefit, if carefully managed. Finally, we discuss a RAIL-network architecture, where RAILedges make use of path redundancy, route control and application-specific mechanisms, to improve WAN performance.
|
Our work fits in this scope as follows. It is related to multi-homing and overlay approaches in that it tries to improve end-to-end performance by connecting edge-networks via several different ISPs and by exploiting their path diversity. We compare to related work as follows. The novel aspect we are focusing on is proactive replication of every packet over the available paths in a single RAIL. This aspect is orthogonal to the online decision of switching traffic between RAILs (i.e. sets of paths). However, in this paper we still explore how to choose and manage the physical paths that constitute a single RAIL. Similarly to @cite_0 @cite_14 , we are looking at application-level metrics, particularly for VoIP and TCP. In contrast to the media-streaming work, we transmit redundant as opposed to complementary descriptions, operating on the assumption that bandwidth is not the issue. Our delay padding algorithm resembles playout buffering @cite_28 in that it tries to smooth out the network delay jitter; however, it is implemented at an edge device instead of the end-point, and acts only as a playout-proxy without dropping packets.
|
{
"abstract": [
"This paper explores the feasibility of improving the performance of end-to-end data transfers between different sites through path switching. Our study is focused on both the logic that controls path switching decisions and the configurations required to achieve sufficient path diversity. Specifically, we investigate two common approaches offering path diversity - multi-homing and overlay networks - and investigate their characteristics in the context of a representative wide-area testbed. We explore the end-to-end delay and loss characteristics of different paths and find that substantial improvements can potentially be achived by path switching, especially in lowering end-to-end losses. Based on this assessment, we develop a simple path-switching mechanism capable of realizing those performance improvements. Our experimental study demonstrates that substantial performance improvements are indeed achievable using this approach.",
"In this paper, we present error-resilient Internet video transmission using path diversity and rate-distortion optimized reference picture selection. Under this scheme, the optimal packet dependency is determined adapting to network characteristics and video content, to achieve a better trade-off between coding efficiency and forming independent streams to increase error-resilience. The optimization is achieved within a rate-distortion framework, so that the expected end-to-end distortion is minimized under the given rate constraint. The expected distortion is calculated based on an accurate binary tree modeling with the effects of channel loss and error concealment taken into account. With the aid of active probing, packets are sent across multiple available paths according to a transmission policy which takes advantage of path diversity and seeks to minimize the loss rate. Experiments demonstrate that the proposed scheme provides significant diversity gain, as well as gains over video redundancy coding and the NACK mode of conventional reference picture selection.",
""
],
"cite_N": [
"@cite_0",
"@cite_28",
"@cite_14"
],
"mid": [
"1976099036",
"2033837429",
""
]
}
|
The Case for Redundant Arrays of Internet Links (RAIL)
|
The Internet gradually becomes the unified network infrastructure for all our communication and business needs. Large enterprises, in particular, rely increasingly on Internetbased Virtual Private Networks (VPNs) that typically interconnect several, possibly remote, sites via a wide-area network (WAN). Depending on the company, the VPNs may have various uses, including carrying Voice-over-IP (VoIP) to drive down the communication expenses, sharing geographically distributed company resources, providing a realtime service, etc.
However, it is well known that wide-area networks face today several problems, including congestion, failure of various network elements or protocol mis-configurations. These may result in periods of degraded quality-of-service, or even lack of connectivity, perceived by the end-user. To deal with these problems, several measures can be taken at the endpoints, at the edge, or inside the network.
One approach is to use redundant communication paths to improve end-to-end performance. (At one extreme, packets may sporadically get dropped or delayed; this is typically referred to as a QoS problem. At the other extreme, a failure may lead to a long-lasting loss of connectivity; this is typically referred to as a reliability problem. In the middle, several packets may get mistreated in a short time period, which is also typically considered a QoS problem. To cover the entire range of cases, we often refer to quality-of-service and reliability together as "performance".) This idea is not new.
The Resilient Overlay Network (RON) architecture [1] proposed that participating nodes maintain multiple paths to each other, in order to preserve their connectivity in the face of Internet failures. The more practical alternative to resilient overlays, multi-homing [2,3], advocates that each edge network connect to the Internet over multiple Internet Service Providers (ISPs), in order to increase the probability of finding an available path to any destination. Both approaches essentially suggest to establish and intelligently use redundant communication paths. Several vendors have already developed products along these lines [4,5,6]. A significant body of research has also investigated the performance of such approaches and algorithms for monitoring, dynamic path switching and other aspects [1,2,3,7,8,9,10,12,11,13,14].
We too are looking at how to use control at the edge and utilize redundant communication paths to improve end-to-end performance. What we bring to the table is a mechanism for proactively leveraging several paths at the same time. We propose to replicate and transmit packets over several redundant independent paths, which are carefully selected. The goal is to increase the probability that at least one copy will be received correctly and on time. In other words, we propose to combine proactive replication over a set of redundant links with the traditional reactive dynamic switching among (sets of) links.
Our approach is inspired by the Redundant Array of Inexpensive Disks (RAID) [15]. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives which yields better performance than that of a Single Large Expensive Drive (SLED), and appears to the computer as a single logical storage unit or drive. Furthermore, disk arrays were made fault-tolerant by redundantly storing information in various ways. Our approach is analogous to "disk mirroring", or RAID-1, which duplicates all content on a backup disk; so our approach would be called RAIL-1 in RAID terminology.
Similarly to RAID, we propose to replicate packets over multiple, relatively inexpensive, independent paths, i.e., to create a Redundant Array of Internet Links (RAIL), which appears to the application as a single "superior" link. To evaluate RAIL performance, we have built a prototype called RAILedge. We show that using RAIL yields better performance (both quality-of-service and reliability) than using any of the underlying paths alone. In addition, we evaluate the performance of applications, such as VoIP and TCP, over RAIL and seek to optimize relevant application-level metrics. In particular, we propose an additional mechanism, called delay padding, which complements RAIL when there is a significant disparity between the underlying paths.
There are several issues that need to be investigated. How much is the performance benefit from RAIL and how does it depend on the characteristics of the underlying paths? What is the tradeoff between performance benefit and the bandwidth cost of replicating every packet over multiple connections? How does RAIL interact with higher layers, such as TCP and VoIP applications? Does RAIL introduce reordering? How should one choose the links that constitute the RAIL, in a way that they complement each other and optimize application performance? In this paper, we address these questions.
With regard to the bandwidth cost, we argue that it is worthwhile and that RAIL is a simple, cost-efficient approach for achieving good quality-of-service over redundant paths. The first argument is from a cost point of view. As bandwidth gets cheaper and cheaper, combining multiple inexpensive links becomes competitive with buying a single, more expensive, private line. Furthermore, we show that two paths are sufficient to get most of the benefit. In addition, the cost of a connection is fixed rather than usage-based: once one pays the initial cost to get an additional connection to a second ISP (which companies using multi-homing have already done), there is no reason not to fully utilize it. The second argument is from a performance point of view, which may be a strict requirement for critical applications. RAIL-ing traffic over n paths provides more robustness to short-term "glitches" than dynamic path switching between the same n paths. This is because there are limits to how fast path switching mechanisms can (i) confidently detect glitches and (ii) react to them without causing instability in the network. For example, if a few VoIP packets are sporadically dropped, a path switching system should probably not react, while RAIL can still successfully deliver the copies of the lost packets arriving from the redundant paths.
Our findings can be summarized as follows.
• First, we demonstrate that proactively replicating packets over a Redundant Array of Internet Links (RAIL) significantly improves the end-to-end performance. We quantify the improvement in terms of network-level as well as application-level metrics. In this process, we use and derive analytical models for the performance of VoIP-over-RAIL and TCP-over-RAIL. We also use a working prototype of RAILedge.
• Second, we design and evaluate a delay padding mechanism to complement RAIL when there is a significant delay disparity among the underlying paths. This is useful both for VoIP (where it plays a proxy-playout role) and for TCP (where it may remove re-ordering).
• Third, we show that two paths provide most of the benefit, while additional paths bring decreasing benefits. The two preferred paths should be carefully selected based on their quality, similarity/disparity and correlation.
The structure of the rest of the paper is as follows. Section 2 discusses related work. Section 3 describes the RAILedge design, some implementation details, and the experimental setup. Section 4 evaluates the performance improvement brought by RAIL in terms of general network-level metrics (subsection 4.1), VoIP quality (subsection 4.2) and TCP throughput (subsection 4.3); we also study the sensitivity to the characteristics of the underlying paths. In this evaluation, we use analysis, Matlab simulation, actual packet traces collected over Internet backbones, and testbed experiments. Section 5 discusses the bigger picture, including possible extensions and open questions. Section 6 concludes the paper.
System Design
RAIL Mechanisms Overview
RAIL improves the packet delivery between two remote local area networks (LANs), by connecting them through multiple wide-area paths. The paths are chosen to be as independent as possible, e.g. belonging to different Internet Service Providers. Fig.1 shows an example of two disjoint paths: Link 1 goes through ISP-A and ISP-C, Link 2 goes through ISP-B and ISP-D. (The simplest configuration would be to have both LANs connected to the same two ISPs.) For simplicity, we describe the system using two paths only; the same ideas apply to n > 2 paths.
A RAILedge device is required to connect each LAN to the wide-area paths. Each packet that transitions from the LAN to the WAN, via the RAILedge, is replicated at the RAILedge and sent out both WAN links. Copies of the same packet travel in parallel through the different WAN links and eventually arrive at the receiving RAILedge. There are three possibilities: both copies arrive, one copy arrives or no copy arrives. The receiving RAILedge examines every packet coming in from the WAN and suppresses any duplicates; i.e. it forwards the first copy of each packet toward its destination but it discards any copies arriving later.
The result is clear: the probability of both copies being lost is reduced compared to using a single path, and the delay experienced is the minimum of the delay on each path. Overall, the application perceives a virtual RAIL link that is better than the underlying physical links.
In summary, the RAILedge performs three basic operations: (i) packet duplication, (ii) forwarding over all redundant Internet links, and (iii) duplicate suppression. RAILedge-to-RAILedge communication happens over VPN tunnels, to ensure that every RAIL-ed packet is received by the intended RAILedge. We implement tunneling with a simple encapsulation/decapsulation scheme; our header includes the ID of the sending RAILedge and a sequence number, which is used to suppress duplicates at the receiving RAILedge. All RAILedge operations are transparent to the end-user. The components of a RAILedge device are shown in Fig. 2 and the steps taken upon reception of a packet are summarized in Fig. 3.
There is a component of the RAILedge that we are not going to examine in this paper: link monitoring and selection. This module is responsible for monitoring the performance of every physical path, computing appropriate quality metrics, and choosing the best subset of paths to constitute the RAIL, over which packets should be replicated. Link monitoring and dynamic selection is a research problem in itself, with an extensive and growing literature. In this paper, we do not study dynamic path switching. Instead, we focus on (i) evaluating the replication of packets over all paths that constitute the RAIL under study and (ii) giving recommendations on how to statically select these paths. This is still useful for a typical use of RAIL: initially, the user compares different ISPs and decides which is the best set to subscribe to; after subscription, the user replicates packets over all ISPs.
Delay Padding
Delay padding is a mechanism that complements the basic RAIL mechanism when there is delay disparity between the paths. The idea is the following. The default behavior of the receiving RAILedge is to forward the first copy and discard all copies that arrive later. However, this may not always be the best choice when there is significant delay disparity between the two paths. In such cases, one can construct pathological scenarios where the default RAIL policy results in patterns of delay jitter that adversely affect the application. One example is VoIP: the playout buffering algorithm at the receiver tries to estimate the delay jitter and adapt to it. This playout algorithm is unknown to us and out of our control; even worse, it is most likely designed to react to delays caused by real single paths, not by virtual RAIL paths. For example, when path 1 is much faster than path 2, most of the time RAIL will forward the copies arriving from path 1. The playout buffer may adapt and closely match it, by choosing a playout deadline slightly above the delay of path 1. When packets are lost on the fast path, the copies arriving from the slow path will arrive too late to be played out and will be useless. In this scenario, a better use of the two paths would be to "equalize" the delay on the two paths by artificially delaying the packets arriving from the fast path, hence the name "delay padding". Essentially, delay padding acts as a proxy for playout, located at the RAILedge, and presents the receiver with the illusion of a roughly constant one-way delay. The main difference from a playout algorithm at the end-host is that delay padding does not drop packets that arrive late for playout.
Fig. 4 demonstrates the main idea of delay padding, for packets in the same VoIP flow. The goal is to minimize jitter, i.e. to make all packets experience the same, roughly constant, one-way delay D, shown as a straight line. For every packet i, two copies arrive: the first is marked with a circle, the second with a diamond. The actual time RAIL forwards the packet is marked with an "X". Without padding, RAIL would normally forward the first copy, which incurred one-way delay d_RAIL = min{delay_1, delay_2}. With padding, we compare d_RAIL to the target one-way delay D.
• In cases 1 and 2 (d_RAIL < D): we wait for an additional "padding" time D − d_RAIL before forwarding the packet.
• In case 3 (d_RAIL > D): we forward the packet immediately, without further delay. (A playout algorithm at the receiver would instead simply drop such late packets.)
The target one-way delay D is chosen so as to maximize the overall voice quality (MOS): D = argmax MOS(D_one-way). D should be chosen taking into account the statistics of the two paths and the delay budget, and adaptation of this value should happen only on much larger time scales. We discuss the choice of D to optimize MOS, as well as the performance improvement from delay padding, in the section on VoIP evaluation (4.2.1).
Delay padding may prove a useful mechanism for TCP as well. For example, it could be used to remove reordering, caused by RAIL for certain combinations of paths. This is discussed further in the section on reordering (4.1.4) and in the section on the effect of reordering on TCP in particular (4.3.2).
A practical implementation of delay padding for VoIP would require (i) the ability to identify voice packets and keep per-flow state and (ii) timing calculations in terms of relative instead of absolute one-way delay. An implementation of reordering removal for TCP would not necessarily require per-flow state; it could simply use the sequence numbers on the aggregate flow between the two RAILedges.
RAIL Prototype and Experimental Setup
In order to evaluate RAIL performance, we developed a RAILedge prototype that implements the functionality described in Section 3.1. Our prototype runs on Linux and consists of a control-plane and a data-plane agent, both running in user space. All routing and forwarding functionality is provided by the Linux kernel. The control plane is responsible for configuring the kernel with static routes and network interfaces. The data plane is responsible for the packet processing, i.e. encapsulation/decapsulation, duplication, duplicate suppression and delay padding. In particular, the kernel forwards each received packet to the data-plane agent, which processes it appropriately and forwards it back to the kernel for regular IP forwarding, see Fig.2.
Our user-space prototype is sufficient for a network connected to the Internet through a T1 or T3 line (without considering duplicate packets, a RAILedge running on a 1.9 ...). We used Netem [21] on interfaces eth2 and eth3 to emulate the properties of wide-area networks in a controlled way. The current version of Netem emulates variable delay, loss, duplication and re-ordering, and is enabled in the Linux kernel. We also emulated WAN links of various bandwidths, using the rate-limiting functionality in Linux (iproute2/tc).
Performance evaluation
In section 4.1, we show that RAIL outperforms any of the underlying physical paths in terms of network-level metrics, i.e. it reduces loss, delay/jitter, it improves availability and it does not make reordering any worse than it already is in the underlying paths. In sections 4.2 and 4.3 we look at the improvement in terms of application-level metrics for VoIP (MOS) and TCP (throughput); we also look at how this improvement varies with the characteristics, combinations and number of underlying paths.
RAIL improves network-level metrics
RAIL statistically dominates any of the underlying paths, i.e. it presents the end-systems with a virtual path with better statistics in terms of network-level metrics (loss, delay, jitter and availability). This is intuitively expected: at the very least, RAIL could use just one of the paths and ignore the other, so having more options should only improve things. A natural consequence is that any application performance metric calculated using these statistics (e.g. loss rate, average delay, jitter percentiles) should also be improved by RAIL; we found this to be indeed the case when computing metrics for VoIP and TCP. In addition to the statistics, we also looked at pathological sample paths, e.g. cases where reordering or special patterns of jitter may arise; we show that RAIL does not make things worse than they already are and that delay padding is able to handle these cases.
Figure 6: The effect of shared loss. Two paths share a segment with loss rate p_shared and have independent segments, each with loss rate p_1 = p_2 = p; the plot shows the end-to-end p_RAIL and p_single vs. p, for various values of p_shared.
Loss
Clearly, RAIL decreases the average packet loss rate from p_1, p_2 to p = p_1 p_2, for independent paths. One can derive some useful rules of thumb based on this simple fact.
Number of paths. Given that actual loss rates are really small in practice (p_i << 0.1), every new independent path reduces the loss p = p_1 p_2 ... p_n by at least an order of magnitude. For similar paths (p_1 = ... = p_n = p), it is easy to see that the loss probability P_RAIL(k) = p^k is a decreasing and convex function of the number of paths k. Therefore, most of the benefit comes from adding the 2nd path, and additional paths bring only decreasing returns. However, adding a second path with a significantly different (smaller) loss rate dominates the product and makes a big difference.
Correlation. In practice, the physical paths underlying RAIL may overlap. For example, consider two paths that share a segment with loss rate p_shared, and also have independent segments with p_1 = p_2 = p. Loss is experienced on a single path w.p. p_single = 1 − (1 − p)(1 − p_shared). Loss is experienced over RAIL w.p. p_RAIL = 1 − (1 − p^2)(1 − p_shared). Fig. 6 plots p_RAIL vs. p for various values of p_shared. Clearly, p_RAIL increases in both p and p_shared. The lossier the shared part p_shared is compared to the independent part p, the less improvement we get from using RAIL (the curves for p_RAIL and p_single get closer and closer). Therefore, one should not only look at the end-to-end behavior of candidate paths, but also at the quality of their shared part, and choose a combination of paths that yields the lowest overall p_RAIL.
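To make these rules of thumb concrete, the following sketch (written for illustration; the loss values are arbitrary assumptions) computes p_RAIL for fully independent paths and for two paths with a shared segment:

# Illustrative loss calculations for RAIL over independent and partially
# overlapping paths; the numeric loss rates are assumptions for illustration.

def loss_independent(*path_losses):
    """Loss over RAIL with fully independent paths: all copies must be lost."""
    p = 1.0
    for pi in path_losses:
        p *= pi
    return p

def loss_with_shared_segment(p, p_shared):
    """Two paths sharing a segment (loss p_shared) plus independent segments
    with loss p each. A packet is lost unless the shared segment delivers it
    and at least one independent segment delivers it."""
    p_single = 1 - (1 - p) * (1 - p_shared)
    p_rail = 1 - (1 - p**2) * (1 - p_shared)
    return p_single, p_rail

if __name__ == "__main__":
    print(loss_independent(0.02, 0.02))          # ~4e-4: two independent paths
    print(loss_independent(0.02, 0.02, 0.02))    # ~8e-6: diminishing extra gain
    print(loss_with_shared_segment(0.02, 0.01))  # shared loss limits the benefit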
RAIL also decreases the burstiness in loss. Due to lack of space, we omit the analysis and refer the reader to section 4.2.3, for testbed experiments that demonstrate this fact.
Availability
The simplest way to view a "failure" is as a long lasting period of loss, and we can talk about the percentage of time a path spends in failure. Then, the arguments we made for loss in the previous section apply here as well. E.g. for RAIL to fail, both paths must fail; the downtime reduces fast with the number and quality of paths. Note that RAIL not only reduces the time we spend in a "bad period", but also improves the user experience from "bad" to "medium" during that period. We demonstrate this in detail in the VoIP section (in particular see Table 2).
Delay and Jitter
When a packet i is RAIL-ed over two independent paths, the two copies experience one-way delays d_1(i) and d_2(i), and the packet forwarded by RAIL (the copy that arrived first) experiences d(i) = min{d_1(i), d_2(i)}. If the cumulative distribution function (CDF) of d_j, j = 1, 2, is F_j(t) = Pr[d_j ≤ t], then the delay CDF for RAIL is:
F(t) = Pr[d ≤ t] = Pr[min{d_1, d_2} ≤ t] = 1 − Pr[d_1 > t and d_2 > t] = 1 − (1 − F_1(t))(1 − F_2(t)).   (1)
It is easy to see that RAIL statistically dominates either of the two paths. Indeed, the percentage of packets experiencing delay more than t over RAIL is 1 − F(t) = (1 − F_1(t))(1 − F_2(t)), which is smaller than the percentage of packets exceeding t on either of the two links, 1 − F_i(t). This means that the entire delay CDF is shifted higher and to the left, thus F dominates F_1 and F_2. Any quality metrics calculated based on these statistics (e.g. the average delay, percentiles, etc.) will be better for RAIL than for either of the two paths. Rather than plotting arbitrary distributions at this point, we choose to demonstrate the delay and jitter improvement in some practical scenarios considered in the VoIP section (4.2).
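The min-of-two-copies effect in Eq. (1) is easy to reproduce numerically; the sketch below (the delay distributions are arbitrary assumptions, used only to illustrate the CDF shift) computes the per-packet RAIL delay and compares the empirical CDFs:

# Illustrative check of Eq. (1): the RAIL delay is the minimum of the two
# per-copy delays, so its CDF dominates the CDFs of the individual paths.
# The delay distributions below are assumptions chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d1 = 40 + rng.exponential(10, n)   # path 1: 40ms base delay + more jitter
d2 = 60 + rng.exponential(5, n)    # path 2: 60ms base delay + less jitter
d_rail = np.minimum(d1, d2)        # the copy that arrives first wins

for t in (50, 60, 70, 80):
    f1 = np.mean(d1 <= t)
    f2 = np.mean(d2 <= t)
    f_rail = np.mean(d_rail <= t)
    # Eq. (1): F(t) = 1 - (1 - F1(t)) * (1 - F2(t)); this holds when the two
    # paths' delays are independent, as assumed here.
    print(t, round(f1, 3), round(f2, 3), round(f_rail, 3),
          round(1 - (1 - f1) * (1 - f2), 3))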
Reordering
An interesting question is whether RAIL introduces reordering, which may be harmful for TCP performance. Fig. 7(a) shows an example of an out-of-order sequence of packets forwarded by the receiving RAILedge: (3,5,4). The same argument holds for any sequence (i, k, j) with i < j < k. Packets 3 and 5 must have arrived through different paths (otherwise one of the paths would have dropped packet 4 or reordered it). Say 3 arrives from the top path and 5 from the bottom path. Then the copy of 3 sent on the bottom path must have arrived between 3 and 5 (otherwise RAIL would have forwarded the bottom copy of 3 first). What happened to packet 4 sent on the bottom path? If it arrived between 3 and 5, then there would be no out-of-order delivery at RAIL; if it arrived after 5, then the bottom path would have reordered 4 and 5, which we assumed is not the case; and we have assumed that 4 is not dropped either. We reached a contradiction, which means that RAIL cannot reorder packets if both paths are well behaved to start with.
Proposition 2. RAIL may translate loss on the faster path into late arrivals from the slower path. If the inter-packet spacing at the sender is smaller than the delay difference of the two paths, then the packets arrive out of order.
Example. In Fig. 7(b), we consider paths 1 and 2, with one-way delays d_1 < d_2. Two packets n and m are sent with spacing dt between them. If packet n is lost on the fast path, and dt ≤ d_2 − d_1, then n will arrive at the RAILedge after m and the RAILedge will forward them out of order. The larger the delay difference d_2 − d_1 and the smaller the spacing dt between packets, the larger the reordering gap.
Fact 3. Better late than never.
Discussion. For VoIP, it does not hurt to receive packets late, as opposed to not receiving them at all. However, out-of-order packets may potentially hurt TCP performance. Testbed experiments, in section 4.3.2, show that TCP performs better when x% of packets arrive out of order than when x% of packets are lost. Furthermore, the delay padding component is designed to handle the timely delivery of packets. We will revisit this fact in section 4.3.2.
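A toy simulation of the duplicate-suppression logic makes Proposition 2 tangible; the sketch below (packet spacing and path delays are assumed values chosen for illustration) drops one packet on the fast path and shows that the surviving slow-path copy is forwarded out of order when dt ≤ d_2 − d_1:

# Toy illustration of Proposition 2: losing a packet on the fast path turns
# into a late (and possibly out-of-order) arrival from the slow path.
# The spacing and delay values are assumptions chosen for illustration.

def rail_forwarding_order(n_packets, dt, d1, d2, lost_on_fast):
    """Return packet sequence numbers in the order the receiving RAILedge
    forwards them (first copy wins, duplicates are suppressed)."""
    arrivals = []  # (arrival_time, seq)
    for seq in range(n_packets):
        send_time = seq * dt
        if seq not in lost_on_fast:
            arrivals.append((send_time + d1, seq))   # fast-path copy
        arrivals.append((send_time + d2, seq))       # slow-path copy
    arrivals.sort()
    forwarded, seen = [], set()
    for _, seq in arrivals:
        if seq not in seen:          # duplicate suppression
            seen.add(seq)
            forwarded.append(seq)
    return forwarded

# dt = 10ms, d1 = 20ms, d2 = 45ms, packet 2 lost on the fast path:
# dt <= d2 - d1, so packet 2 is forwarded after packets 3 and 4.
print(rail_forwarding_order(6, dt=10, d1=20, d2=45, lost_on_fast={2}))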
RAIL improves VoIP performance
Voice-over-IP Quality
A subjective measure used to assess Voice-over-IP quality is the Mean Opinion Score (MOS), which is a rating on a scale from 1 (worst) to 5 (best) [22]. Another, equivalent metric is the I rating, defined in the E-model [23]. [23] also provides a translation between I and MOS; in this paper, we convert and present voice quality on the MOS scale only, even when we do some calculations on the I scale.
VoIP quality has two aspects. The first is speech quality, and it depends primarily on how many and which packets are dropped in the network and/or at the playout buffer. [23,24] express the speech quality as a function of the packet loss rate, MOS_speech(loss rate), for various codecs. The second aspect of VoIP quality is interactivity, i.e. the ability to comfortably carry on an interactive conversation; [?] expresses this aspect as a function of the average one-way delay, MOS_interactivity(avg delay), for various conversation types. These two aspects can be added together (on the appropriate I scale defined in [23]) to give an overall MOS rating: MOS = MOS_speech + MOS_interactivity. This is the metric we use throughout this section.
We do not present the details of these formulas in this submission, due to lack of space. The interested reader is referred to the ITU-T standards [23,24,25] or to comprehensive tutorials on the subject [26,27]. What the reader needs to keep in mind is that there are formulas or tables for MOS_speech(loss rate) and MOS_interactivity(avg delay), and that MOS = MOS_speech + MOS_interactivity. This is a commonly used methodology for assessing VoIP quality, e.g. see [26,7]. Fig. 8 shows contours of MOS as a function of loss and delay based on the data provided in the ITU-T standards, considering the G.711 codec and free conversation.
The effect of playout. In assessing VoIP, one should take into account the playout algorithm at the receiver, which determines the playout deadline D_playout: packets with one-way delay exceeding D_playout are dropped. As D_playout increases, the one-way delay increases (thus making interactivity worse), but fewer packets are dropped due to late arrival for playout (thus making speech quality better). Therefore, there is a tradeoff in choosing D_playout, and one should choose D_opt = argmax MOS(D_playout). This tradeoff is depicted in Fig. 8 and is also responsible for the shape of the MOS(D_one-way) curves of Fig. 10, which clearly have a maximum at D_opt. The value D_opt depends on the loss, delay and jitter of the underlying paths, as well as on the delay budget consumed in components other than the playout. Recall that D_playout is only a part of the total D_one-way = D_end-systems + D_network + D_playout, and that packets arriving late contribute to the total loss (packet loss = network loss + Pr[d > D_playout]).
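The paper does not reproduce the ITU-T formulas, so the sketch below uses a commonly cited E-model-style approximation as a stand-in for MOS_speech(loss) and MOS_interactivity(delay) (the coefficients and the synthetic delay trace are assumptions, not the paper's data or exact formulas); it then sweeps D_playout to locate the maximum of the resulting MOS curve:

# Sketch of the D_playout tradeoff. The R-factor/MOS approximation below is a
# textbook E-model-style stand-in, not the exact ITU-T formulas used in the
# paper; the delay trace is synthetic. Only the qualitative shape matters.
import random

def mos_from_loss_and_delay(loss, one_way_delay_ms):
    # Equipment impairment for a G.711-like codec under random loss
    # (Ie_eff ~ 95 * Ppl / (Ppl + Bpl) with Bpl ~ 4.3%; loss is a fraction).
    ie = 95 * loss / (loss + 0.043)
    # Delay impairment: roughly linear, with an extra penalty beyond ~177ms
    # (a widely used simplification of the ITU-T G.107 E-model).
    d = one_way_delay_ms
    idd = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    r = max(0.0, min(100.0, 93.2 - ie - idd))
    # Standard R-to-MOS conversion.
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

random.seed(1)
network_delay = [40 + random.expovariate(1 / 15.0) for _ in range(20_000)]  # ms
d_end_systems = 70.0  # ms assumed consumed at the end-systems

best = max(
    (mos_from_loss_and_delay(
        loss=sum(d > dp for d in network_delay) / len(network_delay),  # late = lost
        one_way_delay_ms=d_end_systems + dp),
     dp)
    for dp in range(40, 201, 5)
)
print("best MOS %.2f at a playout deadline of %d ms" % best)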
The effect of RAIL. In the previous section, we saw that RAIL decreases (i) the loss rate, (ii) the average delay and (iii) the percentage of late packets. Therefore, it also improves the MOS, which is a function of these three statistics.
Railing VoIP over representative Internet Paths
In this section, we use realistic packet traces to simulate the behavior of WAN links. In particular, we use the packet traces provided in [28], which were collected over the backbone networks of major ISPs by sending probes that emulate G.711 traffic. Fig. 9(a) and (b) show the delay experienced on two paths between San Jose, CA and Ashburn, VA. The two paths belong to two different ISPs and experience different delay patterns. Fig. 9(c) shows the one-way delay experienced by packets RAIL-ed over these two paths. Packets were sent every 10ms.
Although there is no network loss on these example paths, packets may still be dropped if they arrive after their playout deadline. Because the playout is outside the control of the RAILedge, we consider the entire range of fixed one-way playout deadlines (out of which 70ms are considered consumed at the end-systems). The resulting MOS is shown in Fig. 10 as a function of D_one-way. Notice that the MOS curve for RAIL is higher than both curves corresponding to the individual links, over the entire range of delays considered.
In general, RAIL always improves VoIP quality because it presents the application with a better virtual path in terms of loss, delay and jitter. (The curve MOS(D_one-way) has a maximum, which corresponds to the playout deadline D_opt that optimizes the loss-delay tradeoff in the overall MOS.) However, the relative improvement of RAIL vs. a single path depends (i) on the behavior of the two paths and (ii) on the playout algorithm.
This was just an illustrative example of RAIL over two specific paths. We now consider additional representative traces and their combinations using RAIL. We consider six packet traces from [28], shown in Fig. 11. We call the traces "good", "medium" and "bad", to roughly describe the VoIP performance they yield. We then considered pairs of paths for all combinations of good/medium/bad quality, by choosing one trace from the left and a second trace from the right of Fig. 11. Table 2 shows the MOS for each of the 6 paths, as well as for these 9 combinations using RAIL. One can see that the combined (RAIL) link provides one "class" better quality than any of the individual links, i.e. there is one class of service improvement. This is intuitively expected, because RAIL multiplexes and uses the best of both paths. In addition, we did in-house informal listening tests: we simulated the transmission of actual speech samples over these traces and had people listen to the reconstructed sound. It was clear that the RAIL-ed sample sounded much better.
Figure 11: Six representative packet traces, collected over wide-area paths of Internet backbones [28]. We plot one-way delay vs. packet sequence number; when a packet is lost we give it a 0 value.
Table 2: Voice quality (in terms of MOS score) for the 6 representative paths, and for their 9 combinations using RAIL.
Notice that this quality improvement is in addition to the availability improvement in Table 1: not only does RAIL reduce the time spent in "bad/medium" periods, but it also improves the experience of the user during those periods, from "bad" to "medium" and from "medium" to "good".
Testbed experiments for VoIP-over-RAIL
In this section, we use our testbed to demonstrate the improvement that RAIL brings to VoIP quality over the entire range of path conditions. We used Netem to control the loss and delay parameters of each path, and sent probes to emulate the transmission of voice traffic. First, we looked at the loss rate. We applied uniform loss with the same loss rate p on both paths, from 1 to 20%, which is quite high but may happen during short periods of bursty loss. As expected, the voice stream experiences loss rate p^2 when transmitted over RAIL, and p over a single link. Indeed, in Fig. 12(a), the measured 45-degree red line (for a single link) agrees with p, and the measured blue line (for RAIL) agrees with the theoretical p^2 dashed purple line. This loss reduction results in a speech quality improvement of up to 1.5 units of MOS. Fig. 12(b) shows that the MOS (averaged over the entire duration) is practically constant when we use RAIL, while the MOS over a single link decreases rapidly with increasing loss rate. A side benefit is that speech quality varies less over time, which is less annoying for the user.
Second, we looked at the burstiness of loss, which is an important aspect because it can lead to the loss of entire phonemes, thus degrading speech intelligibility. To control burstiness, we controlled the "correlation" parameter in Netem. (The Netem correlation coefficient does increase loss burstiness, but does not directly translate into burstiness parameters such as burst duration. An artifact of the implementation [21] is that increasing the correlation decreases the measured loss rate, for loss rates below 50%. However, this does not matter here: our point is to compare RAIL to a single path under the same loss conditions.) We tried all combinations of (loss rate, loss correlation) and measured the following metrics for bursty loss: (i) number of packets lost in bursts, (ii) number of bursts, (iii) average burst size and (iv) maximum burst size. In Tables 3, 4 and 5, we show the numbers measured over one link in regular font, and the numbers measured over RAIL in bold. Clearly, all metrics are significantly reduced with RAIL compared to the single-path case, which demonstrates that RAIL reduces loss burstiness. This good property is intuitively expected, as it is less likely that both paths will experience a burst at the same time.
Table 5: Maximum burst size (i.e. maximum number of consecutive packets lost) on a single path (in regular font) vs. RAIL (in bold font). The average burst size for RAIL is 1 in most cases.
Third, we experimented with delay jitter. We considered two paths with the same mean delay (100ms), and we used Netem to generate delay according to a paretonormal distribution; we generated delay on both paths according to the same statistics. We fixed the mean delay at 100ms for both paths and experimented with the entire range of delay variability (standard deviation from 10ms to 100ms and delay correlation from 0% to 100%).
We first set the delay correlation to 0 and increased the standard deviation of the delay. We observed that RAIL reduces the jitter experienced by the VoIP stream. This results in fewer packets being late for playout and thus better speech quality. The exact improvement depends (i) on the delay variability of the underlying paths (captured here by the standard deviation of the delay) and (ii) on the playout at the receiver (captured here by the jitter allowed at the playout). Fig. 13 shows the improvement in speech quality (in MOS) compared to a single path, for a range of these two parameters (standard deviation 20-80ms and jitter level acceptable at playout 20-100ms). One can make several observations. First, RAIL always helps (i.e. the benefit is > 0); this is because RAIL presents the end-system with a better virtual path. Second, there is a maximum in every curve (every curve corresponds to a certain path delay variability): when the playout is intolerant to jitter, it drops most packets anyway; when the playout can absorb most of the jitter itself, the help of RAIL is not needed; therefore, RAIL provides most of its benefit in the middle, when it is needed to reduce the perceived jitter below the threshold acceptable for playout. Finally, the entire curve moves to the right and lower for paths with higher delay variability.
In addition, we experimented with delay correlation (which results in several consecutive packets arriving late and being dropped at the playout) and we observed that RAIL decreased this correlation by multiplexing the two streams. Finally, we experimented with RAIL-ed VoIP and several non-RAILed TCP flows interfering with it. The idea was to have loss and delay caused by cross-traffic rather than artificially injected by Netem. RAIL brought improvements of the same order of magnitude as observed before.
Figure 15: The larger the delay disparity between the two paths, the more padding is needed.
Delay Padding
The delay padding algorithm, described in section 3.2, acts as a proxy playout at the receiving RAILedge: it artificially adds delay ("padding") in order to create the illusion of a constant one-way delay. In this section, we use a Matlab simulation to demonstrate the effect of padding. Fig. 14 considers the case when the two paths differ in their average delay; this can be due, e.g., to a difference in propagation and/or transmission delay. Notice the difference between (b), RAIL without padding, and (c), RAIL with padding. Fig. 15 shows that the larger the disparity between the two paths, the more padding is needed to smooth out the stream. Fig. 16 considers the case when two paths have the same average delay but differ significantly in delay jitter, e.g. due to different utilization. Fig. 16(a) plots the delay on the two paths on the same graph; Fig. 16(b) shows what RAIL does without padding; Fig. 16(c) and (d) show that the stream can be smoothed out by adding more padding. The appropriate amount of padding should be chosen so as to maximize the overall MOS, as discussed in section 4.2.1.
Figure 16: Padding decreases jitter for RAIL over paths with the same average delay (100ms) but different jitter (stddev = 20ms, 5ms). The more padding, the less jitter.
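The exact padding algorithm of section 3.2 is not reproduced here; the sketch below is only a minimal illustration of the idea, assuming a fixed target one-way delay: each forwarded packet is held until the target delay is reached, and packets that already exceed it are passed through late:

# Minimal sketch of delay padding at the receiving RAILedge, assuming a fixed
# target one-way delay. This is an illustration of the idea only, not the
# algorithm of section 3.2 (which chooses the target to maximize MOS).

def pad_delays(one_way_delays_ms, target_ms):
    """Return the delay each packet experiences after padding: packets faster
    than the target are held back so the application sees a constant delay;
    packets slower than the target are passed through unchanged (late)."""
    return [max(d, target_ms) for d in one_way_delays_ms]

# RAIL-ed delays over two paths with different average delay (illustrative):
raw = [42, 44, 41, 95, 43, 96, 42, 44]   # mostly fast-path copies, two slow-path arrivals
padded = pad_delays(raw, target_ms=100)
print(padded)                     # a constant 100ms for every packet here
print(max(padded) - min(padded))  # residual jitter after padding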
RAIL improves TCP performance
In section 4.1, we saw that RAIL statistically dominates the underlying paths in terms of network-level statistics. Therefore, performance metrics computed from these statistics, such as the average throughput, should also improve. In section 4.3.1, we analyze the throughput of long-lived TCP flows and show that this is indeed the case. However, there may be pathological cases, e.g. when reordering falsely triggers fast retransmit; this is what we study in section 4.3.2, where we show that, for most practical cases, RAIL helps TCP as well.
Analysis of long-lived TCP-over-RAIL
A simple formula. Let us consider two paths with loss rates and round-trip times (p_1, RTT_1) and (p_2, RTT_2) respectively, and w.l.o.g. RTT_1 ≤ RTT_2. The simple rule of thumb from [29] predicts that the long-term TCP throughput for each path is:
T_i = 1.22 / (RTT_i √p_i), for i = 1, 2.
Figure 17: The simple steady-state model for TCP [29].
What is the long-term TCP throughput using RAIL over these two paths? Following a reasoning similar to [29], we find that:
T = 1.22 / (E[RTT] √(p_1 p_2)),   (2)
where:
E[RTT] = RTT_1 (1 − p_1)/(1 − p_1 p_2) + RTT_2 p_1 (1 − p_2)/(1 − p_1 p_2).   (3)
Proof. Fig. 17 shows the simple steady-state model considered in [29]. The network drops a packet when the congestion window reaches W packets. The congestion window is then cut in half (W/2) and increases by one packet per round-trip time until it reaches W packets again; at that point, the network drops a packet again and the steady-state model repeats. Let us look at a single congestion epoch.
For this simple model, the number of packets sent during a congestion epoch is
W/2 + (W/2 + 1) + ... + (W/2 + W/2) = 3W^2/8 + 3W/4.
For a packet to be lost over RAIL, both copies sent over the two paths must be lost. Therefore, the loss rate is
p = p_1 p_2 = 1 / (number of packets) = 1 / (3W^2/8 + 3W/4) ≈ 8 / (3W^2), hence W ≈ √(8 / (3 p_1 p_2)).
The only difference from [29] is that the round-trip time as perceived by TCP-over-RAIL is no longer constant, but depends on whether a packet is lost on one of the paths. Provided that the packet is received on at least one path, which happens with probability 1 − p_1 p_2, we are still in the same congestion epoch and
RTT = RTT_1 w.p. (1 − p_1),  RTT = RTT_2 w.p. p_1 (1 − p_2).   (4)
Therefore, the conditional expectation of the RTT is given by Eq. (3), and the TCP throughput over RAIL is on average:
T = (number of packets) / ((W/2 + 1) · E[RTT]) ≈ 1.22 / (E[RTT] √(p_1 p_2)).   (5)
Essentially, RAIL appears to the TCP flow as a virtual path with loss rate p = p_1 p_2 and round-trip time E[RTT]. Notice that there are two factors to take into account in Eq. (2): a multiplication in loss (p_1 p_2) and an averaging in delay (E[RTT]). The loss for RAIL is smaller than that of either of the two links: p < p_1, p < p_2. The same is not true for the delay, which is a weighted average: RTT_1 ≤ E[RTT] ≤ RTT_2.
Implications. Let us now use this simple formula to study the sensitivity of TCP-over-RAIL throughput to the characteristics of the underlying paths; in particular, TCP-over-RAIL outperforms TCP over either individual path in all practical cases. First, consider RTT_1 = RTT_2 = RTT. Then, the RAIL link is equivalent to a single link with p = p_1 p_2, which is better than either of the two by an order of magnitude. What happens when RTT_1 < RTT_2? It is easy to see that RAIL is better than the slower path (2), because RAIL has both smaller loss and shorter RTT than the slow path:
T / T_2 = (1/√p_1) · (RTT_2 / E[RTT]) > 1 · 1 = 1.   (6)
Is RAIL better than the faster path (1) as well? RAIL is better in terms of loss but worse in terms of delay (E[RTT] > RTT_1). It turns out that the multiplicative decrease in loss dominates the averaging in delay. In Fig. 18, we consider p_1 = p_2 = p, fix RTT_1 = 10ms, and consider the full range of p and RTT_2. We plot the ratio of the throughput for TCP-over-RAIL to that of TCP over the fast link:
T / T_1 = (1/√p) · (RTT_1 / E[RTT]), where 1/√p > 1 and RTT_1 / E[RTT] = (1 + p) / (1 + p · RTT_2/RTT_1) ≤ 1.   (7)
We see that TCP does 4-10 times better over RAIL than over the fast link (1) for all practical cases: loss rates up to 10% and delay differences up to 100ms. Indeed, the difference in RTT cannot exceed some tens of milliseconds (e.g. due to propagation or transmission), and p should be really small, except for short time periods.
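Equations (2), (3) and (7) are easy to evaluate numerically; the sketch below (path parameters are illustrative assumptions) computes the predicted throughput ratio of TCP-over-RAIL to TCP over the fast path:

# Sketch: evaluate the long-lived TCP throughput model of Eqs. (2)-(3) for
# RAIL over two paths, and compare against the faster single path.
# The path parameters below are assumptions chosen for illustration.
from math import sqrt

def tcp_throughput_single(p, rtt):
    """Rule-of-thumb long-term throughput from [29] (packets per second)."""
    return 1.22 / (rtt * sqrt(p))

def tcp_throughput_rail(p1, rtt1, p2, rtt2):
    """Eqs. (2)-(3): RAIL looks like a path with loss p1*p2 and delay E[RTT]."""
    e_rtt = (rtt1 * (1 - p1) + rtt2 * p1 * (1 - p2)) / (1 - p1 * p2)
    return 1.22 / (e_rtt * sqrt(p1 * p2))

for p in (0.01, 0.05, 0.10):
    for rtt2 in (0.02, 0.05, 0.11):          # seconds; fast path fixed at 10ms
        ratio = tcp_throughput_rail(p, 0.010, p, rtt2) / \
                tcp_throughput_single(p, 0.010)
        print(f"p={p:.2f} RTT2={rtt2*1000:.0f}ms  T_RAIL/T_fast={ratio:.1f}")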
How many paths? For n paths with characteristics (p_i, RTT_i), i = 1..n, where RTT_1 < RTT_2 < ... < RTT_n, and following similar derivations, we find that:
T(n) = 1.22 / (E[RTT] √(p_1 p_2 ... p_n)),
where:
E[RTT] = [RTT_1 + RTT_2 p + ... + RTT_n p^(n−1)] (1 − p) / (1 − p_1 ... p_n).   (8)
The multiplicative factor √(p_1 ... p_n) dominates the averaging in E[RTT]. Also, paths with large RTTs have discounted contributions. For p_1 = p_2 = ... = p_n, T(n) is a convex increasing function of n, which implies that adding more paths of similar loss rate improves throughput, but with decreasing increments.
Testbed Experiments on Reordering and TCP
In section 4.1.4, we saw that RAIL does not introduce reordering if both paths are well behaved, but may convert loss on the fast path into late, and in the extreme even out-of-order, packets under some conditions (dt ≤ d_2 − d_1). It is well known that reordering may have an adverse effect on TCP, as it falsely triggers fast retransmit. In this section, we use testbed experiments to show that, even in the cases where RAIL converts loss to reordering, this is actually beneficial for TCP. Recall that RAIL does not cause reordering; it only translates loss into reordering. Therefore, the fair question to ask is not how "TCP does with reordering vs. without reordering" but instead how "TCP does with x% of packets arriving out-of-order vs. x% of packets being lost".
Fact 3 revisited. Better late than never (and the earlier the better). We used the simplified testbed shown in Fig. 19 to inject a controlled amount of loss and reordering, using Netem, on a single TCP flow. Fig. 20 shows the results of the comparison. First, we introduced x% loss, ranging from 0 to 20%; the resulting TCP throughput is shown as a dashed line. Then we introduced x% reordering for a range of reordering gaps/delays, i.e. the packets arrive 10-90ms later than they should; the resulting TCP throughput is shown as a separate bold line for each delay value. We see that TCP performs much better with reordering than with loss; therefore it is indeed better to receive packets "late than never". Not surprisingly, the smaller the delay in delivery, the better the performance.
Furthermore, TCP today has several default options to deal with reordering, including SACK, DSACK and timestamps. We found that turning SACK on further improved the performance of TCP under reordering in Fig. 20. In summary, we expect RAIL to help TCP in all practical cases, i.e. for small loss rates and delay differences between the paths on the order of 10-50ms. As an extreme measure, one can use the delay padding mechanism not only for voice, but also as a TCP ordering buffer to completely eliminate reordering.
Future Directions
We envision a RAIL-network architecture, where RAILedges are control points that use path redundancy, route control and application-specific mechanisms, to improve WAN performance.
A first extension has to do with topology. So far, we considered two RAILedge devices connecting two remote sites via multiple redundant links. We envision that this can be generalized to a virtual multipoint network, or RAILnet, where multiple edge networks are reliably interconnected to each other, as shown in Fig. 21. Each participating edge network is located behind its own RAILedge, and each RAILedge pair communicates over at least two Internet links. The RAILnet interface represents the local point of attachment to a RAILnet and should present itself as a regular interface to a multi-access subnet.
Figure 21: RAILnet: a virtual multipoint reliable network.
Second, we are interested in combining the proactive replication of RAIL with some kind of route control, in particular (i) selection of the right subset of physical paths within the same RAIL and (ii) dynamic switching among them. In this paper, we focused on the first part (i.e. we looked at combinations of paths with various characteristics, at different numbers of paths, and at paths that are similar to or different from each other) and tried to give recommendations on how to statically select among them. The second aspect is dynamic switching among sets of paths. We expect this to be less constrained than single-path switching, because (i) redundant transmission is robust to short-lived problems and (ii) physical paths tend to have consistent behavior over long time scales. Therefore, RAIL should relieve much of the urgency in dynamic path-switching decisions.
One could further enhance the functionality of the RAILedge. So far, we focused on the replication of packets over multiple paths. Several other functions could naturally be added to an edge network device, including monitoring and path switching, compression, quality-of-service mechanisms, and protocol-specific acceleration. For example, one could decide to RAIL part of the traffic (e.g. VoIP or critical applications) and use striping for the remaining traffic; this would correspond to RAIL-0 in the RAID taxonomy [15].
There are some additional interesting questions that we are currently pursuing as direct extensions of this work. First, we continue to study TCP over RAIL, using more accurate TCP models and also considering short-lived connections; we are also working on a modification of our delay-padding algorithm to remove reordering at the receiving RAILedge. Second, we are investigating the effect of RAIL on the rest of the traffic. For example, when there is significant disparity in bandwidth, we expect RAIL-ed TCP to cause congestion on the limited-bandwidth path. Furthermore, what is the interaction between competing RAILs? Finally, it would be interesting to explore the benefit of adding additional RAILedges in the middle of the network.
The RAILnet architecture can be incrementally deployed by gradually adding more RAILedges. If widely deployed, it has the potential to fundamentally change the dynamics and economics of wide-area networks.
Conclusion
We proposed and evaluated the Redundant Array of Internet Links (RAIL), a mechanism for improving packet delivery by proactively replicating packets over multiple Internet links. We showed that RAIL significantly improves performance in terms of network-level as well as application-level metrics. We studied different combinations of underlying paths and found that most of the benefit comes from two carefully chosen paths; we also designed a delay padding algorithm to hide significant disparities among paths. RAIL can be gracefully combined with, and greatly enhances, other techniques currently used in overlay networks, such as dynamic path switching. Ultimately, it has the potential to greatly affect the dynamics and economics of wide-area networks.
| 8,166 |
math0612388
|
2952450694
|
We study Semidefinite Programming (SDP) relaxations for Sensor Network Localization, with anchors and with noisy distance information. The main point of the paper is to view the problem as a (nearest) Euclidean Distance Matrix (EDM) completion problem and to show the advantages of using this latter, well studied model. We first show that the current popular relaxation is equivalent to known relaxations in the literature for EDM completions. The existence of anchors in the problem is not special. The set of anchors simply corresponds to a given fixed clique for the graph of the problem. We next propose a method of projection when a large clique or a dense subgraph is identified in the underlying graph. This projection reduces the size, and improves the stability, of the relaxation. In addition, viewing the problem as an EDM completion problem yields better low-rank approximations for the low-dimensional realizations. And the projection reduction procedure can be repeated for other given cliques of sensors or for sets of sensors where many distances are known. Thus, further size reduction can be obtained. Optimality and duality conditions and a primal-dual interior-exterior path-following algorithm are derived for the relaxations. We discuss the relative stability and strength of two formulations and the corresponding algorithms that are used. In particular, we show that the quadratic formulation arising from the relaxation is better conditioned than the linearized form that is used in the literature and that arises from applying a Schur complement.
|
The geometry of EDMs has been extensively studied in the literature, e.g. @cite_22 @cite_24 , and more recently in @cite_3 @cite_10 and the references therein. The latter two references studied algorithms based on SDP formulations of the EDM completion problem.
|
{
"abstract": [
"",
"Abstract A partial pre-distance matrix A is a matrix with zero diagonal and with certain elements fixed to given nonnegative values; the other elements are considered free . The Euclidean distance matrix completion problem chooses nonnegative values for the free elements in order to obtain a Euclidean distance matrix, EDM. The nearest (or approximate) Euclidean distance matrix problem is to find a Euclidean distance matrix, EDM, that is nearest in the Frobenius norm to the matrix A , when the free variables are discounted. In this paper we introduce two algorithms: one for the exact completion problem and one for the approximate completion problem. Both use a reformulation of EDM into a semidefinite programming problem, SDP. The first algorithm is based on an implicit equation for the completion that for many instances provides an explicit solution. The other algorithm is based on primal–dual interior-point methods that exploit the structure and sparsity. Included are results on maps that arise that keep the EDM and SDP cones invariant. We briefly discuss numerical tests.",
"Abstract A distance matrix D of order n is symmetric with elements − 1 2 d ij 2 , where dii=0. D is Euclidean when the 1 2 n(n−1) quantities dij can be generated as the distances between a set of n points, X (n×p), in a Euclidean space of dimension p. The dimensionality of D is defined as the least value of p=rank(X) of any generating X; in general p+1 and p+2 are also acceptable but may include imaginary coordinates, even when D is Euclidean. Basic properties of Euclidean distance matrices are established; in particular, when ρ=rank(D) it is shown that, depending on whether eTD−e is not or is zero, the generating points lie in either p=ρ−1 dimensions, in which case they lie on a hypersphere, or in p=ρ−2 dimensions, in which case they do not. (The notation e is used for a vector all of whose values are one.) When D is non-Euclidean its dimensionality p=r+s will comprise r real and s imaginary columns of X, and (r, s) are invariant for all generating X of minimal rank. Higher-ranking representations can arise only from p+1=(r+1)+s or p+1=r+ (s+1) or p+2=(r+1)+(s+1), so that not only are r, s invariant, but they are both minimal for all admissible representations X.",
"Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that make A a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed."
],
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_22",
"@cite_3"
],
"mid": [
"1582520000",
"2075893026",
"2037402401",
"2125947724"
]
}
| 0 |
||
math0612388
|
2952450694
|
We study Semidefinite Programming (SDP) relaxations for Sensor Network Localization, with anchors and with noisy distance information. The main point of the paper is to view the problem as a (nearest) Euclidean Distance Matrix (EDM) completion problem and to show the advantages of using this latter, well studied model. We first show that the current popular relaxation is equivalent to known relaxations in the literature for EDM completions. The existence of anchors in the problem is not special. The set of anchors simply corresponds to a given fixed clique for the graph of the problem. We next propose a method of projection when a large clique or a dense subgraph is identified in the underlying graph. This projection reduces the size, and improves the stability, of the relaxation. In addition, viewing the problem as an EDM completion problem yields better low-rank approximations for the low-dimensional realizations. And the projection reduction procedure can be repeated for other given cliques of sensors or for sets of sensors where many distances are known. Thus, further size reduction can be obtained. Optimality and duality conditions and a primal-dual interior-exterior path-following algorithm are derived for the relaxations. We discuss the relative stability and strength of two formulations and the corresponding algorithms that are used. In particular, we show that the quadratic formulation arising from the relaxation is better conditioned than the linearized form that is used in the literature and that arises from applying a Schur complement.
|
The relaxations solve a closest matrix problem and generally use the @math norm. The @math norm is used in @cite_29 , where the noise in the radio signal is assumed to come from a multivariate normal distribution with mean @math and variance-covariance matrix @math , i.e. from a spherical normal distribution, so that the least squares estimates are the maximum likelihood estimates. (We use the @math norm as well in this paper. Our approach follows that in @cite_3 for EDM completion without anchors.)
|
{
"abstract": [
"A growth inhibitor for cariogenic bacteria which comprises containing therein l- alpha -cadinol as an active ingredient. In particular, the effect of growth inhibition to Streptococcus mutans IPCR 1009 strain can be produced at a concentration of 1 50,000.",
"Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that make A a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed."
],
"cite_N": [
"@cite_29",
"@cite_3"
],
"mid": [
"157909282",
"2125947724"
]
}
| 0 |
||
cs0611106
|
2168478420
|
In this paper, both non-mixing and mixing local minima of the entropy are analyzed from the viewpoint of blind source separation (BSS); they correspond respectively to acceptable and spurious solutions of the BSS problem. The contribution of this work is twofold. First, a Taylor development is used to show that the exact output entropy cost function has a non-mixing minimum when this output is proportional to any of the non-Gaussian sources, and not only when the output is proportional to the lowest entropic source. Second, in order to prove that mixing entropy minima exist when the source densities are strongly multimodal, an entropy approximator is proposed. The latter has the major advantage that an error bound can be provided. Even if this approximator (and the associated bound) is used here in the BSS context, it can be applied for estimating the entropy of any random variable with multimodal density
|
It has been shown that the global minimum of @math with @math is reached when the output @math is proportional to the source with the lowest entropy @cite_21 . It is proven in @cite_17 that when a fixed-variance output is proportional to one of the sources, then, under some technical conditions, the cumulant-based approximation of entropy @math used in FastICA @cite_17 reaches a non-mixing local minimum. Finally, based on the entropy power inequality @cite_1 , it is also proven in @cite_7 that, in the two-dimensional case, Shannon's entropy has a local minimum when the output is proportional to a non-Gaussian source.
|
{
"abstract": [
"This paper reports a study on the problem of the blind simultaneous extraction of specific groups of independent components from a linear mixture. This paper first presents a general overview and unification of several information theoretic criteria for the extraction of a single independent component. Then, our contribution fills the theoretical gap that exists between extraction and separation by presenting tools that extend these criteria to allow the simultaneous blind extraction of subsets with an arbitrary number of independent components. In addition, we analyze a family of learning algorithms based on Stiefel manifolds and the natural gradient ascent, present the nonlinear optimal activations (score) functions, and provide new or extended local stability conditions. Finally, we illustrate the performance and features of the proposed approach by computer-simulation experiments.",
"The role of inequalities in information theory is reviewed, and the relationship of these inequalities to inequalities in other branches of mathematics is developed. The simple inequalities for differential entropy are applied to the standard multivariate normal to furnish new and simpler proofs of the major determinant inequalities in classical mathematics. The authors discuss differential entropy inequalities for random subsets of samples. These inequalities when specialized to multivariate normal variables provide the determinant inequalities that are presented. The authors focus on the entropy power inequality (including the related Brunn-Minkowski, Young's, and Fisher information inequalities) and address various uncertainty principles and their interrelations. >",
"The marginal entropy h(Z) of a weighted sum of two variables Z = αX + βY, expressed as a function of its weights, is a usual cost function for blind source separation (BSS), and more precisely for independent component analysis (ICA). Even if some theoretical investigations were done about the relevance from the BSS point of view of the global minimum of h(Z), very little is known about possible local spurious minima.In order to analyze the global shape of this entropy as a function of the weights, its analytical expression is derived in the ideal case of independent variables. Because of the ICA assumption that distributions are unknown, simulation results are used to show how and when local spurious minima may appear. Firstly, the entropy of a whitened mixture, as a function of the weights and under the constraint of independence between the source variables, is shown to have only relevant minima for ICA if at most one of the source distributions is multimodal. Secondly, it is shown that if independent multimodal sources are involved in the mixture, spurious local minima may appear. Arguments are given to explain the existence of spurious minima of h(Z) in the case of multimodal sources. The presented justification can also explain the location of these minima knowing the source distributions. Finally, it results from numerical examples that the maximum-entropy mixture is not necessarily reached for the 'most mixed' one (i.e. equal mixture weights), but depends of the entropy of the mixed variables.",
"Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. We use a combination of two different approaches for linear ICA: Comon's information theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions."
],
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_7",
"@cite_17"
],
"mid": [
"2158035059",
"2056129277",
"2073632369",
"2141224535"
]
}
| 0 |
||
cs0611106
|
2168478420
|
In this paper, both non-mixing and mixing local minima of the entropy are analyzed from the viewpoint of blind source separation (BSS); they correspond respectively to acceptable and spurious solutions of the BSS problem. The contribution of this work is twofold. First, a Taylor development is used to show that the exact output entropy cost function has a non-mixing minimum when this output is proportional to any of the non-Gaussian sources, and not only when the output is proportional to the lowest entropic source. Second, in order to prove that mixing entropy minima exist when the source densities are strongly multimodal, an entropy approximator is proposed. The latter has the major advantage that an error bound can be provided. Even if this approximator (and the associated bound) is used here in the BSS context, it can be applied for estimating the entropy of any random variable with multimodal density
|
As for the mutual information, simulation results in @cite_11 suggest that mixing local entropy minima exist in specific cases (i.e. when the source pdfs are strongly multimodal, which sometimes occurs in practice, e.g. for sinusoid waveforms among others). These results, based on density estimation using the Parzen kernel method, are confirmed by other simulations that estimate the entropy directly, such as with Vasicek's estimator in @cite_8 or with the approximator analyzed in this paper in @cite_18 . Rigorously speaking, the above results do not constitute an absolute proof, since error bounds are not available for the approximation procedure. By contrast, a theoretical proof is given in @cite_14 , but for a specific example only (two bimodal sources sharing the same symmetric pdf). The existence of mixing local entropy minima has also been shown in @cite_9 (without detailed proof) in the case of two non-symmetric sources with strongly multimodal pdfs.
|
{
"abstract": [
"Marginal entropy can be used as cost function for blind source separation (BSS). Recently, some authors have experimentally shown that such information-theoretic cost function may have spurious minima in specific situations. Hence, one could face spurious solutions of the BSS problem even if the mixture model is known, exactly as when using the maximum-likelihood criterion. Intuitive justifications of the spurious minima have been proposed, when the sources have multimodal densities. This paper aims to give mathematical arguments, complementary to existing simulation results, to explain the existence of such minima. This is done by first deriving a specific entropy estimator. Then, this estimator, although reliable only for multimodal sources with small-overlapping Gaussian modes, allows one to show that spurious minima may exist when dealing with such sources.",
"Recent simulation results have indicated that spurious minima in information-theoretic criteria with an orthogonality constraint for blind source separation may exist. Nevertheless, those results involve approximations (e.g., density estimation), so that they do not constitute an absolute proof. In this letter, the problem is tackled from a theoretical point of view. An example is provided for which it is rigorously proved that spurious minima can exist in both mutual information and negentropy optima. The proof is based on a Taylor expansion of the entropy.",
"This paper presents a new algorithm for the independent components analysis (ICA) problem based on an efficient entropy estimator. Like many previous methods, this algorithm directly minimizes the measure of departure from independence according to the estimated Kullback-Leibler divergence between the joint distribution and the product of the marginal distributions. We pair this approach with efficient entropy estimators from the statistics literature. In particular, the entropy estimator we use is consistent and exhibits rapid convergence. The algorithm based on this estimator is simple, computationally efficient, intuitively appealing, and outperforms other well known algorithms. In addition, the estimator's relative insensitivity to outliers translates into superior performance by our ICA algorithm on outlier tests. We present favorable comparisons to the Kernel ICA, FAST-ICA, JADE, and extended Infomax algorithms in extensive simulations. We also provide public domain source code for our algorithms.",
"This paper presents two approaches for showing that spurious minima of the entropy may exist in the blind source separation context. The first one is based on the calculation of first and second derivative of the output entropy and The second one is based on entropy approximator for multimodal variable having small overlap between the modes. It is shown that spurious entropy minima arise when the source distribution becomes more and more multimodal.",
"Recently, several authors have emphasized the existence of spurious maxima in usual contrast functions for source separation (e.g., the likelihood and the mutual information) when several sources have multimodal distributions. The aim of this letter is to compare the information theoretic contrasts to cumulant-based ones from the robustness to spurious maxima point of view. Even if all of them tend to measure, in some way, the same quantity, which is the output independence (or equivalently, the output non-Gaussianity), it is shown that in the case of a mixture involving two sources, the kurtosis-based contrast functions are more robust than the information theoretic ones when the source distributions are multimodal."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_11"
],
"mid": [
"2135806772",
"2146830462",
"2110841704",
"2132084190",
"2130788103"
]
}
| 0 |
||
nlin0609027
|
1901978607
|
The body and spatial representations of rigid body motion correspond, respectively, to the convective and spatial representations of continuum dynamics. With a view to developing a unified computational approach for both types of problems, the discrete Clebsch approach of Cotter and Holm for continuum mechanics is applied to derive (i) body and spatial representations of discrete time models of various rigid body motions and (ii) the discrete momentum maps associated with symmetry reduction for these motions. For these problems, this paper shows that the discrete Clebsch approach yields a known class of explicit variational integrators, called discrete Moser-Veselov (DMV) integrators. The spatial representation of DMV integrators are Poisson with respect to a Lie-Poisson bracket for the semi-direct product Lie algebra. Numerical results are presented which confirm the conservative properties and accuracy of the numerical solutions.
|
Similar approaches derive discrete equations of rigid body motion for optimal control problems. Bloch, Crouch, Marsden and Ratiu @cite_15 , for example, derive the symmetrised rigid body equations by introducing optimality constraints in the action principle. We distinguish our approach from theirs in two ways. Firstly, although they consider the rigid body motion as an optimal control problem with an associated constrained action principle, they do not identify the constraints as Clebsch variables and derive the momentum maps. Secondly, they present left and right trivialisations of @math whereas we present body and spatial representations of a left SO(3) action-invariant Lagrangian only. The authors make this point when distinguishing their approach from that of Holm and Kupershmidt @cite_27 . We use the expression for the (left) momentum map to prove that the flow on the cotangent bundle preserves spatial angular momentum and to derive the equations of motion.
|
{
"abstract": [
"Abstract Poisson brackets are constructed by the same mathematical procedure for three physical theories: ideal magnetohydrodynamics, multifluid plasmas, and elasticity. Each of these brackets is given a simple Lie-algebraic interpretation. Moreover, each bracket is induced to physical space by use of a canonical Poisson bracket in the space of Clebsch potentials, which are constructed for each physical theory by the standard procedure of constrained Lagrangians.",
"This paper analyses continuous and discrete versions of the generalized rigid body equations and the role of these equations in numerical analysis, optimal control and integrable Hamiltonian systems. In particular, we present a symmetric representation of the rigid body equations on the Cartesian product SO(n) × SO(n) and study its associated symplectic structure. We describe the relationship of these ideas with the Moser–Veselov theory of discrete integrable systems and with the theory of variational symplectic integrators. Preliminary work on the ideas discussed in this paper may be found in (Bloch AM, Crouch P, Marsden J E and Ratiu T S 1998 Proc. IEEE Conf. on Decision and Control 37 2249–54)."
],
"cite_N": [
"@cite_27",
"@cite_15"
],
"mid": [
"2081703719",
"2109132421"
]
}
|
DISCRETE MOSER-VESELOV INTEGRATORS FOR SPATIAL AND BODY REPRESENTATIONS OF RIGID BODY MOTIONS
| 0 |
|
cs0609074
|
1838824728
|
A ubiquitous computing environment consists of many resources that need to be identified by users and applications. Users and developers require some way to identify resources by human readable names. In addition, ubiquitous computing environments impose additional requirements such as the ability to work well with ad hoc situations and the provision of names that depend on context. The Non-anchored Unified Naming (NUN) system was designed to satisfy these requirements. It is based on relative naming among resources and provides the ability to name arbitrary types of resources. By having resources themselves take part in naming, resources are able to able contribute their specialized knowledge into the name resolution process, making context-dependent mapping of names to resources possible. The ease of which new resource types can be added makes it simple to incorporate new types of contextual information within names. In this paper, we describe the naming system and evaluate its use.
|
INS @cite_13 identifies network services using intentional names, which specify the kind of network service desired instead of the network address. It supports the lazy binding of names to resources by combining naming and transport. Network services must have access to all relevant contextual information when registering an intentional name. INS uses a network of intentional name resolvers as its infrastructure.
|
{
"abstract": [
"This paper presents the design and implementation of the Intentional Naming System (INS), a resource discovery and service location system for dynamic and mobile networks of devices and computers. Such environments require a naming system that is (i) expressive, to describe and make requests based on specific properties of services, (ii) responsive, to track changes due to mobility and performance, (iii) robust, to handle failures, and (iv) easily configurable. INS uses a simple language based on attributes and values for its names. Applications use the language to describe what they are looking for (i.e., their intent), not where to find things (i.e., not hostnames). INS implements a late binding mechanism that integrates name resolution and message routing, enabling clients to continue communicating with end-nodes even if the name-to-address mappings change while a session is in progress. INS resolvers self-configure to form an application-level overlay network, which they use to discover new services, perform late binding, and maintain weak consistency of names using soft-state name exchanges and updates. We analyze the performance of the INS algorithms and protocols, present measurements of a Java-based implementation, and describe three applications we have implemented that demonstrate the feasibility and utility of INS."
],
"cite_N": [
"@cite_13"
],
"mid": [
"2125855750"
]
}
|
A Non-anchored Unified Naming System for Ad Hoc Computing Environments
|
Computer systems are composed of a multitude of resources that must be identified. Such resources can be identified among computer systems using memory addresses, process identifiers, IP addresses, universally unique identifiers, etc. However, these are extremely unwieldy for humans. For this reason, computer systems usually provide a variety of ways to identify resources by human readable names. A naming system resolves such human readable names into a machine readable form.
This need is no less for ubiquitous computing environments. A ubiquitous computing environment is composed of a large number of mobile and immobile computing elements that should work seamlessly with each other. In addition, the many computing elements may be used in a wide variety of situations that cannot be anticipated during development and deployment of the computing environment, which requires that the environment support ad hoc situations and ad hoc deployment of computing elements.
A naming system which provides human readable names for such environments should work well even with unpredictable situations, and yet it should allow for context dependent naming of resources in order to support seamless operation among computing elements. It should also be easy to add new communication methods and information sources as the need arises. However, previous naming systems have difficulties supporting these requirements.
One of the more common problems in previous naming systems is the use of a single global namespace [1,3]. Namespace conflicts arise when independently deploying multiple instances of such a naming system. The same thing may be named differently in different deployments of the naming system, and even worse, different things may be named the same way. A global deployment of the naming system avoids these problems, but global deployment is very difficult. DNS [10] is practically the only case where a naming system was successfully deployed globally.
However, even global deployment does not solve all problems with using a global namespace. Designing a global namespace such that every object in the world can be named, expressive enough to provide context dependent naming, and yet simple enough so that people can easily understand it may not be feasible. There are also problems in how to name things in ad hoc situations and how to handle disconnected operation from the global naming infrastructure.
Another problem with some of the existing naming systems is that they are limited in the types of resources that can be named [1,3,11,7]. Such limitations can force the use of multiple naming systems that all work differently for each resource type. This will also result in a great amount of redundancy, especially if each naming system needs to be able to handle comparable degrees of expressiveness.
An additional problem is that an individual component often needs to be able to handle all kinds of information sources in order to assign names to resources. For example, the intentional naming system [1] requires that a network service must be able to find out all relevant context that is reflected in the intentional name in order to register itself with the naming system. Relevant context may include location, user, activity, etc. Not only would it be difficult for an individual component to handle all relevant context, but it is even more difficult if additional context needs to be reflected in names.
Our approach is to have resources directly name each other using local names. A name is a chain of these local names, and only makes sense with respect to a specific resource. By using a flexible resource description scheme and a recursive resolution process, each resource only needs to know how to handle a limited number of resource types. New resource types can be added relatively easily by updating only a limited number of existing resources. Certain resource types could resolve local names in a context-dependent manner.
This approach works naturally in ubiquitous computing environments. By using only relative naming, all of the problems associated with using a global namespace can be avoided. Having resources name other resources, making the addition of new resource types easy, and the ability to use arbitrary resource types makes it possible to express arbitrary context within a name. And the general way in which resources can be described allows the use of a single consistent naming system for naming all sorts of resources. This is in contrast to other naming systems that have aimed to support ubiquitous computing environments such as INS [1], Solar [3], CFS [7], UIA [6], etc., which do not handle all of the above requirements.
We describe our approach in detail in section 2. Section 3 describes common components which resources may use when participating in naming. Section 4 describes some examples of resources and measures the overhead when using the naming system in lieu of querying the resources directly in order to identify a resource. We compare with related work in section 5 and conclude in section 6.
Overview
The unit of naming in the Non-anchored Unified Naming (NUN) system is a resource. A resource is something we wish to identify using human readable names. Similarly to how URIs and URNs are defined [2,9], a resource is not something that will be concretely defined. This is because we do not want to restrict the types of resources which can be named. Examples of resources are documents, images, processes, computers, physical locations, schedules, people, etc. No infrastructure is required besides the resources themselves.
A resource is not only named, but it can also name other resources. Each resource is associated with a local namespace which is logically comprised of a set of local names, each of which is mapped to another resource. Ideally, the resource itself will resolve a local name directly into a machine readable description for another resource as in figure 1(a), since the resource itself would presumably best know which names make sense and how to resolve these names to other resources. When this is not possible, a separate resolver would have to resolve the local name for the resource as in figure 1(b). In the rest of the paper, we do not distinguish between the resource itself and a separate resolver.
A name in NUN is a chain of one or more local names. However, a name does not identify a resource by itself. Instead, a name identifies a resource only in the context of some other specific resource, which we will call the initial resource. When the initial resource is asked to resolve a name, the resource resolves the first local name in the chain to another resource, which is in turn asked to resolve the rest of the chain. Names and local names are explained in detail in section 2.1, while the resolution process is explained in section 2.3.
There is almost no constraint on how each resource maps a local name to another resource. This implies that the name graph, where each resource is a vertex and each binding of a local name to a resource is an edge, is a general directed graph. This is in contrast to many other naming systems where the name graph is structured, e.g. a tree or a forest of trees [10,1]. Basically, a name and an initial resource in NUN specify a path in the name graph.
Resources are not concretely defined. However, computer systems must still be able to actually use a resource and/or resolve names from a resource, so we require a way to describe resources in a machine readable form without restricting the type of resources that can be described. How this is done in NUN is explained in section 2.2.
Name structure
A name in NUN is actually a compound name, which is a chain of one or more local names. Given a local name, a resource can directly resolve it to another resource. A local name is composed of a primary name and an optional set of one or more attribute-value pairs. A primary name is a string which would typically be used to describe what the resource is. For example, laptop could identify a laptop, and alice could identify a person whose name is Alice.

Figure 3: BNF grammar for canonical syntax of names

name ::= "(" local name + ")"
local name ::= primary name | primary name "[" attributes "]"
attributes ::= pair | attributes "," pair
pair ::= label "=" value
value ::= string value | name | resource
resource ::= "[" identifier description "]"
identifier, description ::= binary string
primary name, label, string value ::= alphanumeric string
The optional set of attribute-value pairs maps an attribute label to a value. An attribute label is a string identifying the attribute, while a value may be a string, a nested name, or a resource description. A string value would be typically used when textually annotating the primary name in order to refine the resolution result, while a name value is typically used to identify a resource which may be relevant during name resolution. A name in an attribute-value pair is resolved with respect to the initial resource.
An example of an attribute-value pair with a string value could be resolution=1024x768 when we want a display with a resolution of 1024×768, while an example with a name value could be user=(printer owner) when we want a resource being used by the owner of a printer.
The value of an attribute-value pair may also be a resource description. A resource description is a machine readable description of a resource and is explained in section 2.2. Such a value is not meant to be read or written by humans. Instead, it is used to support the recursive name resolution process described in section 2.3.
The canonical syntax for names, which will be the default representation of names seen by users, is shown in figure 3. Some examples of names expressed in this syntax are:
• (printer) could denote the default printer for some user
• (printer administrator) could denote the administrator of the default printer for some user
• (documents research naming) could denote a file in some file server
• (author[n=3]) could denote the third author of some document
• (alice location display[user=(supervisor)]) could denote the display located where the person that some user names alice is, and to which the supervisor of this user is allowed access
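To make this structure concrete, the following is a minimal Java sketch that models a name as a chain of local names and prints it in the canonical syntax; the class and method names are illustrative and not taken from the NUN implementation, and attribute values are restricted to plain strings for brevity (the full grammar also allows nested names and resource descriptions as values).

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A local name: a primary name plus optional attribute-value pairs.
record LocalName(String primaryName, Map<String, String> attributes) {
    String toCanonical() {
        if (attributes.isEmpty()) return primaryName;
        String attrs = attributes.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
        return primaryName + "[" + attrs + "]";
    }
}

// A name: a chain of one or more local names.
record Name(List<LocalName> chain) {
    // Render the name in the canonical "( ... )" syntax of figure 3.
    String toCanonical() {
        return "(" + chain.stream()
                .map(LocalName::toCanonical)
                .collect(Collectors.joining(" ")) + ")";
    }
}

class NameDemo {
    public static void main(String[] args) {
        Name name = new Name(List.of(
                new LocalName("alice", Map.of()),
                new LocalName("location", Map.of()),
                new LocalName("display", Map.of("resolution", "1024x768"))));
        System.out.println(name.toCanonical());
        // prints: (alice location display[resolution=1024x768])
    }
}
```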
Resource description
In order to name arbitrary resources, the machine readable description of a resource must not place restrictions on how resources can be described. And yet it must also include enough information such that resolving names from the described resource and actual use of the resource can be done automatically by computer.
The approach we use is to describe a resource using a resource type identifier and a resource specification, which is an arbitrary byte string that is interpreted according to the resource type specified. Using an arbitrary byte string allows us to describe any kind of resource, and the resource type identifier allows a computing element to recognize whether it can interpret the byte string appropriately.
A resource type identifier is a random bit string of fixed length. With a sufficiently large length, the probability of two resource type identifiers colliding is virtually zero. This allows developers to add new resource types without having to register the resource type identifier with a central authority. This is in contrast to other kinds of identifiers such as OIDs [17], where identifier assignment is ultimately derived from a central authority.
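For a rough sense of why random identifiers suffice, a birthday-bound estimate can be used; the identifier length is not stated in this excerpt, so assume k = 128 bits purely for illustration:

$$P(\text{collision among } N \text{ identifiers}) \approx 1 - e^{-N(N-1)/2^{k+1}} \approx \frac{N(N-1)}{2^{k+1}}.$$

With k = 128 and even N = 10^6 independently generated resource types, this is on the order of 10^{12}/2^{129}, i.e. roughly 10^{-27}, which is negligible as claimed.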
Given a resource type identifier in a resource description, a computing element is able to find out:
• whether it can resolve names from the described resource
• whether it can actually use the described resource

Currently a given resource description is assumed to describe the same resource in all circumstances. This may not always be possible (e.g. the resource specification may have to include a private IP address), so methods for circumventing this limitation without sacrificing the flexibility of the resource description scheme are currently under investigation. Table 1 lists some examples of resource specifications that may be possible. Even with the limited number of examples, it is clear that there is a great variety of ways by which resources may be described and accessed.
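A minimal Java sketch of such a description follows; the 128-bit identifier length, the class names, and the example URL are assumptions for illustration only, not the actual NUN wire format.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.HexFormat;

// A resource description: an opaque resource type identifier plus a
// byte-string specification that only components recognizing the type
// identifier know how to interpret.
record ResourceDescription(byte[] typeId, byte[] specification) {
    // A fresh random type identifier (assumed 128 bits here); no central
    // registration authority is needed.
    static byte[] newTypeId() {
        byte[] id = new byte[16];
        new SecureRandom().nextBytes(id);
        return id;
    }

    boolean hasType(byte[] otherTypeId) {
        return Arrays.equals(typeId, otherTypeId);
    }

    @Override
    public String toString() {
        return "[" + HexFormat.of().formatHex(typeId) + " "
                + specification.length + "-byte spec]";
    }
}

class ResourceDescriptionDemo {
    public static void main(String[] args) {
        byte[] fileType = ResourceDescription.newTypeId();
        ResourceDescription description = new ResourceDescription(
                fileType, "http://example.org/files/naming.ppt".getBytes());
        // A consumer checks the type identifier before trying to interpret
        // the specification (here, as a URL to a file).
        if (description.hasType(fileType)) {
            System.out.println("can use " + description);
        }
    }
}
```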
Name resolution
A name identifies a resource only in the context of an initial resource. The initial resource must somehow be known to the consumer of the name. This can happen if the initial resource is a well-known one, e.g. it could be a directory provided by a large content provider. More typically, the consumer of the name will also be the initial resource, so there would obviously be no problem in locating the initial resource.
The consumer of the name must know how to resolve names from the initial resource. This can be done with the resource description for the initial resource and if the consumer knows how to handle the specified resource type, but this is not essential. The consumer may have some other means of identifying and accessing the initial resource.
In practical terms, the initial resource acts as a black box which resolves a name into a resource description and the validity period during which it believes that the mapping is valid. Conceptually, the initial resource resolves the first local name in the name to some resource which we will call the intermediate resource. This resource is described in a machine readable form as in section 2.2. Any name values in attribute-value pairs in the first local name are also resolved into a resource description during this step. The initial resource will also decide the validity period during which it believes that the mapping from the local name to the intermediate resource is valid.
If the name only included a single local name, then the initial resource will return the resource description to the consumer, which will use it to do whatever it needs to with the described resource. Otherwise, the initial resource constructs a new name from the original name with the first local name omitted.
Remaining name values in attribute-value pairs are also resolved into resource descriptions by the same process as described in this section.
The initial resource then uses the resource type identifier to figure out if it knows how to resolve names from the intermediate resource. If the resource type identifier is unknown, then the initial resource tells the consumer that it cannot resolve the given name. Otherwise, the initial resource requests that the intermediate resource resolve the new name constructed above to yet another resource. The intermediate resource basically follows the same procedure as the initial resource, with the initial resource playing the role of the consumer and the intermediate resource playing the role of the initial resource, and returns a resource description and validity period.
The initial resource then returns to the consumer the resource description and the intersection of the validity periods for the intermediate resource and the final resource that was resolved. Since the resource description is returned without modification, the initial resource need not know how to handle the described resource. Figure 4 outlines the resolution process.
The validity periods described above can be either fixed amounts of time during which a mapping is presumably valid after name resolution, or they can be expiration times after which it is assumed that there is a significant probability of the mapping changing. For example, a mapping can be specified as being valid for 10 minutes after name resolution, or it can be specified as being valid until 09:00 on May 3, 2007.
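A simplified Java sketch of this recursive procedure follows, building on the LocalName and ResourceDescription sketches above; the interface and class names are assumptions for illustration, not the actual library API.

```java
import java.time.Instant;
import java.util.List;

// The result of resolving a name: a description of the resolved resource
// plus the time until which the mapping is believed to remain valid.
record Resolution(ResourceDescription description, Instant validUntil) {}

// Anything that can resolve a single local name (the resource itself,
// or a separate resolver acting on its behalf).
interface Resource {
    Resolution resolveLocal(LocalName localName);
}

class GenericResolver {
    // Maps a resource description to a resolver for it, if the resource
    // type is recognized; returns null for unknown types.
    interface TypeRegistry {
        Resource resolverFor(ResourceDescription description);
    }

    private final TypeRegistry registry;

    GenericResolver(TypeRegistry registry) {
        this.registry = registry;
    }

    // Resolve a chain of local names starting from an initial resource.
    Resolution resolve(Resource initial, List<LocalName> chain) {
        // Step 1: the initial resource resolves the first local name
        // to an intermediate resource.
        Resolution step = initial.resolveLocal(chain.get(0));
        if (chain.size() == 1) return step;

        // Step 2: if the intermediate resource's type is recognized,
        // ask it to resolve the rest of the chain.
        Resource intermediate = registry.resolverFor(step.description());
        if (intermediate == null) {
            throw new IllegalStateException("cannot resolve: unknown resource type");
        }
        Resolution rest = resolve(intermediate, chain.subList(1, chain.size()));

        // Step 3: the overall mapping is valid only while both steps are valid.
        Instant validUntil = step.validUntil().isBefore(rest.validUntil())
                ? step.validUntil() : rest.validUntil();
        return new Resolution(rest.description(), validUntil);
    }
}
```

Taking the minimum of the two validity periods plays the role of the intersection described above, and the consumer never needs to understand the final resource's type; it only needs to know how to query the initial resource.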
Common components
Exactly how a resource resolves a name into another resource is entirely dependent on the resource itself. However, parts of the resolution process are basically the same among most resources, so a library which handles these common parts would be useful. The following are the components that would be included in such a library:
Name parser: This parses a name expressed in the canonical syntax.
Recursive name resolver: Given a resource description, one needs to be able to resolve names from the resource described. This component looks at the resource type identifier and invokes the appropriate code which can handle the specified resource type.
Generic name resolver: Name resolution involves parsing the name, resolving the first local name to another resource, asking that other resource to resolve the rest of the name, and updating the validity period of the mapping. This sequence is basically the same for most resources, so a generic name resolver invokes the appropriate components in the correct order.
When the above components are provided by a library, a resource only needs to implement the interface which external computing elements use to resolve names, the mapping from local names to resources, and the code for resolving names from other resource types. The rest of the resolution process is handled by the generic name resolver.
We have implemented a library providing the above components in Java.
Optional components
Besides the components that have been previously mentioned, there are common components that only some resources would find useful. These components are not essential in the sense that name resolution would still work without them. One such component is a name cache. A name cache embedded within a resource would cache mappings from names to resource descriptions. The cache would use the validity period of the mapping so that it can expire obsolete mappings. This would improve resolution speed when resolving names is slow for one or more resources, e.g. when a resource must be queried over a slow network or if a large amount of computation is required to resolve a local name.
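As an illustration, a minimal cache along these lines might look as follows, keyed by the canonical name string and reusing the Name and Resolution types from the sketches above; this is a sketch under those assumptions, not the library's actual interface.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Caches name-to-description mappings inside a resource and drops them
// once the validity period reported at resolution time has passed.
class NameCache {
    private final Map<String, Resolution> entries = new ConcurrentHashMap<>();

    void put(Name name, Resolution resolution) {
        entries.put(name.toCanonical(), resolution);
    }

    // Returns the cached resolution, or null if it is absent or expired.
    Resolution get(Name name) {
        String key = name.toCanonical();
        Resolution cached = entries.get(key);
        if (cached == null) return null;
        if (cached.validUntil().isBefore(Instant.now())) {
            entries.remove(key); // the mapping may have changed; forget it
            return null;
        }
        return cached;
    }
}
```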
To control access to the local namespace of a resource, we can use authorization certificates to specify whether another resource may access the local namespace. Similarly, to ensure the authenticity of a mapping of a name to a resource, we can use a binding certificate which binds a name to a resource description for a limited time. We plan to use SPKI [5], a public key infrastructure, to implement this kind of access control and authenticity assurance. Similarly to NUN, SPKI does not rely on a global namespace for managing public keys.
We can also envision the use of a resource type repository which can map resource type identifiers to mobile code which is able to resolve names from a resource with the given resource type. A resource using such a repository would be able to name a much wider variety of resources easily. This would require some way to handle mobile code security and a lookup infrastructure such as a distributed hash table.
Evaluation
To illustrate the potential utility of NUN, we have created several simple resource types which cooperate with each other to provide human readable names to resources. The resource types are listed in table 2. The resources are heterogeneous, where some resources are simple static pieces of data and others are network services. Even the network services do not have to use the same communication methods. This is possible because we use the resource type identifier in a resource description to determine how to handle the described resource.
Given the resources listed in table 2, we can think of some plausible scenarios in which names are used:
• The calendar server needs to send a reminder to the moderator when there is a meeting during the day. It can find the moderator's email address by querying itself the name (today meeting moderator email).
The calendar server maps today to the appropriate time period and searches for the first event tagged with meeting. From the event description, it can extract the identifier of the moderator, which is then used to query the user database. The description for the moderator is obtained, from which the email address can be extracted.
• A user of a calendar may wish to know the status of the location for a scheduled meeting. He can use an application which asks the calendar server to resolve the name (today meeting location occupant) to find someone who is at the location.
The application asks the calendar server to resolve the name by invoking an RMI method. The calendar server then internally resolves today and meeting as in the previous example. From the event description, it extracts the location identifier. It then asks the location manager to resolve the name occupant, which is resolved to the user identifier.
Note that the application need only know how to query names from the calendar server and interpret the resource description for a user. It did not have to know about the internals of the calendar server or anything about the location manager.
• In order to begin a presentation, a computer may need a file named naming.ppt owned by a user within a certain location. It can query the name (occupant files naming.ppt) from the location manager using the location identifier. Here we see that an occupant is not only a named resource but can also name other resources.
The location manager will find the user identifier, obtain the user description from the user database, extract the URL prefix of his file collection, and get the URL for the desired file. As in the previous example, the original computer does not need to know anything about users.
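To tie the pieces together, here is a hedged end-to-end sketch of the first scenario above, resolving (today meeting moderator email) against a toy calendar server; it reuses the sketches from the previous sections, and the stub resources, the email address, and the registry wiring are all made up for illustration rather than the RMI and TCP/IP services the paper actually implements.

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;

class CalendarScenarioDemo {
    public static void main(String[] args) {
        byte[] stubType = ResourceDescription.newTypeId();
        Instant tenMinutes = Instant.now().plusSeconds(600);

        // Stub resources standing in for the user database, event, time
        // period and calendar server of table 2. Each resolves any local
        // name to a fixed "next" resource, labelled by its specification
        // bytes; a real deployment would inspect the local name and its
        // own data instead.
        Resource userDb   = l -> new Resolution(
                new ResourceDescription(stubType, "[email protected]".getBytes()), tenMinutes);
        Resource event    = l -> new Resolution(
                new ResourceDescription(stubType, "user".getBytes()), tenMinutes);
        Resource period   = l -> new Resolution(
                new ResourceDescription(stubType, "event".getBytes()), tenMinutes);
        Resource calendar = l -> new Resolution(
                new ResourceDescription(stubType, "period".getBytes()), tenMinutes);

        // The registry routes each intermediate description to the stub
        // that knows how to resolve names from it.
        GenericResolver resolver = new GenericResolver(d ->
                switch (new String(d.specification())) {
                    case "period" -> period;
                    case "event"  -> event;
                    case "user"   -> userDb;
                    default       -> null;
                });

        Resolution result = resolver.resolve(calendar, List.of(
                new LocalName("today", Map.of()),
                new LocalName("meeting", Map.of()),
                new LocalName("moderator", Map.of()),
                new LocalName("email", Map.of())));
        System.out.println(new String(result.description().specification()));
        // prints: [email protected]
    }
}
```

In the paper's actual setup the calendar server is an RMI service and the user database a TCP/IP server; the stubs above only mirror the chain of resolution steps.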
The resource types in table 2 were implemented in Java. Each were assigned a random identifier and their resource specifications were defined. Name resolution code and resource types which have builtin support for NUN use the library described in section 3, with the exceptions of the string and file resource types, which do not map any local names.
Table 2: Resource types and their descriptions

• String: Simple character strings. Email addresses and common names are of this type. A string does not map any local names to other resources.
• File: A file specified by a URL. While it would be ideal if each file mapped names to other resources according to its semantic content, the namespace of a file is empty in our implementation.
• File collection: A collection of files. This is specified as a URL prefix. A local name is mapped to a file by prepending the prefix to the local name.
• Location: A physical location maintained by a location manager. Each location is specified by a unique random identifier. A location maps the local name occupant to the user who is in the location. The location manager is a TCP/IP based server. It can return the list of users in a specified location. It has builtin support for NUN, so it can directly resolve names for a physical location when a specially crafted message is received.
• Calendar: This is an RMI-based calendar server. It supports the query of events within a specified time period that are tagged with specific strings. It also supports NUN natively, so that it can directly resolve names when a certain RMI method is invoked. It maps names such as today to time periods.
• Time period: This is a time period in a specific calendar. It maps a local name to the first event within the time period which includes the local name as a tag. The calendar server resolves names for this resource.
• Event: A scheduled event in a calendar. An event may be tagged by several strings such as meeting or playtime. Each event is associated with a moderator, a location, and a set of related files. Events are described in a static text format. Name resolution is done by interpreting the static data into the appropriate resource description.
• User: Represents a physical user. Each user is specified by a unique random identifier, which is used for indexing a user in a user database server. The user database server is based on TCP/IP, which returns a description of the user based on the identifier. The server does not include support for naming, so separate name resolution code is required to map local names by interpreting the description. The resolution code maps local names to email addresses and the collections of users' files.

The user database resided on a 3GHz Pentium D machine with 3GB of RAM, while the location manager and calendar server resided on 1GHz PowerPC machines with 1GB of RAM each. In one configuration the systems were connected over Gigabit Ethernet, while in another configuration they were connected to each other by an 802.11g wireless network.
We measured the time it took to resolve names into resources for the examples we discussed. We also measured the time it took when we queried the resources directly to obtain the necessary contextual information and to discover the desired resource based on this information. The actual work done between the two approaches is basically the same, but using the former approach is much simpler since we only need to query the appropriate resource with a name that is easy to construct. The latter approach requires that code be written for each situation to query the necessary information sources, which is substantially more complex and is often not possible. Table 3 compares the amount of time each approach takes when the systems are connected over Gigabit Ethernet. Each name resolution was repeated 1000 times. The measurements show that using NUN incurs negligible impact on performance. In fact, the overhead from NUN pales in comparison to the variability due to the network. This is even more pronounced with a wireless network, as can be seen in table 4.
Conclusions
In this paper, we have described the Non-anchored Unified Naming system. Instead of having a naming service which exists independently from resources, its approach is to have resources themselves name other resources by local names. The rationale is that a resource is best suited to apply its own specialized knowledge and capabilities when resolving names which incorporate them.
A name is a chain of local names which is resolved by an initial resource, which is determined according to the needs of users and applications. Eschewing the use of absolute naming and using only relative naming makes it simple to handle unpredictable situations that may arise within a ubiquitous computing environment. NUN is capable of naming arbitrary resources by resolving names into a resource described by a flexible resource description scheme. This allows the use of a consistent naming scheme for identifying arbitrary types of resources. It also makes it simple to incorporate new kinds of contextual information within the name simply by adding new resources which provide the desired information.
The name resolution process does not require that a single computing element know how to handle all resource types. This simplifies the implementation of resources and reduces the amount of memory required to support naming. This allows limited devices such as PDAs or other electronic appliances to participate in the naming process, where they may contribute their specialized knowledge to the naming process.
The ease by which new contextual information sources may be added, the ability to handle ad hoc situations, and the ability to provide a consistent naming scheme for arbitrary resources makes NUN suitable for identifying resources in a ubiquitous computing environment.
| 4,502 |
cs0609074
|
1838824728
|
A ubiquitous computing environment consists of many resources that need to be identified by users and applications. Users and developers require some way to identify resources by human readable names. In addition, ubiquitous computing environments impose additional requirements such as the ability to work well with ad hoc situations and the provision of names that depend on context. The Non-anchored Unified Naming (NUN) system was designed to satisfy these requirements. It is based on relative naming among resources and provides the ability to name arbitrary types of resources. By having resources themselves take part in naming, resources are able to contribute their specialized knowledge to the name resolution process, making context-dependent mapping of names to resources possible. The ease with which new resource types can be added makes it simple to incorporate new types of contextual information within names. In this paper, we describe the naming system and evaluate its use.
|
There have been naming systems not targeted for ubiquitous computing environments that also use relative naming. Tilde @cite_2 and Prospero @cite_0 are file systems based on relative naming. Prospero is also able to support a limited form of location-aware computing by creating symbolic links according to the login terminal of a user @cite_7 .
|
{
"abstract": [
"Recent growth of the Internet has greatly increased the amount of information that is accessible and the number of resources that are available to users. To exploit this growth, it must be possible for users to find the information and resources they need. Existing techniques for organizing systems have evolved from those used on centralized systems, but these techniques are inadequate for organizing information on a global scale. This article describes Prospero, a distributed file system based on the Virtual System Model. Prospero provides tools to help users organize Internet resources. These tools allow users to construct customized views of available resources, while taking advantage of the structure imposed by others. Prospero provides a framework that can tie together various indexing services producing the fabric on which resource discovery techniques can be applied.",
"As computers become pervasive, users will access processing, storage, and communication resources from locations that have not been practical in the past. Such users will demand support for location-independent computing. While the basic system components used might change as the user moves from place to place, the appearance of the system should remain constant. In this paper we discuss the role of, and requirements for, directory services in support of integrated, location-independent computing. We focus on two specific problems: the server selection problem and the user location problem. We present solutions to these problems based on the Prospero Directory Service. The solutions demonstrate several unique features of Prospero that make it particularly suited for support of location-independent computing.",
""
],
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_2"
],
"mid": [
"2101731962",
"5172367",
"2014372543"
]
}
|
A Non-anchored Unified Naming System for Ad Hoc Computing Environments
|
Computer systems are composed of a multitude of resources that must be identified. Such resources can be identified among computer systems using memory addresses, process identifiers, IP addresses, universally unique identifiers, etc. However, these are extremely unwieldy for humans. For this reason, computer systems usually provide a variety of ways to identify resources by human readable names. A naming system resolves such human readable names into a machine readable form.
This need is no less for ubiquitous computing environments. A ubiquitous computing environment is composed of a large number of mobile and immobile computing elements that should work seamlessly with each other. In addition, the many computing elements may be used in a wide variety of situations that cannot be anticipated during development and deployment of the computing environment, which requires that the environment support ad hoc situations and ad hoc deployment of computing elements.
A naming system which provides human readable names for such environments should work well even with unpredictable situations, and yet it should allow for context dependent naming of resources in order to support seamless operation among computing elements. It should also be easy to add new communication methods and information sources as the need arises. However, previous naming systems have difficulties supporting these requirements.
One of the more common problems in previous naming systems is the use of a single global namespace [1,3]. Namespace conflicts arise when independently deploying multiple instances of such a naming system. The same thing may be named differently in different deployments of the naming system, and even worse, different things may be named the same way. A global deployment of the naming system avoids these problems, but global deployment is very difficult. DNS [10] is practically the only case where a naming system was successfully deployed globally.
However, even global deployment does not solve all problems with using a global namespace. Designing a global namespace such that every object in the world can be named, expressive enough to provide context dependent naming, and yet simple enough so that people can easily understand it may not be feasible. There are also problems in how to name things in ad hoc situations and how to handle disconnected operation from the global naming infrastructure.
Another problem with some of the existing naming systems is that they are limited in the types of resources that can be named [1,3,11,7]. Such limitations can force the use of multiple naming systems that all work differently for each resource type. This will also result in a great amount of redundancy, especially if each naming system needs to be able to handle comparable degrees of expressiveness.
An additional problem is that an individual component often needs to be able to handle all kinds of information sources in order to assign names to resources. For example, the intentional naming system [1] requires that a network service must be able to find out all relevant context that is reflected in the intentional name in order to register itself with the naming system. Relevant context may include location, user, activity, etc. Not only would it be difficult for an individual component to handle all relevant context, but it is even more difficult if additional context needs to be reflected in names.
Our approach is to have resources directly name each other using local names. A name is a chain of these local names, and only makes sense with respect to a specific resource. By using a flexible resource description scheme and a recursive resolution process, each resource only needs to know how to handle a limited number of resource types. New resource types can be added relatively easily by updating only a limited number of existing resources. Certain resource types could resolve local names in a context-dependent manner.
This approach works naturally in ubiquitous computing environments. By using only relative naming, all of the problems associated with using a global namespace can be avoided. Having resources name other resources, making the addition of new resource types easy, and the ability to use arbitrary resource types makes it possible to express arbitrary context within a name. And the general way in which resources can be described allows the use of a single consistent naming system for naming all sorts of resources. This is in contrast to other naming systems that have aimed to support ubiquitous computing environments such as INS [1], Solar [3], CFS [7], UIA [6], etc., which do not handle all of the above requirements.
We describe our approach in detail in section 2. Section 3 describes common components which resources may use when participating in naming. Section 4 describes some examples of resources and measures the overhead when using the naming system in lieu of querying the resources directly in order to identify a resource. We compare with related work in section 5 and conclude in section 6.
Overview
The unit of naming in the Non-anchored Unified Naming (NUN) system is a resource. A resource is something we wish to identify using human readable names. Similarly to how URIs and URNs are defined [2,9], a resource is not something that will be concretely defined. This is because we do not want to restrict the types of resources which can be named. Examples of resources are documents, images, processes, computers, physical locations, schedules, people, etc. No infrastructure is required besides the resources themselves.
A resource is not only named, but it can also name other resources. Each resource is associated with a local namespace which is logically comprised of a set of local names, each of which is mapped to another resource. Ideally, the resource itself will resolve a local name directly into a machine readable description for another resource as in figure 1(a), since the resource itself would presumably best know which names make sense and how to resolve these names to other resources. When this is not possible, a separate resolver would have to resolve the local name for the resource as in figure 1(b). In the rest of the paper, we do not distinguish between the resource itself and a separate resolver.
A name in NUN is a chain of one or more local names. However, a name does not identify a resource by itself. Instead, a name identifies a resource only in the context of some other specific resource, which we will call the initial resource. When the initial resource is asked to resolve a name, the resource resolves the first local name in the chain to another resource, which is in turn asked to resolve the rest of the chain. Names and local names are explained in detail in section 2.1, while the resolution process is explained in section 2.3.
There is almost no constraint on how each resource maps a local name to another resource. This implies that the name graph, where each resource is a vertex and each binding of a local name to a resource is an edge, is a general directed graph. This is in contrast to many other naming systems where the name graph is structured, e.g. a tree or a forest of trees [10,1]. Basically, a name and an initial resource in NUN specify a path in the name graph.
Resources are not concretely defined. However, computer systems must still be able to actually use a resource and/or resolve names from a resource, so we require a way to describe resources in a machine readable form without restricting the type of resources that can be described. How this is done in NUN is explained in section 2.2.
Name structure
A name in NUN is actually a compound name, which is a chain of one or more local names. Given a local name, a resource can directly resolve it to another resource. A local name is composed of a primary name and an optional set of one or more attribute-value pairs. A primary name is a string which would typically be used to describe what the resource is. For example, laptop could identify a laptop, and alice could identify a person whose name is Alice.

Figure 3: BNF grammar for canonical syntax of names

name ::= "(" local name + ")"
local name ::= primary name | primary name "[" attributes "]"
attributes ::= pair | attributes "," pair
pair ::= label "=" value
value ::= string value | name | resource
resource ::= "[" identifier description "]"
identifier, description ::= binary string
primary name, label, string value ::= alphanumeric string
The optional set of attribute-value pairs maps an attribute label to a value. An attribute label is a string identifying the attribute, while a value may be a string, a nested name, or a resource description. A string value would be typically used when textually annotating the primary name in order to refine the resolution result, while a name value is typically used to identify a resource which may be relevant during name resolution. A name in an attribute-value pair is resolved with respect to the initial resource.
An example of an attribute-value pair with a string value could be resolution=1024x768 when we want a display with a resolution of 1024×768, while an example with a name value could be user=(printer owner) when we want a resource being used by the owner of a printer.
The value of an attribute-value pair may also be a resource description. A resource description is a machine readable description of a resource and is explained in section 2.2. Such a value is not meant to be read or written by humans. Instead, it is used to support the recursive name resolution process described in section 2.3.
The canonical syntax for names, which will be the default representation of names seen by users, is shown in figure 3. Some examples of names expressed in this syntax are:
• (printer) could denote the default printer for some user
• (printer administrator) could denote the administrator of the default printer for some user
• (documents research naming) could denote a file in some file server
• (author[n=3]) could denote the third author of some document
• (alice location display[user=(supervisor)]) could denote the display located where the person that some user names alice is, and to which the supervisor of this user is allowed access
Resource description
In order to name arbitrary resources, the machine readable description of a resource must not place restrictions on how resources can be described. And yet it must also include enough information such that resolving names from the described resource and actual use of the resource can be done automatically by computer.
The approach we use is to describe a resource using a resource type identifier and a resource specification, which is an arbitrary byte string that is interpreted according to the resource type specified. Using an arbitrary byte string allows us to describe any kind of resource, and the resource type identifier allows a computing element to recognize whether it can interpret the byte string appropriately.
A resource type identifier is a random bit string of fixed length. With a sufficiently large length, the probability of two resource type identifiers colliding is virtually zero. This allows developers to add new resource types without having to register the resource type identifier with a central authority. This is in contrast to other kinds of identifiers such as OIDs [17], where identifier assignment is ultimately derived from a central authority.
Given a resource type identifier in a resource description, a computing element is able to find out:
• whether it can resolve names from the described resource
• whether it can actually use the described resource

Currently a given resource description is assumed to describe the same resource in all circumstances. This may not always be possible (e.g. the resource specification may have to include a private IP address), so methods for circumventing this limitation without sacrificing the flexibility of the resource description scheme are currently under investigation. Table 1 lists some examples of resource specifications that may be possible. Even with the limited number of examples, it is clear that there is a great variety of ways by which resources may be described and accessed.
Name resolution
A name identifies a resource only in the context of an initial resource. The initial resource must somehow be known to the consumer of the name. This can happen if the initial resource is a well-known one, e.g. it could be a directory provided by a large content provider. More typically, the consumer of the name will also be the initial resource, so there would obviously be no problem in locating the initial resource.
The consumer of the name must know how to resolve names from the initial resource. This can be done with the resource description for the initial resource and if the consumer knows how to handle the specified resource type, but this is not essential. The consumer may have some other means of identifying and accessing the initial resource.
In practical terms, the initial resource acts as a black box which resolves a name into a resource description and the validity period during which it believes that the mapping is valid. Conceptually, the initial resource resolves the first local name in the name to some resource which we will call the intermediate resource. This resource is described in a machine readable form as in section 2.2. Any name values in attribute-value pairs in the first local name are also resolved into a resource description during this step. The initial resource will also decide the validity period during which it believes that the mapping from the local name to the intermediate resource is valid.
If the name only included a single local name, then the initial resource will return the resource description to the consumer, which will use it to do whatever it needs to with the described resource. Otherwise, the initial resource constructs a new name from the original name with the first local name omitted.
Remaining name values in attribute-value pairs are also resolved into resource descriptions by the same process as described in this section.
The initial resource then uses the resource type identifier to figure out if it knows how to resolve names from the intermediate resource. If the resource type identifier is unknown, then the initial resource tells the consumer that it cannot resolve the given name. Otherwise, the initial resource requests that the intermediate resource resolve the new name constructed above to yet another resource. The intermediate resource basically follows the same procedure as the initial resource, with the initial resource playing the role of the consumer and the intermediate resource playing the role of the initial resource, and returns a resource description and validity period.
The initial resource then returns to the consumer the resource description and the intersection of the validity periods for the intermediate resource and the final resource that was resolved. Since the resource description is returned without modification, the initial resource need not know how to handle the described resource. Figure 4 outlines the resolution process.
The validity periods described above can be either fixed amounts of time during which a mapping is presumably valid after name resolution, or they can be expiration times after which it is assumed that there is a significant probability of the mapping changing. For example, a mapping can be specified as being valid for 10 minutes after name resolution, or it can be specified as being valid until 09:00 on May 3, 2007.
Common components
Exactly how a resource resolves a name into another resource is entirely dependent on the resource itself. However, parts of the resolution process are basically the same among most resources, so a library which handles these common parts would be useful. The following are the components that would be included in such a library:
Name parser: This parses a name expressed in the canonical syntax.
Recursive name resolver: Given a resource description, one needs to be able to resolve names from the resource described. This component looks at the resource type identifier and invokes the appropriate code which can handle the specified resource type.
Generic name resolver: Name resolution involves parsing the name, resolving the first local name to another resource, asking that other resource to resolve the rest of the name, and updating the validity period of the mapping. This sequence is basically the same for most resources, so a generic name resolver invokes the appropriate components in the correct order.
When the above components are provided by a library, a resource only needs to implement the interface which external computing elements use to resolve names, the mapping from local names to resources, and the code for resolving names from other resource types. The rest of the resolution process is handled by the generic name resolver.
We have implemented a library providing the above components in Java.
Optional components
Besides the components that have been previously mentioned, there are common components that only some resources would find useful. These components are not essential in the sense that name resolution would still work without them. One such component is a name cache. A name cache embedded within a resource would cache mappings from names to resource descriptions. The cache would use the validity period of the mapping so that it can expire obsolete mappings. This would improve resolution speed when resolving names is slow for one or more resources, e.g. when a resource must be queried over a slow network or if a large amount of computation is required to resolve a local name.
To control access to the local namespace of a resource, we can use authorization certificates to specify whether another resource may access the local namespace. Similarly, to ensure the authenticity of a mapping of a name to a resource, we can use a binding certificate which binds a name to a resource description for a limited time. We plan to use SPKI [5], a public key infrastructure, to implement this kind of access control and authenticity assurance. Similarly to NUN, SPKI does not rely on a global namespace for managing public keys.
We can also envision the use of a resource type repository which can map resource type identifiers to mobile code which is able to resolve names from a resource with the given resource type. A resource using such a repository would be able to name a much wider variety of resources easily. This would require some way to handle mobile code security and a lookup infrastructure such as a distributed hash table.
Evaluation
To illustrate the potential utility of NUN, we have created several simple resource types which cooperate with each other to provide human readable names to resources. The resource types are listed in table 2. The resources are heterogeneous, where some resources are simple static pieces of data and others are network services. Even the network services do not have to use the same communication methods. This is possible because we use the resource type identifier in a resource description to determine how to handle the described resource.
Given the resources listed in table 2, we can think of some plausible scenarios in which names are used:
• The calendar server needs to send a reminder to the moderator when there is a meeting during the day. It can find the moderator's email address by querying itself the name (today meeting moderator email).
The calendar server maps today to the appropriate time period and searches for the first event tagged with meeting. From the event description, it can extract the identifier of the moderator, which is then used to query the user database. The description for the moderator is obtained, from which the email address can be extracted.
• A user of a calendar may wish to know the status of the location for a scheduled meeting. He can use an application which asks the calendar server to resolve the name (today meeting location occupant) to find someone who is at the location.
The application asks the calendar server to resolve the name by invoking an RMI method. The calendar server then internally resolves today and meeting as in the previous example. From the event description, it extracts the location identifier. It then asks the location manager to resolve the name occupant, which is resolved to the user identifier.
Note that the application need only know how to query names from the calendar server and interpret the resource description for a user. It did not have to know about the internals of the calendar server or anything about the location manager.
• In order to begin a presentation, a computer may need a file named naming.ppt owned by a user within a certain location. It can query the name (occupant files naming.ppt) from the location manager using the location identifier. Here we see that an occupant is not only a named resource but can also name other resources.
The location manager will find the user identifier, obtain the user description from the user database, extract the URL prefix of his file collection, and get the URL for the desired file. As in the previous example, the original computer does not need to know anything about users.
The resource types in table 2 were implemented in Java. Each were assigned a random identifier and their resource specifications were defined. Name resolution code and resource types which have builtin support for NUN use the library described in section 3, with the exceptions of the string and file resource types, which do not map any local names.
Table 2: Resource types and their descriptions

• String: Simple character strings. Email addresses and common names are of this type. A string does not map any local names to other resources.
• File: A file specified by a URL. While it would be ideal if each file mapped names to other resources according to its semantic content, the namespace of a file is empty in our implementation.
• File collection: A collection of files. This is specified as a URL prefix. A local name is mapped to a file by prepending the prefix to the local name.
• Location: A physical location maintained by a location manager. Each location is specified by a unique random identifier. A location maps the local name occupant to the user who is in the location. The location manager is a TCP/IP based server. It can return the list of users in a specified location. It has builtin support for NUN, so it can directly resolve names for a physical location when a specially crafted message is received.
• Calendar: This is an RMI-based calendar server. It supports the query of events within a specified time period that are tagged with specific strings. It also supports NUN natively, so that it can directly resolve names when a certain RMI method is invoked. It maps names such as today to time periods.
• Time period: This is a time period in a specific calendar. It maps a local name to the first event within the time period which includes the local name as a tag. The calendar server resolves names for this resource.
• Event: A scheduled event in a calendar. An event may be tagged by several strings such as meeting or playtime. Each event is associated with a moderator, a location, and a set of related files. Events are described in a static text format. Name resolution is done by interpreting the static data into the appropriate resource description.
• User: Represents a physical user. Each user is specified by a unique random identifier, which is used for indexing a user in a user database server. The user database server is based on TCP/IP, which returns a description of the user based on the identifier. The server does not include support for naming, so separate name resolution code is required to map local names by interpreting the description. The resolution code maps local names to email addresses and the collections of users' files.

The user database resided on a 3GHz Pentium D machine with 3GB of RAM, while the location manager and calendar server resided on 1GHz PowerPC machines with 1GB of RAM each. In one configuration the systems were connected over Gigabit Ethernet, while in another configuration they were connected to each other by an 802.11g wireless network.
We measured the time it took to resolve names into resources for the examples we discussed. We also measured the time it took when we queried the resources directly to obtain the necessary contextual information and to discover the desired resource based on this information. The actual work done between the two approaches is basically the same, but using the former approach is much simpler since we only need to query the appropriate resource with a name that is easy to construct. The latter approach requires that code be written for each situation to query the necessary information sources, which is substantially more complex and is often not possible. Table 3 compares the amount of time each approach takes when the systems are connected over Gigabit Ethernet. Each name resolution was repeated 1000 times. The measurements show that using NUN incurs negligible impact on performance. In fact, the overhead from NUN pales in comparison to the variability due to the network. This is even more pronounced with a wireless network, as can be seen in table 4.
Conclusions
In this paper, we have described the Non-anchored Unified Naming system. Instead of having a naming service which exists independently from resources, its approach is to have resources themselves name other resources by local names. The rationale is that a resource is best suited to apply its own specialized knowledge and capabilities when resolving names which incorporate them.
A name is a chain of local names which is resolved by an initial resource, which is determined according to the needs of users and applications. Eschewing the use of absolute naming and using only relative naming makes it simple to handle unpredictable situations that may arise within a ubiquitous computing environment. NUN is capable of naming arbitrary resources by resolving names into a resource described by a flexible resource description scheme. This allows the use of a consistent naming scheme for identifying arbitrary types of resources. It also makes it simple to incorporate new kinds of contextual information within the name simply by adding new resources which provide the desired information.
The name resolution process does not require that a single computing element know how to handle all resource types. This simplifies the implementation of resources and reduces the amount of memory required to support naming. This allows limited devices such as PDAs or other electronic appliances to participate in the naming process, where they may contribute their specialized knowledge to the naming process.
The ease by which new contextual information sources may be added, the ability to handle ad hoc situations, and the ability to provide a consistent naming scheme for arbitrary resources makes NUN suitable for identifying resources in a ubiquitous computing environment.
| 4,502 |
cs0609074
|
1838824728
|
A ubiquitous computing environment consists of many resources that need to be identified by users and applications. Users and developers require some way to identify resources by human readable names. In addition, ubiquitous computing environments impose additional requirements such as the ability to work well with ad hoc situations and the provision of names that depend on context. The Non-anchored Unified Naming (NUN) system was designed to satisfy these requirements. It is based on relative naming among resources and provides the ability to name arbitrary types of resources. By having resources themselves take part in naming, resources are able to contribute their specialized knowledge to the name resolution process, making context-dependent mapping of names to resources possible. The ease with which new resource types can be added makes it simple to incorporate new types of contextual information within names. In this paper, we describe the naming system and evaluate its use.
|
Like INS, Active Names @cite_4 combines naming and transport. Its purpose is to provide an extensible network infrastructure based on names. The routing mechanism is similar to the name resolution process in NUN in that a name can be divided into multiple components, and each component in the name determines the next program used in routing a data packet. The work done by each program is arbitrary, so a great deal of flexibility is possible when routing packets.
|
{
"abstract": [
"In this paper, we explore flexible name resolution as a way of supporting extensibility for wide-area distributed services. Our approach, called Active Names, maps names to a chain of mobile programs that can customize how a service is located and how its results are transformed and transported back to the client. To illustrate the properties of our system, we implement prototypes of server selection based on end-to-end performance measurements, location-independent data transformation, and caching of composable active objects and demonstrate up to a five-fold performance improvement to end users. We show how these new services are developed, composed, and secured in our framework. Finally, we develop a set of algorithms to control how mobile Active Name programs are mapped onto available wide-area resources to optimize performance and availability."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2178101454"
]
}
|
A Non-anchored Unified Naming System for Ad Hoc Computing Environments
|
Computer systems are composed of a multitude of resources that must be identified. Such resources can be identified among computer systems using memory addresses, process identifiers, IP addresses, universally unique identifiers, etc. However, these are extremely unwieldy for humans. For this reason, computer systems usually provide a variety of ways to identify resources by human readable names. A naming system resolves such human readable names into a machine readable form.
This need is no less for ubiquitous computing environments. A ubiquitous computing environment is composed of a large number of mobile and immobile computing elements that should work seamlessly with each other. In addition, the many computing elements may be used in a wide variety of situations that cannot be anticipated during development and deployment of the computing environment, which requires that the environment support ad hoc situations and ad hoc deployment of computing elements.
A naming system which provides human readable names for such environments should work well even with unpredictable situations, and yet it should allow for context dependent naming of resources in order to support seamless operation among computing elements. It should also be easy to add new communication methods and information sources as the need arises. However, previous naming systems have difficulties supporting these requirements.
One of the more common problems in previous naming systems is the use of a single global namespace [1,3]. Namespace conflicts arise when independently deploying multiple instances of such a naming system. The same thing may be named differently in different deployments of the naming system, and even worse, different things may be named the same way. A global deployment of the naming system avoids these problems, but global deployment is very difficult. DNS [10] is practically the only case where a naming system was successfully deployed globally.
However, even global deployment does not solve all problems with using a global namespace. Designing a global namespace such that every object in the world can be named, expressive enough to provide context dependent naming, and yet simple enough so that people can easily understand it may not be feasible. There are also problems in how to name things in ad hoc situations and how to handle disconnected operation from the global naming infrastructure.
Another problem with some of the existing naming systems is that they are limited in the types of resources that can be named [1,3,11,7]. Such limitations can force the use of multiple naming systems that all work differently for each resource type. This will also result in a great amount of redundancy, especially if each naming system needs to be able to handle comparable degrees of expressiveness.
An additional problem is that an individual component often needs to be able to handle all kinds of information sources in order to assign names to resources. For example, the intentional naming system [1] requires that a network service must be able to find out all relevant context that is reflected in the intentional name in order to register itself with the naming system. Relevant context may include location, user, activity, etc. Not only would it be difficult for an individual component to handle all relevant context, but it is even more difficult if additional context needs to be reflected in names.
Our approach is to have resources directly name each other using local names. A name is a chain of these local names, and only makes sense with respect to a specific resource. By using a flexible resource description scheme and a recursive resolution process, each resource only needs to know how to handle a limited number of resource types. New resource types can be added relatively easily by updating only a limited number of existing resources. Certain resource types could resolve local names in a context-dependent manner.
This approach works naturally in ubiquitous computing environments. By using only relative naming, all of the problems associated with using a global namespace can be avoided. Having resources name other resources, making the addition of new resource types easy, and the ability to use arbitrary resource types makes it possible to express arbitrary context within a name. And the general way in which resources can be described allows the use of a single consistent naming system for naming all sorts of resources. This is in contrast to other naming systems that have aimed to support ubiquitous computing environments such as INS [1], Solar [3], CFS [7], UIA [6], etc., which do not handle all of the above requirements.
We describe our approach in detail in section 2. Section 3 describes common components which resources may use when participating in naming. Section 4 describes some examples of resources and measures the overhead when using the naming system in lieu of querying the resources directly in order to identify a resource. We compare with related work in section 5 and conclude in section 6.
Overview
The unit of naming in the Non-anchored Unified Naming (NUN) system is a resource. A resource is something we wish to identify using human readable names. Similarly to how URIs and URNs are defined [2,9], a resource is not something that will be concretely defined. This is because we do not want to restrict the types of resources which can be named. Examples of resources are documents, images, processes, computers, physical locations, schedules, people, etc. No infrastructure is required besides the resources themselves.
A resource is not only named, but it can also name other resources. Each resource is associated with a local namespace which is logically composed of a set of local names, each of which is mapped to another resource. Ideally, the resource itself will resolve a local name directly into a machine readable description for another resource as in figure 1(a), since the resource itself presumably knows best which names make sense and how to resolve these names to other resources. When this is not possible, a separate resolver would have to resolve the local name for the resource as in figure 1(b). In the rest of the paper, we do not distinguish between the resource itself and a separate resolver.
A name in NUN is a chain of one or more local names. However, a name does not identify a resource by itself. Instead, a name identifies a resource only in the context of some other specific resource, which we will call the initial resource. When the initial resource is asked to resolve a name, the resource resolves the first local name in the chain to another resource, which is in turn asked to resolve the rest of the chain. Names and local names are explained in detail in section 2.1, while the resolution process is explained in section 2.3.
There is almost no constraint on how each resource maps a local name to another resource. This implies that the name graph, where each resource is a vertex and each binding of a local name to a resource is an edge, is a general directed graph. This is in contrast to many other naming systems where the name graph is structured, e.g. a tree or a forest of trees [10,1]. Basically, a name and an initial resource in NUN specify a path in the name graph.

Resources are not concretely defined. However, computer systems must still be able to actually use a resource and/or resolve names from a resource, so we require a way to describe resources in a machine readable form without restricting the type of resources that can be described. How this is done in NUN is explained in section 2.2.
Name structure
A name in NUN is actually a compound name, which is a chain of one or more local names. Given a local name, a resource can directly resolve it to another resource. A local name is composed of a primary name and an optional set of one or more attribute-value pairs. A primary name is a string which would typically be used to describe what the resource is. For example, laptop could identify a laptop, and alice could identify a person whose name is Alice.
The optional set of attribute-value pairs maps an attribute label to a value. An attribute label is a string identifying the attribute, while a value may be a string, a nested name, or a resource description. A string value would be typically used when textually annotating the primary name in order to refine the resolution result, while a name value is typically used to identify a resource which may be relevant during name resolution. A name in an attribute-value pair is resolved with respect to the initial resource.
An example of an attribute-value pair with a string value could be resolu-tion=1024x768 when we want a display with a resolution of 1024×768, while an example with a name value could be user=(printer owner) when we want a resource being used by the owner of a printer.
The value of an attribute-value pair may also be a resource description. A resource description is a machine readable description of a resource and is explained in section 2.2. Such a value is not meant to be read or written by humans. Instead, it is used to support the recursive name resolution process described in section 2.3.
The canonical syntax for names, which will be the default representation of names seen by users, is shown in figure 3.

name ::= "(" local name + ")"
local name ::= primary name | primary name "[" attributes "]"
attributes ::= pair | attributes "," pair
pair ::= label "=" value
value ::= string value | name | resource
resource ::= "[" identifier description "]"
identifier, description ::= binary string
primary name, label, string value ::= alphanumeric string
Figure 3: BNF grammar for canonical syntax of names

Some examples of names expressed in this syntax are:
• (printer) could denote the default printer for some user
• (printer administrator) could denote the administrator of the default printer for some user
• (documents research naming) could denote a file in some file server
• (author[n=3]) could denote the third author of some document
• (alice location display[user=(supervisor)]) could denote the display located where the person that some user names alice is, and to which the supervisor of this user is allowed access
Resource description
In order to name arbitrary resources, the machine readable description of a resource must not place restrictions on how resources can be described. And yet it must also include enough information such that resolving names from the described resource and actual use of the resource can be done automatically by computer.
The approach we use is to describe a resource using a resource type identifier and a resource specification, which is an arbitrary byte string that is interpreted according to the resource type specified. Using an arbitrary byte string allows us to describe any kind of resource, and the resource type identifier allows a computing element to recognize whether it can interpret the byte string appropriately.
A resource type identifier is a random bit string of fixed length. With a sufficiently large length, the probability of two resource type identifiers colliding is virtually zero. This allows developers to add new resource types without having to register the resource type identifier with a central authority. This is in contrast to other kinds of identifiers such as OIDs [17], where identifier assignment is ultimately derived from a central authority.
Given a resource type identifier in a resource description, a computing element is able to find out:
• whether it can resolve names from the described resource
• whether it can actually use the described resource

Currently a given resource description is assumed to describe the same resource in all circumstances. This may not always be possible (e.g. the resource specification may have to include a private IP address), so methods for circumventing this limitation without sacrificing the flexibility of the resource description scheme are currently under investigation. Table 1 lists some examples of resource specifications that may be possible. Even with the limited number of examples, it is clear that there is a great variety of ways by which resources may be described and accessed.
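To make this description scheme concrete, the sketch below (ours, not part of the NUN implementation; the class name and field names are illustrative assumptions) models a resource description as a fixed-length random type identifier paired with an opaque specification.

```java
import java.util.Arrays;

// Illustrative sketch of a resource description: a fixed-length random type
// identifier plus an opaque specification that only type-specific code can
// interpret (e.g. a URL, a host:port pair, or a random location identifier).
public final class ResourceDescription {
    private final byte[] typeId;        // random, fixed-length type identifier
    private final byte[] specification; // arbitrary bytes, meaning depends on type

    public ResourceDescription(byte[] typeId, byte[] specification) {
        this.typeId = typeId.clone();
        this.specification = specification.clone();
    }

    public byte[] typeId()        { return typeId.clone(); }
    public byte[] specification() { return specification.clone(); }

    // A computing element compares type identifiers to decide whether it knows
    // how to resolve names from, or actually use, the described resource.
    public boolean hasType(byte[] knownTypeId) {
        return Arrays.equals(typeId, knownTypeId);
    }
}
```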
Name resolution
A name identifies a resource only in the context of an initial resource. The initial resource must somehow be known to the consumer of the name. This can happen if the initial resource is a well-known one, e.g. it could be a directory provided by a large content provider. More typically, the consumer of the name will also be the initial resource, so there would obviously be no problem in locating the initial resource.
The consumer of the name must know how to resolve names from the initial resource. This can be done using the resource description for the initial resource, provided that the consumer knows how to handle the specified resource type, but this is not essential. The consumer may have some other means of identifying and accessing the initial resource.
In practical terms, the initial resource acts as a black box which resolves a name into a resource description and the validity period during which it believes that the mapping is valid. Conceptually, the initial resource resolves the first local name in the name to some resource which we will call the intermediate resource. This resource is described in a machine readable form as in section 2.2. Any name values in attribute-value pairs in the first local name are also resolved into a resource description during this step. The initial resource will also decide the validity period during which it believes that the mapping from the local name to the intermediate resource is valid.
If the name only included a single local name, then the initial resource will return the resource description to the consumer, which will use it to do whatever it needs to with the described resource. Otherwise, the initial resource constructs a new name from the original name with the first local name omitted.
Remaining name values in attribute-value pairs are also resolved into resource descriptions by the same process as described in this section.
The initial resource then uses the resource type identifier to figure out if it knows how to resolve names from the intermediate resource. If the resource type identifier is unknown, then the initial resource tells the consumer that it cannot resolve the given name. Otherwise, the initial resource requests that the intermediate resource resolve the new name constructed above to yet another resource. The intermediate resource basically follows the same procedure as the initial resource, with the initial resource playing the role of the consumer and the intermediate resource playing the role of the initial resource, and returns a resource description and validity period.
The initial resource then returns to the consumer the resource description and the intersection of the validity periods for the intermediate resource and the final resource that was resolved. Since the resource description is returned without modification, the initial resource need not know how to handle the described resource. Figure 4 outlines the resolution process.
The validity periods described above can be either fixed amounts of time during which a mapping is presumably valid after name resolution, or they can be expiration times after which it is assumed that there is a significant probability of the mapping changing. For example, a mapping can be specified as being valid for 10 minutes after name resolution, or it can be specified as being valid until 09:00 on May 3, 2007.
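The recursive step can be summarized in a few lines of code. The sketch below is our own reading of the process, not the actual library API: the Resource interface, the Result pair, and the Resolvers lookup are hypothetical names, and ResourceDescription is the type sketched earlier.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of the recursive resolution process. A Result pairs a
// resource description with the instant until which the mapping is valid.
record Result(ResourceDescription description, Instant validUntil) {}

interface Resource {
    // Resolve a single local name to another resource plus a validity deadline.
    Result resolveLocal(String localName);

    // Resolve a chain of local names by resolving the first one and delegating
    // the remainder to the intermediate resource.
    default Result resolve(List<String> name) {
        Result first = resolveLocal(name.get(0));
        if (name.size() == 1) {
            return first;   // last link in the chain: return the description as-is
        }
        // Look up type-specific resolver code for the intermediate resource;
        // if the type identifier is unknown, resolution fails here.
        Resource intermediate = Resolvers.forDescription(first.description());
        Result rest = intermediate.resolve(name.subList(1, name.size()));
        // Return the final description with the intersection of validity periods.
        Instant validUntil = first.validUntil().isBefore(rest.validUntil())
                ? first.validUntil() : rest.validUntil();
        return new Result(rest.description(), validUntil);
    }
}

// Stand-in for the component that maps type identifiers to resolver code.
final class Resolvers {
    static Resource forDescription(ResourceDescription d) {
        throw new UnsupportedOperationException("type-specific lookup goes here");
    }
}
```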
Common components
Exactly how a resource resolves a name into another resource is entirely dependent on the resource itself. However, parts of the resolution process are basically the same among most resources, so a library which handles these common parts would be useful. The following are the components that would be included in such a library:
Name parser: This parses a name expressed in the canonical syntax.
Recursive name resolver: Given a resource description, one needs to be able to resolve names from the resource described. This component looks at the resource type identifier and invokes the appropriate code which can handle the specified resource type.
Generic name resolver: Name resolution involves parsing the name, resolving the first local name to another resource, asking that other resource to resolve the rest of the name, and updating the validity period of the mapping. This sequence is basically the same for most resources, so a generic name resolver invokes the appropriate components in the correct order.
When the above components are provided by a library, a resource only needs to implement the interface which external computing elements use to resolve names, the mapping from local names to resources, and the code for resolving names from other resource types. The rest of the resolution process is handled by the generic name resolver.
We have implemented a library providing the above components in Java.
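With such a library, the code a resource author writes can be limited to the local-name mapping itself. The fragment below is a guess at what a location resource might look like, reusing the hypothetical Resource and Result types from the earlier sketch rather than the actual library interface.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of a resource that only supplies its own local-name mapping; parsing,
// recursive resolution, and validity bookkeeping come from the shared library.
final class LocationResource implements Resource {
    private final ResourceDescription occupant; // description of the current occupant

    LocationResource(ResourceDescription occupant) {
        this.occupant = occupant;
    }

    @Override
    public Result resolveLocal(String localName) {
        if ("occupant".equals(localName)) {
            // Occupancy changes often, so advertise a short validity period.
            return new Result(occupant, Instant.now().plus(Duration.ofMinutes(10)));
        }
        throw new IllegalArgumentException("unknown local name: " + localName);
    }
}
```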
Optional components
Besides the components that have been previously mentioned, there are common components that only some resources would find useful. These components are not essential in the sense that name resolution would still work without them. One such component is a name cache. A name cache embedded within a resource would cache mappings from names to resource descriptions. The cache would use the validity period of the mapping so that it can expire obsolete mappings. This would improve resolution speed when resolving names is slow for one or more resources, e.g. when a resource must be queried over a slow network or if a large amount of computation is required to resolve a local name.
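A name cache of this kind can be as simple as a map from the textual form of a name to the last resolution result, with entries dropped once their validity period has passed. The sketch below is ours and assumes the hypothetical Result type from the earlier sketches.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a name cache keyed by the canonical textual form of a name.
final class NameCache {
    private final Map<String, Result> entries = new ConcurrentHashMap<>();

    void put(String name, Result result) {
        entries.put(name, result);
    }

    Result get(String name) {
        Result cached = entries.get(name);
        if (cached == null || cached.validUntil().isBefore(Instant.now())) {
            entries.remove(name);   // expired or missing: force re-resolution
            return null;
        }
        return cached;
    }
}
```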
To control access to the local namespace of a resource, we can use authorization certificates to specify whether another resource may access the local namespace. Similarly, to ensure the authenticity of a mapping of a name to a resource, we can use a binding certificate which binds a name to a resource description for a limited time. We plan to use SPKI [5], a public key infrastructure, to implement this kind of access control and authenticity assurance. Similarly to NUN, SPKI does not rely on a global namespace for managing public keys.
We can also envision the use of a resource type repository which can map resource type identifiers to mobile code which is able to resolve names from a resource with the given resource type. A resource using such a repository would be able to name a much wider variety of resources easily. This would require some way to handle mobile code security and a lookup infrastructure such as a distributed hash table.
Evaluation
To illustrate the potential utility of NUN, we have created several simple resource types which cooperate with each other to provide human readable names to resources. The resource types are listed in table 2. The resources are heterogeneous, where some resources are simple static pieces of data and others are network services. Even the network services do not have to use the same communication methods. This is possible because we use the resource type identifier in a resource description to determine how to handle the described resource.
Given the resources listed in table 2, we can think of some plausible scenarios in which names are used:
• The calendar server needs to send a reminder to the moderator when there is a meeting during the day. It can find the moderator's email address by resolving the name (today meeting moderator email) against itself.
The calendar server maps today to the appropriate time period and searches for the first event tagged with meeting. From the event description, it can extract the identifier of the moderator, which is then used to query the user database. The description for the moderator is obtained, from which the email address can be extracted.
• A user of a calendar may wish to know the status of the location for a scheduled meeting. He can use an application which asks the calendar server to resolve the name (today meeting location occupant) to find someone who is at the location.
The application asks the calendar server to resolve the name by invoking an RMI method. The calendar server then internally resolves today and meeting as in the previous example. From the event description, it extracts the location identifier. It then asks the location manager to resolve the name occupant, which is resolved to the user identifier.
Note that the application need only know how to query names from the calendar server and interpret the resource description for a user. It did not have to know about the internals of the calendar server or anything about the location manager.
• In order to begin a presentation, a computer may need a file named naming.ppt owned by a user within a certain location. It can query the name (occupant files naming.ppt) from the location manager using the location identifier. Here we see that an occupant is not only a named resource but can also name other resources.
The location manager will find the user identifier, obtain the user description from the user database, extract the URL prefix of his file collection, and get the URL for the desired file. As in the previous example, the original computer does not need to know anything about users.
The resource types in table 2 were implemented in Java. Each was assigned a random identifier, and its resource specification was defined. Name resolution code and resource types which have builtin support for NUN use the library described in section 3, with the exceptions of the string and file resource types, which do not map any local names.
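As a rough illustration of the first scenario above, a client holding a reference to the calendar server would only need code along these lines; the Resource and Result types are the hypothetical ones sketched earlier, standing in for the actual stub obtained, for example, via RMI.

```java
import java.util.List;

// Sketch of resolving "(today meeting moderator email)" against the calendar
// server; calendarServer stands in for a stub obtained elsewhere (e.g. via RMI).
final class ReminderExample {
    static ResourceDescription moderatorEmail(Resource calendarServer) {
        Result r = calendarServer.resolve(List.of("today", "meeting", "moderator", "email"));
        return r.description();   // a string resource holding the email address
    }
}
```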
Table 2: Resource types
• String: Simple character strings. Email addresses and common names are of this type. A string does not map any local name to other resources.
• File: A file specified by a URL. While it would be ideal if each file mapped names to other resources according to its semantic content, the namespace of a file is empty in our implementation.
• File collection: A collection of files. This is specified as a URL prefix. A local name is mapped to a file by prepending the prefix to the local name.
• Location: A physical location maintained by a location manager. Each location is specified by a unique random identifier. A location maps the local name occupant to the user who is in the location. The location manager is a TCP/IP based server. It can return the list of users in a specified location. It has builtin support for NUN, so it can directly resolve names for a physical location when a specially crafted message is received.
• Calendar: This is an RMI-based calendar server. It supports the query of events within a specified time period that are tagged with specific strings. It also supports NUN natively, so that it can directly resolve names when a certain RMI method is invoked. It maps names such as today to time periods.
• Time period: This is a time period in a specific calendar. It maps a local name to the first event within the time period which includes the local name as a tag. The calendar server resolves names for this resource.
• Event: A scheduled event in a calendar. An event may be tagged by several strings such as meeting or playtime. Each event is associated with a moderator, a location, and a set of related files. Events are described in a static text format. Name resolution is done by interpreting the static data into the appropriate resource description.
• User: Represents a physical user. Each user is specified by a unique random identifier, which is used for indexing a user in a user database server. The user database server is based on TCP/IP and returns a description of the user based on the identifier. The server does not include support for naming, so separate name resolution code is required to map local names by interpreting the description. The resolution code maps local names to email addresses and the collections of users' files.

The user database resided in a 3GHz Pentium D machine with 3GB of RAM, while the location manager and calendar server resided in 1GHz PowerPC machines with 1GB of RAM each. In one configuration the systems were connected over Gigabit Ethernet, while in another configuration they were connected to each other by an 802.11g wireless network.
We measured the time it took to resolve names into resources for the examples we discussed. We also measured the time it took when we queried the resources directly to obtain the necessary contextual information and to discover the desired resource based on this information. The actual work done by the two approaches is basically the same, but the former approach is much simpler since we only need to query the appropriate resource with a name that is easy to construct. The latter approach requires that code be written for each situation to query the necessary information sources, which is substantially more complex and is often not possible. Table 3 compares the amount of time each approach takes when the systems are connected over Gigabit Ethernet. Each name resolution was repeated 1000 times. The measurements show that using NUN has a negligible impact on performance. In fact, the overhead from NUN pales in comparison to the variability due to the network. This is even more pronounced with a wireless network, as can be seen in table 4.
Conclusions
In this paper, we have described the Non-anchored Unified Naming system. Instead of having a naming service which exists independently from resources, its approach is to have resources themselves name other resources by local names. The rationale is that a resource is best suited to apply its own specialized knowledge and capabilities when resolving names which incorporate them.
A name is a chain of local names which is resolved by an initial resource, which is determined according to the needs of users and applications. Eschewing the use of absolute naming and using only relative naming makes it simple to handle unpredictable situations that may arise within a ubiquitous computing environment. NUN is capable of naming arbitrary resources by resolving names into a resource described by a flexible resource description scheme. This allows the use of a consistent naming scheme for identifying arbitrary types of resources. It also makes it simple to incorporate new kinds of contextual information within the name simply by adding new resources which provide the desired information.
The name resolution process does not require that a single computing element know how to handle all resource types. This simplifies the implementation of resources and reduces the amount of memory required to support naming. This allows limited devices such as PDAs or other electronic appliances to participate in naming, contributing their specialized knowledge to the resolution process.
The ease with which new contextual information sources may be added, the ability to handle ad hoc situations, and the ability to provide a consistent naming scheme for arbitrary resources make NUN suitable for identifying resources in a ubiquitous computing environment.
| 4,502 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
french02 cites Structure Mapping Theory (SMT) @cite_43 and its implementation in the Structure Mapping Engine (SME) @cite_54 as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
|
{
"abstract": [
"A theory of analogy must describe how the meaning of on analogy is derived from the meonings of its parts. In the structure-mapplng theory, the interpretation rules ore characterized OS implicit rules for mapping knowledge about a base domain into a torget domain. Two important features of the theory are (a) the rules depend only on syntactic properties of the knowledge representation, and not on the specific content of the domoins; ond (b) the theoretical fromework allows analogies to be distinguished o ond (b) The particular relations mapped ore determined by systemaficity. OS defined by the existence of higher-order relations.",
"Thispaperdescribes thestructure-mapping engine(SME), a program for studying . analogical processing .SME has been built to explore Gentner's structure-mapping theory of analogy, and provides a \"tool kit\" for constructing matching algorithms consistent with this theory . Its flexibility enhances cognitive simulation studies by simplifying experimentation . Furthermore, SME is very efficient, making it a useful component in machine learning systems as well . We review the structure-mapping theory and describe the design of the engine . We analyze the complexity of the algorithm, and demonstrate that"
],
"cite_N": [
"@cite_43",
"@cite_54"
],
"mid": [
"2026161499",
"2145454741"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969;Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b;Turney, 2001;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3. This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapt the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
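At its core, the VSM measure compares two word pairs by the cosine of their pattern-frequency vectors. The sketch below illustrates the idea only; the pattern counts are placeholders, not the 64 joining patterns or the real corpus frequencies used by Turney and Littman (2005).

```java
// Illustrative sketch of the VSM idea: the relation of a word pair is a vector
// of corpus frequencies of joining patterns (e.g. "X flows over Y"), and two
// pairs are compared by the cosine of their vectors.
final class VsmRelationalSimilarity {
    static double cosine(double[] a, double[] b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] trafficStreet = {12, 0, 3, 7};   // placeholder pattern counts
        double[] waterRiverbed = {10, 1, 2, 5};   // placeholder pattern counts
        System.out.println(cosine(trafficStreet, waterRiverbed));
    }
}
```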
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
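The SVD step in (2) can be summarized as a low-rank smoothing of the pair-by-pattern frequency matrix; the notation below is the standard truncated-SVD formulation, not taken verbatim from the paper.

```latex
% X is the (word pair) x (pattern) frequency matrix; keeping the top k
% singular values yields the best rank-k approximation in the Frobenius norm.
X = U \Sigma V^{\mathsf{T}}, \qquad
\hat{X}_k = U_k \Sigma_k V_k^{\mathsf{T}}
          = \operatorname*{arg\,min}_{\operatorname{rank}(Y) \le k} \lVert X - Y \rVert_F
```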
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.

We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
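In outline, the classification step is a single-nearest-neighbour rule with relational similarity playing the role of (inverse) distance. The sketch below is illustrative only; the similarity function is a stand-in for LRA (or the VSM), and all names are ours.

```java
import java.util.List;

// Sketch of single-nearest-neighbour classification of noun-modifier pairs.
final class NearestNeighbourClassifier {
    record LabeledPair(String head, String modifier, String relation) {}

    interface RelationalSimilarity {
        double sim(String headA, String modA, String headB, String modB);
    }

    static String classify(String head, String modifier,
                           List<LabeledPair> training, RelationalSimilarity lra) {
        String best = null;
        double bestSim = Double.NEGATIVE_INFINITY;
        for (LabeledPair t : training) {
            double s = lra.sim(head, modifier, t.head(), t.modifier());
            if (s > bestSim) {          // keep the single most similar training pair
                bestSim = s;
                best = t.relation();
            }
        }
        return best;                    // label of the nearest neighbour
    }
}
```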
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y, X is larger than Y). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X), or a relation between X and some standard Y, LARGER THAN(X, Y).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, sim_a(A, B) ∈ ℜ. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.
The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, sim_r(A:B, C:D) ∈ ℜ. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969;Landauer and Dumais, 1997;Lin, 1998a;Turney, 2001), or a hybrid of the two (Resnik, 1995;Jiang and Conrath, 1997;Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice TOEFL synonym questions (see Table 2). Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, sim_a, between A and C and between B and D:

$$\mathrm{score}(A{:}B{::}C{:}D) = \frac{1}{2}\left(\mathrm{sim}_a(A, C) + \mathrm{sim}_a(B, D)\right) \qquad (1)$$
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

$$\mathrm{precision} = \frac{\text{number of correct guesses}}{\text{total number of guesses made}} \qquad (2)$$

$$\mathrm{recall} = \frac{\text{number of correct guesses}}{\text{maximum possible number of correct guesses}} \qquad (3)$$

$$F = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \qquad (4)$$
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
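Working through the arithmetic for this row as a check:

```latex
\mathrm{precision} = \frac{120}{120 + 224} \approx 0.349, \qquad
\mathrm{recall} = \frac{120}{120 + 224 + 30} \approx 0.321, \qquad
F = \frac{2 \times 0.349 \times 0.321}{0.349 + 0.321} \approx 0.334
```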
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.
Structure Mapping Theory
French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). It has been argued that novel metaphors are understood using analogy, but conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors. Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms. Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogybased approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).
Automatic Thesaurus Generation
Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.
Information Retrieval
Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y, such that there is high relational similarity between the pair A:X and the pair Y:B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
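To make the vector-space ranking concrete, here is a minimal sketch in Python: it builds term-frequency vectors for a toy query and two toy documents and sorts the documents by cosine. The documents, the query, and the bare term-frequency weighting are invented for illustration; as noted above, real search engines apply more elaborate weighting formulas.

```python
import math
from collections import Counter

def tf_vector(text, vocab):
    """Term-frequency vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine of the angle between two vectors (0.0 if either is all zeros)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = {
    "doc1": "the stone mason shaped the stone wall",
    "doc2": "the carpenter shaped the wood beam",
}
query = "stone wall"

vocab = sorted({w for text in list(docs.values()) + [query] for w in text.lower().split()})
q_vec = tf_vector(query, vocab)

# Rank documents by decreasing cosine with the query, as a VSM search engine would.
ranking = sorted(docs, key=lambda d: cosine(q_vec, tf_vector(docs[d], vocab)), reverse=True)
print(ranking)  # doc1 ranks above doc2 for this query
```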
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let $R_1$ be the semantic relation (or set of relations) between a pair of words, A and B, and let $R_2$ be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between $R_1$ and $R_2$. The relations $R_1$ and $R_2$ are not given to us; our task is to infer these hidden (latent) relations and then compare them.
In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, $\mathbf{r}_1$ and $\mathbf{r}_2$, that represent features of $R_1$ and $R_2$, and then measure the similarity of $R_1$ and $R_2$ by the cosine of the angle $\theta$ between $\mathbf{r}_1$ and $\mathbf{r}_2$:

$$\mathbf{r}_1 = \langle r_{1,1}, \ldots, r_{1,n} \rangle \qquad (5)$$
$$\mathbf{r}_2 = \langle r_{2,1}, \ldots, r_{2,n} \rangle \qquad (6)$$
$$\mathrm{cosine}(\theta) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^2} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\sqrt{\mathbf{r}_1 \cdot \mathbf{r}_1} \cdot \sqrt{\mathbf{r}_2 \cdot \mathbf{r}_2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\|\mathbf{r}_1\| \, \|\mathbf{r}_2\|} \qquad (7)$$
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
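The following sketch illustrates this construction: a word pair is mapped to a vector of log-transformed phrase counts and the stem pair of an analogy question is compared to each choice pair by cosine. The joining terms shown are only a small subset of the 64, and the `phrase_hits` dictionary is a stand-in for the search-engine hit counts; all of the numbers are invented for illustration.

```python
import math

# A small subset of joining terms; the full VSM uses 64 of them (128 phrases per pair).
JOINING_TERMS = ["of", "for", "to", "in"]

def phrase_patterns(pair):
    """The phrases "X t Y" and "Y t X" for each joining term t."""
    x, y = pair
    return [f"{x} {t} {y}" for t in JOINING_TERMS] + [f"{y} {t} {x}" for t in JOINING_TERMS]

def relational_vector(pair, hit_counts):
    """log(x + 1) of the number of hits for each phrase built from the pair."""
    return [math.log(hit_counts.get(p, 0) + 1) for p in phrase_patterns(pair)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical hit counts standing in for search-engine queries.
phrase_hits = {
    "quart of volume": 12, "volume in quart": 8,
    "mile of distance": 10, "distance in mile": 9,
    "day of night": 2,
}

stem = ("quart", "volume")
choices = [("day", "night"), ("mile", "distance")]

stem_vec = relational_vector(stem, phrase_hits)
best = max(choices, key=lambda c: cosine(stem_vec, relational_vector(c, phrase_hits)))
print(best)  # ('mile', 'distance') under these made-up counts
```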
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.
Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10¹¹ words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. 3
Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10¹⁰ English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.
Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). 4 The corpus consists of about 5 × 10¹⁰ English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10⁷ English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num sim words (in the following experiments, num sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6
This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.
Stem: quart:volume
Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole
Solution: (b) mile:distance
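A minimal sketch of step 1, with a small hand-made dictionary standing in for Lin's thesaurus; the synonym lists and the small num_sim value are illustrative only.

```python
def find_alternates(pair, thesaurus, num_sim=10):
    """Step 1 (sketch): build alternate pairs A':B and A:B' from the most similar words.

    `thesaurus` maps a word to a list of similar words, most similar first,
    standing in for Lin's (1998a) thesaurus.
    """
    a, b = pair
    alternates = []
    for a_prime in thesaurus.get(a, [])[:num_sim]:
        alternates.append((a_prime, b))
    for b_prime in thesaurus.get(b, [])[:num_sim]:
        alternates.append((a, b_prime))
    return alternates  # up to 2 * num_sim alternate pairs

# Toy thesaurus entries, invented for illustration.
toy_thesaurus = {
    "quart": ["pint", "gallon", "litre"],
    "volume": ["capacity", "quantity", "amount"],
}
print(find_alternates(("quart", "volume"), toy_thesaurus, num_sim=2))
# [('pint', 'volume'), ('gallon', 'volume'), ('quart', 'capacity'), ('quart', 'quantity')]
```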
2. Filter alternates: For each original pair A:B, filter the 2 × num sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max phrase words (we use max phrase = 5). Sort the alternate pairs by the frequency of their phrases. Select the top num filter most frequent alternates and discard the remainder (we use num filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max phrase words. The last column in Table 7 shows the pairs that are selected.
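A sketch of step 2; the `phrase_freq` dictionary is a stand-in for WMTS frequency queries, and the counts are invented (including a pumping:volume alternate, echoing the example discussed later in step 12).

```python
def filter_alternates(original, alternates, phrase_freq, num_filter=3):
    """Step 2 (sketch): keep the num_filter alternates whose members co-occur
    most often in short phrases, discarding the rest.

    `phrase_freq` maps a word pair to the number of corpus phrases (at most
    max_phrase words) that begin with one member and end with the other; here
    it is a plain dictionary standing in for WMTS queries.
    """
    ranked = sorted(alternates, key=lambda p: phrase_freq.get(p, 0), reverse=True)
    return [original] + ranked[:num_filter]

# Hypothetical phrase frequencies for some alternates of quart:volume.
freqs = {
    ("pint", "volume"): 46,
    ("gallon", "volume"): 210,
    ("quart", "capacity"): 18,
    ("quart", "quantity"): 2,
    ("pumping", "volume"): 57,
}
kept = filter_alternates(("quart", "volume"), list(freqs.keys()), freqs, num_filter=3)
print(kept)  # the original pair plus the three most frequent alternates
```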
3. Find phrases: For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns: For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num patterns most frequent patterns and discard the rest (we use num patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
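A sketch of step 4, assuming the phrases have already been retrieved in step 3; the phrases are toy examples and the placeholder names word1/word2 for the pair members are our own convention.

```python
from collections import Counter
from itertools import product

def patterns_from_phrase(phrase):
    """Step 4 (sketch): turn one corpus phrase into its wildcard patterns.

    The first and last words of the phrase are the two members of a word pair
    (in either order); any subset of the n - 2 intervening words may be replaced
    by "*", so a phrase of n words yields 2**(n - 2) patterns.
    """
    words = phrase.split()
    inner = words[1:-1]
    patterns = []
    for mask in product([False, True], repeat=len(inner)):
        middle = ["*" if wild else w for w, wild in zip(inner, mask)]
        patterns.append(" ".join(["word1"] + middle + ["word2"]))
    return patterns

# Toy phrases for quart:volume (cf. Table 8); the real phrases come from the WMTS corpus.
phrases = ["quarts in volume", "volume in quarts", "quart of spray volume"]
counts = Counter(p for ph in phrases for p in patterns_from_phrase(ph))

num_patterns = 4  # the experiments keep the top 4,000 patterns; 4 keeps the example small
print(counts.most_common(num_patterns))
```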
5. Map pairs to rows: In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7
Alternate forms of the original pair quart:volume. The first column (Word pair) shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8
Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9
Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                          P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")        4           1            5           19
freq("volume P quart")       10           0            2           16
6. Map patterns to columns: Create a mapping of the top num patterns patterns to column numbers. For each pattern P , create a column for "word 1 P word 2 " and another column for "word 2 P word 1 ". Thus there will be 2 × num patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
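A sketch of steps 5 through 7, building the sparse pair-pattern matrix with scipy; the pairs, patterns, and counts are toy values (the first two rows echo Table 9), and the word1/word2 placeholder notation is our own convention.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy inputs standing in for the real pairs and the top num_patterns patterns.
pairs = [("quart", "volume"), ("volume", "quart"),
         ("mile", "distance"), ("distance", "mile")]
patterns = ['word1 in word2', 'word1 * of word2', 'word1 of * word2', 'word1 * * word2']

# Steps 5 and 6 (sketch): map each pair to a row and each pattern to a column.
row_of = {pair: i for i, pair in enumerate(pairs)}
col_of = {pat: j for j, pat in enumerate(patterns)}

# Step 7 (sketch): hypothetical counts of how often each pattern occurs in
# phrases containing each pair (the first two rows echo Table 9).
observed = {
    (("quart", "volume"), 'word1 in word2'): 4,
    (("quart", "volume"), 'word1 * * word2'): 19,
    (("volume", "quart"), 'word1 in word2'): 10,
    (("volume", "quart"), 'word1 * * word2'): 16,
    (("mile", "distance"), 'word1 in word2'): 7,
}

rows = [row_of[p] for (p, _) in observed]
cols = [col_of[pat] for (_, pat) in observed]
data = list(observed.values())

X = coo_matrix((data, (rows, cols)),
               shape=(len(pairs), len(patterns)), dtype=np.float64)
print(X.toarray())  # a small, mostly zero pair-pattern frequency matrix
```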
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let $x_{i,j}$ be the cell in row $i$ and column $j$ of the matrix X from step 7. Let $m$ be the number of rows in X and let $n$ be the number of columns. We wish to weight the cell $x_{i,j}$ by the entropy of the $j$-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let $p_{i,j}$ be the probability of $x_{i,j}$, calculated by normalizing the column vector so that the sum of the elements is one, $p_{i,j} = x_{i,j} / \sum_{k=1}^{m} x_{k,j}$. The entropy of the $j$-th column is then $H_j = -\sum_{k=1}^{m} p_{k,j} \log(p_{k,j})$. Entropy is at its maximum when $p_{i,j}$ is a uniform distribution, $p_{i,j} = 1/m$, in which case $H_j = \log(m)$. Entropy is at its minimum when $p_{i,j}$ is 1 for some value of $i$ and 0 for all other values of $i$, in which case $H_j = 0$. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell $x_{i,j}$ by $w_j = 1 - H_j / \log(m)$, which varies from 0 when $p_{i,j}$ is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, $\log(x_{i,j} + 1)$. (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all $i$ and all $j$, replace the original value $x_{i,j}$ in X by the new value $w_j \log(x_{i,j} + 1)$. This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): $\log(x_{i,j} + 1)$ is the TF term and $w_j$ is the IDF term.
9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices $U \Sigma V^T$, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: $U^T U = V^T V = I$) and Σ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank $r$, then Σ is also of rank $r$. Let $\Sigma_k$, where $k < r$, be the diagonal matrix formed from the top $k$ singular values, and let $U_k$ and $V_k$ be the matrices produced by selecting the corresponding columns from U and V. The matrix $U_k \Sigma_k V_k^T$ is the matrix of rank $k$ that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, $\hat{X} = U_k \Sigma_k V_k^T$ minimizes $\|\hat{X} - X\|_F$ over all matrices $\hat{X}$ of rank $k$, where $\|\cdot\|_F$ denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix $U_k \Sigma_k V_k^T$ as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix $X X^T$ contains the dot products of all of the row vectors. We can find the dot product of the $i$-th and $j$-th row vectors by looking at the cell in row $i$, column $j$ of the matrix $X X^T$. Since $V^T V = I$, we have $X X^T = U \Sigma V^T (U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma (U \Sigma)^T$, which means that we can calculate cosines with the smaller matrix $U \Sigma$, instead of using $X = U \Sigma V^T$ (Deerwester et al., 1990).
10. Projection: Calculate $U_k \Sigma_k$ (we use $k = 300$). This matrix has the same number of rows as X, but only $k$ columns (instead of 2 × num patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in $U_k \Sigma_k$. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value $k = 300$ is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.
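To make steps 8 through 10 concrete, the following sketch applies the log-entropy weighting and a truncated SVD to a tiny dense matrix, with numpy standing in for SVDLIBC; the matrix and the value of k are toy choices (the first two rows echo the Table 9 frequencies), not the real 17,232 × 8,000 data.

```python
import numpy as np

def log_entropy(X):
    """Step 8 (sketch): replace x_ij by w_j * log(x_ij + 1), where
    w_j = 1 - H_j / log(m) and H_j is the entropy of column j,
    computed from the raw frequencies."""
    m = X.shape[0]
    col_sums = X.sum(axis=0)
    P = np.divide(X, col_sums, out=np.zeros_like(X), where=col_sums > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    H = -plogp.sum(axis=0)
    w = 1.0 - H / np.log(m)
    return w * np.log(X + 1.0)

def project(X, k):
    """Steps 9-10 (sketch): truncated SVD; the rows of U_k Sigma_k give the
    same cosines as the rows of the rank-k matrix U_k Sigma_k V_k^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(s))
    return U[:, :k] * s[:k]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy pair-pattern frequency matrix: rows are word pairs, columns are patterns.
X = np.array([
    [4.0, 1.0, 5.0, 19.0],   # quart:volume (cf. Table 9)
    [10.0, 0.0, 2.0, 16.0],  # volume:quart (cf. Table 9)
    [7.0, 2.0, 1.0, 12.0],   # a hypothetical third pair
    [0.0, 3.0, 6.0, 2.0],    # a hypothetical fourth pair
])

Z = project(log_entropy(X), k=2)
print(round(cosine(Z[0], Z[1]), 3))  # relational similarity of the pairs in rows 0 and 1
```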
11. Evaluate alternates: Let A:B and C:D be any two word pairs in the input set whose relational similarity we wish to measure. From step 2, we have the original pair A:B plus num filter alternate forms, and likewise for C:D, which gives (num filter + 1)² ways to pair a version of A:B with a version of C:D. For each of these combinations, look up the corresponding row vectors in $U_k \Sigma_k$ and calculate the cosine. With num filter = 3, this yields sixteen cosines. Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num filter + 1)² cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.

Table 10
The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.
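A sketch of the combination rule in steps 11 and 12: keep only the cosines that are at least as large as the cosine of the two original pairs and average them. The 0.525 original cosine comes from the worked example; the remaining fifteen cosines are invented.

```python
def relational_similarity(cosines, original_cosine):
    """Steps 11-12 (sketch): among the (num_filter + 1)^2 cosines for all
    combinations of original and alternate pairs, keep those that are at least
    as large as the cosine of the two original pairs, and average them."""
    kept = [c for c in cosines if c >= original_cosine]
    return sum(kept) / len(kept)  # the original cosine is in the list, so kept is never empty

# The original-vs-original cosine (0.525 in the worked example); the other
# fifteen cosines for the alternate combinations are invented here.
original = 0.525
all_sixteen = [original, 0.781, 0.730, 0.640, 0.612, 0.120, 0.095, 0.301,
               0.210, 0.455, 0.080, 0.150, 0.330, 0.480, 0.760, 0.700]

print(round(relational_similarity(all_sixteen, original), 3))
```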
Steps 11 and 12 can be repeated for each two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.

Table 11
Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies (e.g., the maximum cosine in Table 10).

Baseline LRA System

Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values).
Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.

LRA versus VSM

Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10¹¹ English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10¹⁰ English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15
Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
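The Binomial Exact (Clopper-Pearson) interval can be computed from the beta distribution; this sketch, using scipy, assumes the baseline figure of 210 correct answers out of 374 questions.

```python
from scipy.stats import beta

def binomial_exact_ci(successes, trials, confidence=0.95):
    """Clopper-Pearson (Binomial Exact) confidence interval for a proportion."""
    alpha = 1.0 - confidence
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    upper = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lower, upper

# Baseline LRA: 210 of 374 SAT questions answered correctly.
low, high = binomial_exact_ci(210, 374)
print(f"recall = {210 / 374:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

The resulting interval, roughly 0.51 to 0.61, should contain the 57% human average, consistent with Table 15.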
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num sim and max phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, $\mathrm{sim}_r$, should obey the following equation:

$$\mathrm{sim}_r(A{:}B, C{:}D) = \mathrm{sim}_r(B{:}A, D{:}C) \qquad (8)$$
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P , one for "word 1 P word 2 " and another for "word 2 P word 1 ". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word 1 P word 2 " from the pattern "word 2 P word 1 " (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

$$\mathrm{sim}_r(A{:}B, C{:}D) \neq \mathrm{sim}_r(B{:}A, C{:}D) \qquad (9)$$
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num filter + 1)² cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements.
To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space $U_k \Sigma_k$, we use the matrix $U_k \Sigma_k V_k^T$. This matrix yields the same cosines as $U_k \Sigma_k$, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in $U_k \Sigma_k V_k^T$, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact. Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
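A sketch of the truncation used in this experiment: keep the N largest values of each row vector, zero the rest, and recompute the cosine. The two vectors here are synthetic, so the numbers only illustrate the mechanics, not the SAT results in Table 18.

```python
import numpy as np

def keep_top_n(v, n):
    """Keep the n largest values of v and set every other element to zero."""
    out = np.zeros_like(v)
    idx = np.argsort(v)[-n:]
    out[idx] = v[idx]
    return out

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two synthetic 8,000-dimensional row vectors that share most of their structure.
rng = np.random.default_rng(0)
base = rng.standard_normal(8000)
r1 = base + 0.3 * rng.standard_normal(8000)
r2 = base + 0.3 * rng.standard_normal(8000)

print("full cosine:", round(cosine(r1, r2), 3))
for n in (1, 30, 300, 3000):
    truncated = cosine(keep_top_n(r1, n), keep_top_n(r2, n))
    print(f"top {n:>4} values only: cosine = {truncated:.3f}")
```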
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M ) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") ( †).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-oneout cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
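A sketch of the leave-one-out single nearest neighbour evaluation described above, assuming the pairwise relational similarities have already been computed; the similarity matrix and labels are random stand-ins for the 600 pairs and 30 classes.

```python
import numpy as np

def leave_one_out_nn_accuracy(sim, labels):
    """Leave-one-out single nearest neighbour accuracy, given a matrix of
    pairwise relational similarities (sim[i, j] = similarity of pairs i and j)."""
    n = len(labels)
    correct = 0
    for i in range(n):
        scores = sim[i].copy()
        scores[i] = -np.inf               # exclude the test pair itself
        nearest = int(np.argmax(scores))  # its single nearest neighbour
        correct += int(labels[nearest] == labels[i])
    return correct / n

# Random stand-ins for the 600 x 600 LRA similarities and the 30 class labels.
rng = np.random.default_rng(2)
n = 20
sim = rng.random((n, n))
sim = (sim + sim.T) / 2                   # similarities are symmetric
labels = list(rng.integers(0, 3, size=n))

print(f"leave-one-out accuracy: {leave_one_out_nn_accuracy(sim, labels):.2f}")
```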
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
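A sketch of the macroaveraged F computation described above; the two-class label lists are invented to show that the small class carries as much weight as the large one.

```python
def macro_f(true_labels, predicted_labels):
    """Macroaveraged F: compute precision, recall, and F per class, then average,
    giving every class equal weight regardless of its size."""
    classes = sorted(set(true_labels))
    f_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == c and p == c)
        fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t != c and p == c)
        fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f_scores.append(f)
    return sum(f_scores) / len(f_scores)

# Invented labels for illustration: a large class "a" and a small class "b".
true = ["a"] * 8 + ["b"] * 2
pred = ["a"] * 7 + ["b"] + ["b", "a"]
print(round(macro_f(true, pred), 3))
```

In this toy example the macroaveraged F is about 0.69, while microaveraging (which equals accuracy in this single-label setting) would give 0.80, showing how macroaveraging keeps the small class from being swamped by the large one.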
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).

LRA versus VSM

Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994; Barker and Szpakowicz, 1998; Rosario and Hearst, 2001; Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpus-based, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs.
| 14,134 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
For example, there is an analogy between the solar system and Rutherford's model of the atom @cite_54 . The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
|
{
"abstract": [
"Thispaperdescribes thestructure-mapping engine(SME), a program for studying . analogical processing .SME has been built to explore Gentner's structure-mapping theory of analogy, and provides a \"tool kit\" for constructing matching algorithms consistent with this theory . Its flexibility enhances cognitive simulation studies by simplifying experimentation . Furthermore, SME is very efficient, making it a useful component in machine learning systems as well . We review the structure-mapping theory and describe the design of the engine . We analyze the complexity of the algorithm, and demonstrate that"
],
"cite_N": [
"@cite_54"
],
"mid": [
"2145454741"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969; Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b; Turney, 2001; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3.
This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapted the VSM approach to measuring relational similarity, using a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.
We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y, X is larger than Y). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X), or a relation between X and some standard Y, LARGER THAN(X, Y).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, sim_a(A, B) ∈ ℜ. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.
The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, sim_r(A:B, C:D) ∈ ℜ. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonimic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969;Landauer and Dumais, 1997;Lin, 1998a;Turney, 2001), or a hybrid of the two (Resnik, 1995;Jiang and Conrath, 1997;Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the Test of English as a Foreign Language (TOEFL); an example appears in Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, sim_a, between A and C and between B and D:

$$\mathrm{score}(A{:}B{::}C{:}D) = \frac{1}{2}\left(\mathrm{sim}_a(A, C) + \mathrm{sim}_a(B, D)\right) \tag{1}$$
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

$$\text{precision} = \frac{\text{number of correct guesses}}{\text{total number of guesses made}} \tag{2}$$

$$\text{recall} = \frac{\text{number of correct guesses}}{\text{maximum possible number of correct guesses}} \tag{3}$$

$$F = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \tag{4}$$
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
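As a quick check of the arithmetic in this example (the counts 120 correct, 224 incorrect, and 30 skipped are taken from the text; the code itself is only illustrative):

```python
# Recomputing precision, recall, and F for the Hirst and St-Onge example:
# 120 correct, 224 incorrect, 30 skipped (no guess made).
correct, incorrect, skipped = 120, 224, 30

precision = correct / (correct + incorrect)          # guesses actually made
recall = correct / (correct + incorrect + skipped)   # all 374 questions
f = 2 * precision * recall / (precision + recall)

print(f"precision = {precision:.3f}")  # ~0.349
print(f"recall    = {recall:.3f}")     # ~0.321
print(f"F         = {f:.3f}")          # ~0.334
```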
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. 2 The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.
Structure Mapping Theory
French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). It has been argued that novel metaphors are understood using analogy, but conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors.
Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms.
Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking for entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).
Automatic Thesaurus Generation
Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.
Information Retrieval
Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y , such that there is high relational similarity between the pair A:X and the pair Y :B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
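A minimal sketch of this ranking step, using raw term frequencies and a toy three-document corpus (both invented here) in place of the weighting schemes used by real search engines:

```python
import math
from collections import Counter

# Toy VSM ranking: represent the query and each document by term-frequency
# vectors and sort documents by decreasing cosine. Real search engines apply
# additional weighting (e.g., TF-IDF), which is omitted in this sketch.
documents = {
    "d1": "the mason cut the stone into blocks",
    "d2": "the carpenter shaped the wood with a plane",
    "d3": "traffic on the street was heavy",
}
query = "mason stone"

def tf_vector(text):
    return Counter(text.split())

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

q = tf_vector(query)
for d in sorted(documents, key=lambda d: cosine(q, tf_vector(documents[d])),
                reverse=True):
    print(d, f"{cosine(q, tf_vector(documents[d])):.3f}")
```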
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let R_1 be the semantic relation (or set of relations) between a pair of words, A and B, and let R_2 be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between R_1 and R_2. The relations R_1 and R_2 are not given to us; our task is to infer these hidden (latent) relations and then compare them.
In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, r_1 and r_2, that represent features of R_1 and R_2, and then measure the similarity of R_1 and R_2 by the cosine of the angle θ between r_1 and r_2:

$$\mathbf{r}_1 = \langle r_{1,1}, \ldots, r_{1,n} \rangle \tag{5}$$

$$\mathbf{r}_2 = \langle r_{2,1}, \ldots, r_{2,n} \rangle \tag{6}$$

$$\cos(\theta) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^2} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\sqrt{\mathbf{r}_1 \cdot \mathbf{r}_1} \cdot \sqrt{\mathbf{r}_2 \cdot \mathbf{r}_2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\|\mathbf{r}_1\| \, \|\mathbf{r}_2\|} \tag{7}$$
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
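The construction can be sketched as follows; the hit counts are invented, only three of the 64 joining terms are listed (so the vector has 6 elements instead of 128), and the call to a search engine is omitted, so this is an illustration of the representation rather than the actual system:

```python
import math

# Sketch of the VSM relation vector of Turney and Littman (2005). A real
# implementation sends each phrase to a search engine and records the hit
# count; here the counts are invented and only 3 joining terms are shown.
joining_terms = ["of", "for", "to"]

def phrases(x, y):
    # "X of Y", "Y of X", "X for Y", ... : two phrases per joining term.
    return [p for t in joining_terms for p in (f"{x} {t} {y}", f"{y} {t} {x}")]

def relation_vector(hit_counts):
    # log(x + 1) transformation of the raw hit counts.
    return [math.log(x + 1) for x in hit_counts]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(phrases("mason", "stone"))                     # the 6 phrases to query
r_stem = relation_vector([120, 3, 45, 0, 9, 1])      # invented hit counts
r_choice = relation_vector([90, 5, 30, 1, 12, 0])    # invented hit counts
print(f"cosine = {cosine(r_stem, r_choice):.3f}")
```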
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.
Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.
Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). The corpus consists of about 5 × 10^10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num sim words (in the following experiments, num sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6: This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.
Stem: quart:volume
Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole
Solution: (b) mile:distance
2. Filter alternates:
For each original pair A:B, filter the 2 × num sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max phrase words (we use max phrase = 5). Sort the alternate pairs by the frequency of their phrases.
Select the top num filter most frequent alternates and discard the remainder (we use num filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max phrase words. The last column in Table 7 shows the pairs that are selected.
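A sketch of this filtering step, with an invented frequency table standing in for the WMTS queries:

```python
# Filter the alternates of a pair by corpus co-occurrence frequency, keeping
# the top num_filter = 3. The alternate pairs and frequencies are invented;
# in LRA they come from WMTS queries over windows of at most 5 words.
num_filter = 3

alternate_freq = {
    ("pint", "volume"): 29,
    ("gallon", "volume"): 65,
    ("quart", "turnover"): 0,
    ("quart", "mass"): 7,
    ("litre", "volume"): 43,
    ("quart", "sale"): 2,
}

kept = sorted(alternate_freq, key=alternate_freq.get, reverse=True)[:num_filter]
print("kept alternates:", kept)   # the three most frequent alternate pairs
```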
3. Find phrases:
For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns:
For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num patterns most frequent patterns and discard the rest (we use num patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
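The wild-card expansion can be written compactly; the sketch below generates the 2^(n−2) patterns for one of the phrases in Table 8 (the code is an illustration, not the original implementation):

```python
from itertools import product

def patterns(phrase):
    """Generate all wild-card patterns for a phrase whose first and last
    words are the members of the word pair; each intervening word is either
    kept or replaced by '*', giving 2**(n-2) patterns for an n-word phrase.
    (In LRA the stored pattern is essentially the intervening part; the word
    pair and its order are attached when rows and columns are created.)"""
    words = phrase.split()
    first, middle, last = words[0], words[1:-1], words[-1]
    for mask in product([False, True], repeat=len(middle)):
        inner = ["*" if use_wildcard else w
                 for w, use_wildcard in zip(middle, mask)]
        yield " ".join([first] + inner + [last])

for p in patterns("volume of milk in quarts"):
    print(p)   # 2**3 = 8 patterns for this 5-word phrase
```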
5. Map pairs to rows:
In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7: Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8: Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9: Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".
                           P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")         4          1            5           19
freq("volume P quart")        10          0            2           16
6. Map patterns to columns: Create a mapping of the top num patterns patterns to column numbers. For each pattern P , create a column for "word 1 P word 2 " and another column for "word 2 P word 1 ". Thus there will be 2 × num patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
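A toy version of the matrix construction is sketched below; the pairs, patterns, and frequencies are invented, and a Python dictionary of nonzero cells stands in for the SVDLIBC sparse-matrix file format:

```python
import numpy as np

# Toy pair-pattern frequency matrix (step 7). Rows are word pairs (A:B and
# B:A), columns are patterns ("word1 P word2" and "word2 P word1" forms).
# The pairs, patterns, and counts below are invented for this sketch.
pairs = ["quart:volume", "volume:quart", "mile:distance", "distance:mile"]
patterns = ["word1 in word2", "word1 * of word2",
            "word1 of * word2", "word1 * * word2"]

# Sparse representation: only nonzero (row, column) -> frequency entries.
nonzero = {
    (0, 0): 4, (0, 1): 1, (0, 2): 5, (0, 3): 19,
    (1, 0): 10, (1, 2): 2, (1, 3): 16,
    (2, 0): 7, (2, 3): 11,
    (3, 1): 3,
}

X = np.zeros((len(pairs), len(patterns)))
for (i, j), freq in nonzero.items():
    X[i, j] = freq
print(X)
```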
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let $x_{i,j}$ be the cell in row i and column j of the matrix X from step 7. Let m be the number of rows in X and let n be the number of columns. We wish to weight the cell $x_{i,j}$ by the entropy of the j-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let $p_{i,j}$ be the probability of $x_{i,j}$, calculated by normalizing the column vector so that the sum of the elements is one, $p_{i,j} = x_{i,j} / \sum_{k=1}^{m} x_{k,j}$. The entropy of the j-th column is then $H_j = -\sum_{k=1}^{m} p_{k,j} \log(p_{k,j})$. Entropy is at its maximum when $p_{i,j}$ is a uniform distribution, $p_{i,j} = 1/m$, in which case $H_j = \log(m)$. Entropy is at its minimum when $p_{i,j}$ is 1 for some value of i and 0 for all other values of i, in which case $H_j = 0$. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell $x_{i,j}$ by $w_j = 1 - H_j / \log(m)$, which varies from 0 when $p_{i,j}$ is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, $\log(x_{i,j} + 1)$. (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all i and all j, replace the original value $x_{i,j}$ in X by the new value $w_j \log(x_{i,j} + 1)$. This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): $\log(x_{i,j} + 1)$ is the TF term and $w_j$ is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices $U \Sigma V^T$, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: $U^T U = V^T V = I$) and $\Sigma$ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank r, then $\Sigma$ is also of rank r. Let $\Sigma_k$, where $k < r$, be the diagonal matrix formed from the top k singular values, and let $U_k$ and $V_k$ be the matrices produced by selecting the corresponding columns from U and V. The matrix $U_k \Sigma_k V_k^T$ is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, $\hat{X} = U_k \Sigma_k V_k^T$ minimizes $\|X - \hat{X}\|_F$ over all matrices $\hat{X}$ of rank k, where $\|\cdot\|_F$ denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix $U_k \Sigma_k V_k^T$ as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix $XX^T$ contains the dot products of all of the row vectors. We can find the dot product of the i-th and j-th row vectors by looking at the cell in row i, column j of the matrix $XX^T$. Since $V^T V = I$, we have $XX^T = U \Sigma V^T (U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma (U \Sigma)^T$, which means that we can calculate cosines with the smaller matrix $U \Sigma$, instead of using $X = U \Sigma V^T$ (Deerwester et al., 1990).
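A minimal sketch of steps 8 and 9 follows, assuming a dense NumPy array for clarity and SciPy's truncated SVD in place of SVDLIBC; the toy matrix and the value of k are placeholders, not the real LRA data.

```python
import numpy as np
from scipy.sparse.linalg import svds

def log_entropy(X):
    """Step 8: weight column j by w_j = 1 - H_j / log(m) and replace each
    cell x_ij by w_j * log(x_ij + 1)."""
    m = X.shape[0]
    col_sums = X.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                      # guard against empty columns
    P = X / col_sums                                   # p_ij = x_ij / sum_k x_kj
    with np.errstate(divide="ignore", invalid="ignore"):
        H = -np.where(P > 0, P * np.log(P), 0.0).sum(axis=0)   # column entropies H_j
    w = 1.0 - H / np.log(m)                            # weights w_j
    return w * np.log1p(X)                             # w_j * log(x_ij + 1)

rng = np.random.default_rng(0)
X = rng.poisson(0.1, size=(500, 200)).astype(float)    # toy pair-by-pattern matrix
Xw = log_entropy(X)

k = 50                                                 # LRA uses k = 300
U, s, Vt = svds(Xw, k=k)                               # step 9: truncated SVD (singular values in ascending order)
projection = U * s                                     # rows of U_k * Sigma_k, used in step 10
```

The product `U * s` reproduces the rows of the matrix $U_k \Sigma_k$ up to the ordering of the singular values, which does not affect the cosines computed in the next step.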
10. Projection: Calculate $U_k \Sigma_k$ (we use k = 300). This matrix has the same number of rows as X, but only k columns (instead of 2 × num_patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in $U_k \Sigma_k$. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value k = 300 is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.
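Given the projected matrix from the previous sketch, comparing two word pairs in step 10 reduces to a cosine between two rows; the `row_index` lookup in the commented usage is a hypothetical mapping from word pairs to row numbers, not something defined in the paper.

```python
import numpy as np

def row_cosine(projection, i, j):
    """Cosine of the angle between the i-th and j-th rows of U_k * Sigma_k."""
    u, v = projection[i], projection[j]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical usage, with `projection` from the previous sketch:
# cos = row_cosine(projection, row_index["quart:volume"], row_index["mile:distance"])
```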
11. Evaluate alternates: Let A:B and C:D be the two word pairs whose relational similarity we wish to measure. Step 2 supplies num_filter alternates for each original pair, so there are (num_filter + 1) versions of A:B and (num_filter + 1) versions of C:D. Form all (num_filter + 1)^2 combinations of a version of A:B with a version of C:D and calculate the cosine of the corresponding row vectors in $U_k \Sigma_k$ for each combination. Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677.

Table 10 The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.

In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.
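Steps 11 and 12 can be sketched as follows; the sixteen cosine values in the example are made up for illustration and do not reproduce Table 10.

```python
import numpy as np

def relational_similarity(combination_cosines, original_cosine):
    """Step 12: average only the cosines (over the (num_filter + 1)^2
    combinations from step 11) that are >= the cosine of the original pairs."""
    selected = [c for c in combination_cosines if c >= original_cosine]
    return float(np.mean(selected)) if selected else 0.0

# Illustrative values only (not the actual Table 10 cosines).
cosines = [0.53, 0.61, 0.70, 0.48, 0.66, 0.72, 0.55, 0.59,
           0.74, 0.50, 0.68, 0.63, 0.57, 0.71, 0.49, 0.65]
print(relational_similarity(cosines, original_cosine=0.53))
```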
Steps 11 and 12 can be repeated for each two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.

Baseline LRA System

Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors.

Table 11 Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies (e.g., the maximum cosine in Table 10).

The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS corpus). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values). Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.

LRA versus VSM

Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15 Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num_sim and max_phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, $\mathrm{sim}_r$, should obey the following equation:

$\mathrm{sim}_r(A{:}B, C{:}D) = \mathrm{sim}_r(B{:}A, D{:}C) \qquad (8)$
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
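The design argument can be checked numerically: applying the same column permutation to two vectors leaves the cosine of the angle between them unchanged. The vectors below are random stand-ins for the A:B and C:D rows, not actual LRA data.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
ab, cd = rng.random(8), rng.random(8)     # stand-ins for the A:B and C:D row vectors
perm = rng.permutation(8)                 # permutation swapping the "word_1 P word_2" / "word_2 P word_1" columns
ba, dc = ab[perm], cd[perm]               # row vectors for B:A and D:C
assert np.isclose(cosine(ab, cd), cosine(ba, dc))   # sim_r(A:B, C:D) == sim_r(B:A, D:C)
```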
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word_1 P word_2" and another for "word_2 P word_1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word_1 P word_2" from the pattern "word_2 P word_1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

$\mathrm{sim}_r(A{:}B, C{:}D) \neq \mathrm{sim}_r(B{:}A, C{:}D) \qquad (9)$
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
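One plausible way to run such a significance test is SciPy's Fisher exact test on a 2x2 table of correct versus not-correct counts for the two variants; the counts below are approximate, back-calculated from the reported percentages, since the exact tallies are not given here.

```python
from scipy.stats import fisher_exact

# Approximate counts out of 374 questions (56.1% vs. 40.4% recall).
better_alternates = (210, 374 - 210)    # (correct, not correct)
all_alternates = (151, 374 - 151)
table = [[better_alternates[0], better_alternates[1]],
         [all_alternates[0], all_alternates[1]]]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}, significant at 95%: {p_value < 0.05}")
```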
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns.

We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space $U_k \Sigma_k$, we use the matrix $U_k \Sigma_k V_k^T$. This matrix yields the same cosines as $U_k \Sigma_k$, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in $U_k \Sigma_k V_k^T$, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact.

Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
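The truncation experiment can be sketched as below, following the text's rule of keeping the N largest values of each row vector; the vectors are random placeholders rather than actual rows of $U_k \Sigma_k V_k^T$, so the printed cosines only illustrate the mechanics.

```python
import numpy as np

def keep_top_n(r, n):
    """Keep the n largest values of r and set every other element to zero."""
    out = np.zeros_like(r)
    idx = np.argsort(r)[-n:]
    out[idx] = r[idx]
    return out

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
r1, r2 = rng.normal(size=8000), rng.normal(size=8000)   # placeholder row vectors
for n in (1, 10, 100, 300, 3000):
    print(n, round(cosine(keep_top_n(r1, n), keep_top_n(r2, n)), 3))
```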
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") (†).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-one-out cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
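The evaluation procedure amounts to the following sketch, where `similarity` stands in for LRA (any function returning a relational-similarity score for two word pairs; higher means nearer).

```python
def leave_one_out_1nn(pairs, labels, similarity):
    """Leave-one-out single-nearest-neighbour classification accuracy."""
    correct = 0
    for i, test_pair in enumerate(pairs):
        neighbours = [j for j in range(len(pairs)) if j != i]
        best = max(neighbours, key=lambda j: similarity(test_pair, pairs[j]))
        if labels[best] == labels[i]:
            correct += 1
    return correct / len(pairs)
```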
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29, 920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5, 750, 400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359, 400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288, 000 cosines), which requires calculating only 359, 400 + 288, 000 = 647, 400 cosines.
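The computation-saving strategy is essentially a two-stage re-ranking; in this sketch, `cheap_sim` stands for LRA without the alternate pairs and `full_sim` for the full measure, both of them placeholder functions supplied by the caller.

```python
def nearest_by_two_stages(test_pair, train_pairs, cheap_sim, full_sim, shortlist=30):
    """Rank all training pairs with the cheaper measure, keep the top
    `shortlist`, then re-rank only those candidates with the full measure."""
    ranked = sorted(train_pairs, key=lambda p: cheap_sim(test_pair, p), reverse=True)
    candidates = ranked[:shortlist]
    return max(candidates, key=lambda p: full_sim(test_pair, p))
```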
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
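A sketch of macroaveraging as described above: compute precision, recall, and F for each class separately, then average the per-class values (here only the averaged F is returned).

```python
from collections import defaultdict

def macro_averaged_f(true_labels, predicted_labels):
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f_scores = []
    for c in set(true_labels):
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f_scores.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    return sum(f_scores) / len(f_scores)
```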
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).

LRA versus VSM

Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994;Barker and Szpakowicz, 1998;Rosario and Hearst, 2001;Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpusbased, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num_patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs.
| 14,134 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
Metaphorical language is very common in our daily life; so common that we are usually unaware of it @cite_46 . gentner01 argue that novel metaphors are understood using analogy, but conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language @cite_46 . dolan95 describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors.
|
{
"abstract": [
"People use metaphors every time they speak. Some of those metaphors are literary - devices for making thoughts more vivid or entertaining. But most are much more basic than that - they're \"metaphors we live by\", metaphors we use without even realizing we're using them. In this book, George Lakoff and Mark Johnson suggest that these basic metaphors not only affect the way we communicate ideas, but actually structure our perceptions and understandings from the beginning. Bringing together the perspectives of linguistics and philosophy, Lakoff and Johnson offer an intriguing and surprising guide to some of the most common metaphors and what they can tell us about the human mind. And for this new edition, they supply an afterword both extending their arguments and offering a fascinating overview of the current state of thinking on the subject of the metaphor."
],
"cite_N": [
"@cite_46"
],
"mid": [
"2052417512"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969;Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b;Turney, 2001;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3. This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapt the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. 1 An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.

We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y, X is larger than Y). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X), or a relation between X and some standard Y, LARGER THAN(X, Y).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, $\mathrm{sim}_a(A, B) \in \Re$. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.

The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, $\mathrm{sim}_r(A{:}B, C{:}D) \in \Re$. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969;Landauer and Dumais, 1997;Lin, 1998a;Turney, 2001), or a hybrid of the two (Resnik, 1995;Jiang and Conrath, 1997;Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the Test of English as a Foreign Language (TOEFL); see Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, $\mathrm{sim}_a$, between A and C and between B and D:

$\mathrm{score}(A{:}B::C{:}D) = \frac{1}{2}\left(\mathrm{sim}_a(A, C) + \mathrm{sim}_a(B, D)\right) \qquad (1)$
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
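A sketch of this scoring scheme follows; `sim_a` is any attributional-similarity measure (for example, one of those evaluated in Table 4), passed in as a function rather than implemented here.

```python
def near_analogy_score(sim_a, a, b, c, d):
    """Equation (1): average the attributional similarity of corresponding terms."""
    return 0.5 * (sim_a(a, c) + sim_a(b, d))

def answer_by_attributional_similarity(sim_a, stem, choices):
    """Pick the choice pair (C, D) whose score against the stem (A, B) is highest."""
    a, b = stem
    return max(choices, key=lambda cd: near_analogy_score(sim_a, a, b, cd[0], cd[1]))
```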
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

$\mathrm{precision} = \frac{\text{number of correct guesses}}{\text{total number of guesses made}} \qquad (2)$

$\mathrm{recall} = \frac{\text{number of correct guesses}}{\text{maximum possible number of correct guesses}} \qquad (3)$

$F = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \qquad (4)$
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
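Plugging the Hirst and St-Onge counts into these definitions gives the following quick check of equations (2)-(4); it introduces no new results.

```python
correct, incorrect, skipped = 120, 224, 30          # counts reported above
precision = correct / (correct + incorrect)         # guesses only
recall = correct / (correct + incorrect + skipped)  # all 374 questions
f = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f, 3))   # 0.349 0.321 0.334
```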
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. 2 The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.

Structure Mapping Theory

French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). Gentner et al. (2001) argue that novel metaphors are understood using analogy, but conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors.

Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms.

Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
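To make the nearest neighbour idea concrete, the following Python sketch classifies a relation vector by cosine against labeled training vectors. It is only an illustration, not the system of Zelenko, Aone, and Richardella (2003) or of Turney and Littman (2005): the vectors, labels, and frequencies are fabricated, and a real system would derive pattern frequencies from a corpus and apply a weighting scheme suited to very sparse, query-style vectors.

```python
import numpy as np

np.random.seed(0)

def cosine(u, v):
    """Cosine of the angle between two vectors (0.0 if either is all zeros)."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm > 0 else 0.0

def classify_relation(test_vec, train_vecs, train_labels):
    """Label a (possibly very sparse) relation vector by its nearest labeled neighbour."""
    sims = [cosine(test_vec, tv) for tv in train_vecs]
    return train_labels[int(np.argmax(sims))]

# Hypothetical pattern-frequency vectors over the same 128 patterns.
train_vecs = [np.log1p(np.random.poisson(5.0, 128)) for _ in range(10)]   # dense, corpus-derived
train_labels = ["person-affiliation"] * 5 + ["organization-location"] * 5
test_vec = np.zeros(128)
test_vec[[3, 17, 42]] = 1.0   # sparse: the entity pair occurs once in one document

print(classify_relation(test_vec, train_vecs, train_labels))
```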
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).

Automatic Thesaurus Generation

Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.

LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.

Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.

Information Retrieval

Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.

A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y, such that there is high relational similarity between the pair A:X and the pair Y:B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
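As a toy illustration of this ranking step, the following Python sketch scores three short documents against a query by cosine. It uses raw term frequencies for simplicity; real search engines apply tf-idf-style weights and many other refinements.

```python
import math
from collections import Counter

def tf_vector(text):
    """Raw term-frequency vector for a piece of text (no weighting, for simplicity)."""
    return Counter(text.lower().split())

def cosine(q, d):
    """Cosine between two sparse term-frequency vectors represented as Counters."""
    dot = sum(q[t] * d[t] for t in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

docs = ["the cat sat on the mat", "dogs and cats", "stock markets fell today"]
query = "cat on a mat"

# Sort the documents in order of decreasing cosine with the query.
ranked = sorted(docs, key=lambda d: cosine(tf_vector(query), tf_vector(d)), reverse=True)
print(ranked)
```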
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let R_1 be the semantic relation (or set of relations) between a pair of words, A and B, and let R_2 be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between R_1 and R_2. The relations R_1 and R_2 are not given to us; our task is to infer these hidden (latent) relations and then compare them.

In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, r_1 and r_2, that represent features of R_1 and R_2, and then measure the similarity of R_1 and R_2 by the cosine of the angle θ between r_1 and r_2:

\[ r_1 = \langle r_{1,1}, \ldots, r_{1,n} \rangle \quad (5) \]
\[ r_2 = \langle r_{2,1}, \ldots, r_{2,n} \rangle \quad (6) \]
\[ \cos(\theta) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^2} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^2}} = \frac{r_1 \cdot r_2}{\sqrt{r_1 \cdot r_1} \cdot \sqrt{r_2 \cdot r_2}} = \frac{r_1 \cdot r_2}{\|r_1\| \, \|r_2\|} \quad (7) \]
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
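The following Python sketch illustrates the general idea with three joining terms and a fabricated frequency oracle standing in for search engine hit counts. It is not the code of Turney and Littman (2005), which uses 64 joining terms (128 phrases) and queried AltaVista; the `hits` function and the toy counts are invented for illustration.

```python
import math

JOINING_TERMS = ["of", "for", "to"]   # Turney and Littman use 64 such terms; 3 shown here

def relation_vector(x, y, hits):
    """Log-transformed hit counts for the joining-term phrases of the pair x:y.
    `hits(phrase)` is a stand-in for querying a search engine; it is not a real API."""
    phrases = [f"{x} {t} {y}" for t in JOINING_TERMS] + [f"{y} {t} {x}" for t in JOINING_TERMS]
    return [math.log(hits(p) + 1) for p in phrases]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = math.sqrt(sum(a * a for a in u)), math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def answer(stem, choices, hits):
    """Pick the choice pair whose vector has the highest cosine with the stem pair's vector."""
    sv = relation_vector(*stem, hits)
    return max(choices, key=lambda c: cosine(sv, relation_vector(*c, hits)))

# Toy frequency oracle standing in for corpus hit counts.
fake_counts = {"quart of volume": 12, "volume of quart": 2,
               "mile of distance": 9, "distance of mile": 1, "night of day": 30}
hits = lambda p: fake_counts.get(p, 0)

print(answer(("quart", "volume"), [("day", "night"), ("mile", "distance")], hits))
# ('mile', 'distance')
```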
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.

Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.

Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). The corpus consists of about 5 × 10^10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num sim words (in the following experiments, num sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6. This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.
Stem: quart:volume
Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole
Solution: (b) mile:distance
2. Filter alternates:
For each original pair A:B, filter the 2 × num sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max phrase words (we use max phrase = 5). Sort the alternate pairs by the frequency of their phrases.
Select the top num filter most frequent alternates and discard the remainder (we use num filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max phrase words. The last column in Table 7 shows the pairs that are selected.
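A rough Python sketch of steps 1 and 2 is shown below. The synonyms and phrase_freq functions (with the parameters num_sim and num_filter standing for num sim and num filter) are hypothetical stand-ins for Lin's thesaurus and the WMTS queries, and the toy dictionaries are invented for illustration.

```python
def find_alternates(a, b, synonyms, num_sim=10):
    """Step 1 (sketch): make A':B and A:B' pairs from the top num_sim synonyms of A and B.
    `synonyms(word)` stands in for a thesaurus lookup and is assumed, not a real API."""
    alts = [(a2, b) for a2 in synonyms(a)[:num_sim]]
    alts += [(a, b2) for b2 in synonyms(b)[:num_sim]]
    return alts

def filter_alternates(alternates, phrase_freq, num_filter=3):
    """Step 2 (sketch): keep the num_filter alternates whose members co-occur most often.
    `phrase_freq(pair)` stands in for a corpus query counting phrases of up to five words."""
    return sorted(alternates, key=phrase_freq, reverse=True)[:num_filter]

# Toy stand-ins for the thesaurus and the corpus.
thesaurus = {"quart": ["pint", "gallon", "liter"], "volume": ["capacity", "size", "sound"]}
synonyms = lambda w: thesaurus.get(w, [])
freqs = {("pint", "volume"): 21, ("gallon", "volume"): 16, ("quart", "capacity"): 9,
         ("liter", "volume"): 3, ("quart", "sound"): 0}
phrase_freq = lambda pair: freqs.get(pair, 0)

alts = find_alternates("quart", "volume", synonyms)
print(filter_alternates(alts, phrase_freq))   # the three most frequent alternate pairs
```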
3. Find phrases:
For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns:
For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num patterns most frequent patterns and discard the rest (we use num patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
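The pattern-generation rule of this step can be sketched in a few lines of Python; counting the patterns over all pairs and pruning to the top num patterns is omitted here.

```python
from itertools import product

def patterns_from_phrase(phrase):
    """Step 4 (sketch): replace any subset of the intervening words with "*" wildcards.
    A phrase of n words has n - 2 intervening words and so yields 2**(n-2) patterns."""
    words = phrase.split()
    first, inner, last = words[0], words[1:-1], words[-1]
    pats = set()
    for mask in product([False, True], repeat=len(inner)):
        middle = ["*" if m else w for w, m in zip(inner, mask)]
        pats.add(" ".join([first] + middle + [last]))
    return pats

print(sorted(patterns_from_phrase("quart total volume")))
# ['quart * volume', 'quart total volume']
print(len(patterns_from_phrase("volume of milk in quarts")))   # 2**3 = 8 patterns
```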
5. Map pairs to rows:
In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7. Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8. Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9. Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                          P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")        4           1            5           19
freq("volume P quart")       10           0            2           16
6. Map patterns to columns: Create a mapping of the top num patterns patterns to column numbers. For each pattern P, create a column for "word_1 P word_2" and another column for "word_2 P word_1". Thus there will be 2 × num patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let x_{i,j} be the cell in row i and column j of the matrix X from step 7. Let m be the number of rows in X and let n be the number of columns. We wish to weight the cell x_{i,j} by the entropy of the j-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let p_{i,j} be the probability of x_{i,j}, calculated by normalizing the column vector so that the sum of the elements is one, p_{i,j} = x_{i,j} / Σ_{k=1}^{m} x_{k,j}. The entropy of the j-th column is then H_j = −Σ_{k=1}^{m} p_{k,j} log(p_{k,j}). Entropy is at its maximum when p_{i,j} is a uniform distribution, p_{i,j} = 1/m, in which case H_j = log(m). Entropy is at its minimum when p_{i,j} is 1 for some value of i and 0 for all other values of i, in which case H_j = 0. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell x_{i,j} by w_j = 1 − H_j / log(m), which varies from 0 when p_{i,j} is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, log(x_{i,j} + 1). (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all i and all j, replace the original value x_{i,j} in X by the new value w_j log(x_{i,j} + 1). This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): log(x_{i,j} + 1) is the TF term and w_j is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices UΣV^T, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: U^T U = V^T V = I) and Σ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank r, then Σ is also of rank r. Let Σ_k, where k < r, be the diagonal matrix formed from the top k singular values, and let U_k and V_k be the matrices produced by selecting the corresponding columns from U and V. The matrix U_k Σ_k V_k^T is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors: among all matrices of rank k, U_k Σ_k V_k^T minimizes the Frobenius norm of the difference from X, ‖X − U_k Σ_k V_k^T‖_F (Golub and Van Loan, 1996). We may think of this matrix U_k Σ_k V_k^T as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix XX^T contains the dot products of all of the row vectors. We can find the dot product of the i-th and j-th row vectors by looking at the cell in row i, column j of the matrix XX^T. Since V^T V = I, we have XX^T = UΣV^T (UΣV^T)^T = UΣV^T V Σ^T U^T = UΣ(UΣ)^T, which means that we can calculate cosines with the smaller matrix UΣ, instead of using X = UΣV^T (Deerwester et al., 1990).
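A compact Python sketch of the step 8 weighting is shown below, using a tiny dense matrix for illustration; the actual implementation operates on the sparse matrix prepared for SVDLIBC in step 7.

```python
import numpy as np

def log_entropy_weight(X):
    """Step 8 (sketch): weight each cell by column entropy and take the log of frequencies.
    w_j = 1 - H_j / log(m), applied to log(x_ij + 1); entropy uses the raw frequencies."""
    m = X.shape[0]
    col_sums = X.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                      # avoid division by zero for empty columns
    P = X / col_sums                                   # column-wise probabilities p_ij
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    H = -(P * logP).sum(axis=0)                        # column entropies H_j
    w = 1.0 - H / np.log(m)                            # uniform column -> 0, concentrated -> 1
    return np.log1p(X) * w

X = np.array([[4.0, 1.0, 5.0, 19.0],
              [10.0, 0.0, 2.0, 16.0],
              [0.0, 7.0, 0.0, 3.0]])
print(log_entropy_weight(X).round(3))
```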
10. Projection: Calculate U_k Σ_k (we use k = 300). This matrix has the same number of rows as X, but only k columns (instead of 2 × num patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in U_k Σ_k. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value k = 300 is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.

11. Evaluate alternates: Let A:B and C:D be any two original pairs that we wish to compare. From steps 1 and 2, there are (num filter + 1) versions of A:B (the original pair and its num filter filtered alternates) and, likewise, (num filter + 1) versions of C:D. Form the analogies that pair each version of A:B with each version of C:D and calculate the cosines of the corresponding row vectors in U_k Σ_k, giving (num filter + 1)^2 cosines (sixteen, in our experiments). Table 10 gives the cosines for the sixteen combinations.
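The following Python sketch approximates steps 9 to 11, with NumPy's SVD standing in for SVDLIBC and a small random matrix standing in for the real pair-pattern matrix; the row indices for the versions of each pair are assumed to be known from step 5.

```python
import numpy as np

np.random.seed(0)

def project(X, k=300):
    """Steps 9-10 (sketch): rank-k SVD of the weighted matrix, returning U_k * Sigma_k.
    Cosines between rows of U_k * Sigma_k equal cosines in the smoothed matrix U_k S_k V_k^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(s))
    return U[:, :k] * s[:k]

def cosine(u, v):
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm > 0 else 0.0

def all_cosines(rows_ab, rows_cd, Z):
    """Step 11 (sketch): cosines between every version (original or alternate) of two pairs.
    rows_ab, rows_cd are the row indices in Z for the versions of A:B and of C:D."""
    return [[cosine(Z[i], Z[j]) for j in rows_cd] for i in rows_ab]

X = np.random.rand(20, 8)                # toy pair-by-pattern matrix (20 pairs, 8 patterns)
Z = project(X, k=3)
print(np.round(all_cosines([0, 1], [2, 3], Z), 3))   # 2 x 2 grid of cosines
```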
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.

Table 10. The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.
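Step 12 reduces to a few lines of Python; the cosine grid below is invented for illustration and is not the actual content of Table 10.

```python
def relational_similarity(cosine_grid, original):
    """Step 12 (sketch): average the cosines that are at least as large as the cosine
    of the two original pairs (the original cosine itself always qualifies)."""
    kept = [c for row in cosine_grid for c in row if c >= original]
    return sum(kept) / len(kept)

# Hypothetical grid of cosines for the versions of two pairs; 0.525 plays the role
# of the cosine between the two original pairs.
grid = [[0.525, 0.61],
        [0.70, 0.43]]
print(round(relational_similarity(grid, original=0.525), 3))   # 0.612
```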
Steps 11 and 12 can be repeated for any two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).

Table 11. Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies.
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.

Baseline LRA System

Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS corpus). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values).

Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.

LRA versus VSM

Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15. Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num sim and max phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, sim_r, should obey the following equation:

\[ \mathrm{sim}_r(A{:}B, C{:}D) = \mathrm{sim}_r(B{:}A, D{:}C) \quad (8) \]
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word_1 P word_2" and another for "word_2 P word_1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word_1 P word_2" from the pattern "word_2 P word_1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

\[ \mathrm{sim}_r(A{:}B, C{:}D) \neq \mathrm{sim}_r(B{:}A, C{:}D) \]
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space U_k Σ_k, we use the matrix U_k Σ_k V_k^T. This matrix yields the same cosines as U_k Σ_k, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in U_k Σ_k V_k^T, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact. Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
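The truncation used in this test can be sketched as follows; the short toy vector stands in for a full 8,000-element row of U_k Σ_k V_k^T.

```python
import numpy as np

def keep_top_n(r, n):
    """Zero all but the N largest elements of a row vector (sketch of the test above).
    Cosines use unit-normalized vectors, so shrinking the vector's length is harmless;
    only the change in its direction matters."""
    out = np.zeros_like(r)
    idx = np.argsort(r)[-n:]          # indices of the N largest values
    out[idx] = r[idx]
    return out

r = np.array([0.1, 2.3, 0.0, 1.7, 0.4, 3.0])
print(keep_top_n(r, 3))               # [0.  2.3 0.  1.7 0.  3. ]
```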
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") (†).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-one-out cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
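A schematic Python version of this evaluation procedure is shown below; the relsim function is a placeholder for LRA (or any other relational similarity measure), and the four pairs, labels, and the similarity rule are invented for illustration.

```python
def loo_nearest_neighbour(pairs, labels, relsim):
    """Leave-one-out single-nearest-neighbour classification (sketch).
    `relsim(p, q)` is any relational similarity measure, e.g. LRA or the VSM cosine."""
    predictions = []
    for i, test in enumerate(pairs):
        train = [(p, lab) for j, (p, lab) in enumerate(zip(pairs, labels)) if j != i]
        nearest = max(train, key=lambda t: relsim(test, t[0]))
        predictions.append(nearest[1])
    correct = sum(p == g for p, g in zip(predictions, labels))
    return predictions, correct / len(labels)

# Toy data with a fabricated similarity (shared head noun => similar relation).
pairs = [("flu", "virus"), ("cold", "virus"), ("concert", "hall"), ("lecture", "hall")]
labels = ["cause", "cause", "purpose", "purpose"]
relsim = lambda p, q: 1.0 if p[1] == q[1] else 0.0
print(loo_nearest_neighbour(pairs, labels, relsim))   # accuracy 1.0 on this toy set
```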
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
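To make the two-stage search concrete, the following Python sketch outlines the leave-one-out procedure under stated assumptions: lra_cosine_basic and lra_cosine_full are hypothetical stand-ins for LRA run without and with the alternate pairs, respectively; neither name comes from the original implementation.

def classify_loo(pairs, labels, lra_cosine_basic, lra_cosine_full, k=30):
    # Leave-one-out: each pair takes a turn as the test item; the rest are training data.
    predictions = []
    for i, test_pair in enumerate(pairs):
        # Coarse pass: rank the remaining pairs by LRA cosine without alternate pairs.
        ranked = sorted(((lra_cosine_basic(test_pair, p), j)
                         for j, p in enumerate(pairs) if j != i), reverse=True)
        top_k = [j for _, j in ranked[:k]]
        # Fine pass: rescore only the top k neighbours with the full LRA
        # (the 16 original/alternate analogy combinations), then take the nearest.
        best_j = max(top_k, key=lambda j: lra_cosine_full(test_pair, pairs[j]))
        predictions.append(labels[best_j])
    return predictions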
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
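For readers who want the distinction in executable form, here is a minimal Python sketch, assuming only that the gold and predicted labels are given as parallel lists; it computes both the macroaveraged and microaveraged F, although only the macroaveraged figure is reported below.

from collections import defaultdict

def f_measures(gold, predicted):
    # Count true positives, false positives, and false negatives per class.
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    classes = set(gold) | set(predicted)

    def f(tp_, fp_, fn_):
        prec = tp_ / (tp_ + fp_) if tp_ + fp_ else 0.0
        rec = tp_ / (tp_ + fn_) if tp_ + fn_ else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    # Macroaveraging: compute F for each class, then average (equal class weight).
    macro_f = sum(f(tp[c], fp[c], fn[c]) for c in classes) / len(classes)
    # Microaveraging: pool the counts across classes, then compute a single F.
    micro_f = f(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return macro_f, micro_f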
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
LRA versus VSM
Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994; Barker and Szpakowicz, 1998; Rosario and Hearst, 2001; Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpus-based, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num_patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs.
| 14,134 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
The problem of relation extraction is, given an input document and a specific relation @math, extract all pairs of entities (if any) that have the relation @math in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. @cite_12 present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" @cite_12 . This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
|
{
"abstract": [
"We present an application of kernel methods to extracting relations from unstructured natural language sources. We introduce kernels defined over shallow parse representations of text, and design efficient algorithms for computing the kernels. We use the devised kernels in conjunction with Support Vector Machine and Voted Perceptron learning algorithms for the task of extracting person-affiliation and organization-location relations from text. We experimentally evaluate the proposed methods and compare them with feature-based learning algorithms, with promising results."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2162590473"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969; Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b; Turney, 2001; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3. This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapted the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. 1 An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing. We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y, X is larger than Y). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X), or a relation between X and some standard Y, LARGER THAN(X, Y).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, sim_a(A, B) ∈ ℜ. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.
The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, sim_r(A:B, C:D) ∈ ℜ. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969; Landauer and Dumais, 1997; Lin, 1998a; Turney, 2001), or a hybrid of the two (Resnik, 1995; Jiang and Conrath, 1997; Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the Test of English as a Foreign Language (TOEFL); an example question appears in Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, sim_a, between A and C and between B and D:

score(A:B::C:D) = (1/2) (sim_a(A, C) + sim_a(B, D))    (1)
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
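A minimal sketch of this attributional-similarity baseline follows, assuming sim_a is any attributional similarity function (for example, one of the WordNet-based measures discussed in this section): each choice is scored with equation (1) and the highest-scoring choice is guessed, with the question skipped when all choices tie, as in the evaluation described below.

def solve_analogy(stem, choices, sim_a):
    # stem and each choice are (A, B) word pairs; sim_a(x, y) is any
    # attributional similarity measure (equation 1 averages it over A-C and B-D).
    a, b = stem
    scores = [0.5 * (sim_a(a, c) + sim_a(b, d)) for c, d in choices]
    if len(set(scores)) == 1:
        return None  # all choices tied: skip the question
    return scores.index(max(scores))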
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

precision = (number of correct guesses) / (total number of guesses made)    (2)

recall = (number of correct guesses) / (maximum possible number of correct guesses)    (3)

F = (2 × precision × recall) / (precision + recall)    (4)
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
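As a worked check of these figures, the following small Python snippet recomputes precision, recall, and F from the counts given above for the Hirst and St-Onge (1998) measure.

correct, incorrect, skipped = 120, 224, 30  # Hirst and St-Onge (1998) example above

precision = correct / (correct + incorrect)         # guesses actually made
recall = correct / (correct + incorrect + skipped)  # all 374 questions
f = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f, 3))  # 0.349 0.321 0.334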
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. 2 The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.
Structure Mapping Theory
French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). Gentner et al. (2001) argue that novel metaphors are understood using analogy, but conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors. Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms. Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking for entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).
Automatic Thesaurus Generation
Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.
Information Retrieval
Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y , such that there is high relational similarity between the pair A:X and the pair Y :B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990; Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990; Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let R_1 be the semantic relation (or set of relations) between a pair of words, A and B, and let R_2 be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between R_1 and R_2. The relations R_1 and R_2 are not given to us; our task is to infer these hidden (latent) relations and then compare them.
In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, r_1 and r_2, that represent features of R_1 and R_2, and then measure the similarity of R_1 and R_2 by the cosine of the angle θ between r_1 and r_2:

r_1 = <r_{1,1}, ..., r_{1,n}>    (5)

r_2 = <r_{2,1}, ..., r_{2,n}>    (6)

cosine(θ) = (sum_{i=1}^{n} r_{1,i} · r_{2,i}) / (sqrt(sum_{i=1}^{n} (r_{1,i})^2) · sqrt(sum_{i=1}^{n} (r_{2,i})^2)) = (r_1 · r_2) / (sqrt(r_1 · r_1) · sqrt(r_2 · r_2)) = (r_1 · r_2) / (||r_1|| ||r_2||)    (7)
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
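A minimal sketch of the VSM vector construction just described follows, assuming count_hits is a hypothetical interface that returns the number of corpus (or search engine) matches for a quoted phrase; only three of the 64 joining terms are listed to keep the example short.

import math

JOINING_TERMS = ["of", "for", "to"]  # Turney and Littman (2005) use 64 such terms

def vsm_vector(x, y, count_hits):
    # count_hits(phrase) is a hypothetical corpus/search-engine interface that
    # returns the number of matches for the phrase; each joining term yields
    # two phrases ("X term Y" and "Y term X"), and each hit count is log-transformed.
    vec = []
    for term in JOINING_TERMS:
        for phrase in (f"{x} {term} {y}", f"{y} {term} {x}"):
            vec.append(math.log(count_hits(phrase) + 1))
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Relational similarity of two word pairs under the VSM, e.g.:
# cosine(vsm_vector("quart", "volume", count_hits), vsm_vector("mile", "distance", count_hits))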
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.
Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. 3
Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.
Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
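The merging rule in the last step above can be stated compactly in code; this is a sketch of just that rule, assuming the cosines for the original pairs and for the alternate-pair analogies have already been computed.

def merged_similarity(original_cosine, alternate_cosines):
    # Drop analogies formed from alternates whose cosine falls below the
    # cosine of the original pairs, then average the original cosine together
    # with the surviving alternate cosines.
    kept = [c for c in alternate_cosines if c >= original_cosine]
    return sum([original_cosine] + kept) / (1 + len(kept))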
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). 4 The corpus consists of about 5 × 10 10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num_sim words (in the following experiments, num_sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num_sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num_sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6
This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.
Stem: quart:volume
Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole
Solution: (b) mile:distance
2. Filter alternates: For each original pair A:B, filter the 2 × num_sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max_phrase words (we use max_phrase = 5). Sort the alternate pairs by the frequency of their phrases. Select the top num_filter most frequent alternates and discard the remainder (we use num_filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max_phrase words. The last column in Table 7 shows the pairs that are selected.
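Steps 1 and 2 can be sketched as follows, under stated assumptions: thesaurus maps each word to a list of similar words sorted by decreasing similarity (as Lin's thesaurus provides), and pair_frequency is a hypothetical corpus interface that takes a word pair and returns how often its two members co-occur within a max_phrase-word window.

def find_alternates(a, b, thesaurus, num_sim=10):
    # Step 1: replace one word at a time with its num_sim most similar words,
    # giving 2 * num_sim alternate pairs for the original pair a:b.
    alternates = [(a2, b) for a2 in thesaurus[a][:num_sim]]
    alternates += [(a, b2) for b2 in thesaurus[b][:num_sim]]
    return alternates

def filter_alternates(alternates, pair_frequency, num_filter=3):
    # Step 2: keep the num_filter alternates whose members co-occur most often;
    # the rest are assumed to have no clear semantic relation and are dropped.
    return sorted(alternates, key=pair_frequency, reverse=True)[:num_filter]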
3. Find phrases: For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max_phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
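The following sketch approximates step 3 over a plain tokenized corpus (a list of lowercased tokens); the real implementation queries the WMTS and ignores suffixes, which this toy scan does not attempt.

def find_phrases(tokens, a, b, max_phrase=5):
    # Collect phrases that begin with one member of the pair and end with the
    # other (either order), are at most max_phrase words long, and have at
    # least one word between the two members.
    phrases = []
    for i, tok in enumerate(tokens):
        if tok not in (a, b):
            continue
        other = b if tok == a else a
        for length in range(3, max_phrase + 1):  # length >= 3 ensures one intervening word
            j = i + length - 1
            if j < len(tokens) and tokens[j] == other:
                phrases.append(tokens[i:j + 1])
    return phrases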
4. Find patterns: For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max_phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num_patterns most frequent patterns and discard the rest (we use num_patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
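The wildcard expansion in step 4 is small enough to show directly; a sketch in Python (not the authors' code), where a phrase is a list of tokens whose first and last tokens are the two members of the word pair:

    from itertools import product

    def patterns_from_phrase(phrase):
        """Generate the 2^(n-2) patterns for an n-word phrase (step 4).

        Each intervening word is either kept or replaced by the wildcard '*',
        so a phrase of max_phrase = 5 words yields at most eight patterns.
        """
        first, *middle, last = phrase
        pats = set()
        for keep_flags in product([True, False], repeat=len(middle)):
            pat = tuple(w if keep else '*' for w, keep in zip(middle, keep_flags))
            pats.add((first, pat, last))
        return pats

    # The 4-word phrase "quart of spray volume" yields 2^2 = 4 patterns,
    # e.g. ('quart', ('of', '*'), 'volume') and ('quart', ('*', '*'), 'volume').
    print(patterns_from_phrase(["quart", "of", "spray", "volume"]))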
5. Map pairs to rows: In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7 Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8 Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9 Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                               P = "in"   P = "* of"   P = "of *"   P = "* *"
    freq("quart P volume")         4          1            5           19
    freq("volume P quart")        10          0            2           16
6. Map patterns to columns: Create a mapping of the top num_patterns patterns to column numbers. For each pattern P, create a column for "word1 P word2" and another column for "word2 P word1". Thus there will be 2 × num_patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
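A sketch of how steps 5-7 can be assembled into a sparse matrix. It assumes pair_phrases maps each ordered word pair to the token lists of its phrases from step 3, top_patterns is the set of intervening-word patterns kept in step 4, and patterns_from_phrase is the helper sketched after step 4; none of these names come from the paper:

    from scipy.sparse import dok_matrix

    def build_matrix(pair_phrases, top_patterns):
        """Steps 5-7: map pairs to rows, patterns to columns, and fill in frequencies.

        pair_phrases should contain both (A, B) and (B, A) as keys (step 5).
        """
        rows = {pair: i for i, pair in enumerate(pair_phrases)}        # step 5
        cols = {}                                                      # step 6: two columns per pattern
        for p in top_patterns:
            cols.setdefault('word1 %s word2' % ' '.join(p), len(cols))
            cols.setdefault('word2 %s word1' % ' '.join(p), len(cols))
        X = dok_matrix((len(rows), len(cols)))                         # step 7: sparse frequency matrix
        for (a, b), phrases in pair_phrases.items():
            for phrase in phrases:
                order = 'word1 %s word2' if phrase[0] == a else 'word2 %s word1'
                for _, middle, _ in patterns_from_phrase(phrase):      # helper from the step-4 sketch
                    col = cols.get(order % ' '.join(middle))
                    if col is not None:
                        X[rows[(a, b)], col] += 1
        return X.tocsr(), rows, cols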
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let x_{i,j} be the cell in row i and column j of the matrix X from step 7. Let m be the number of rows in X and let n be the number of columns. We wish to weight the cell x_{i,j} by the entropy of the j-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let p_{i,j} be the probability of x_{i,j}, calculated by normalizing the column vector so that the sum of the elements is one, p_{i,j} = x_{i,j} / Σ_{k=1}^{m} x_{k,j}. The entropy of the j-th column is then H_j = −Σ_{k=1}^{m} p_{k,j} log(p_{k,j}). Entropy is at its maximum when p_{i,j} is a uniform distribution, p_{i,j} = 1/m, in which case H_j = log(m). Entropy is at its minimum when p_{i,j} is 1 for some value of i and 0 for all other values of i, in which case H_j = 0. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell x_{i,j} by w_j = 1 − H_j / log(m), which varies from 0 when p_{i,j} is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, log(x_{i,j} + 1). (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all i and all j, replace the original value x_{i,j} in X by the new value w_j log(x_{i,j} + 1). This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): log(x_{i,j} + 1) is the TF term and w_j is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices UΣV^T, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: U^T U = V^T V = I) and Σ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank r, then Σ is also of rank r. Let Σ_k, where k < r, be the diagonal matrix formed from the top k singular values, and let U_k and V_k be the matrices produced by selecting the corresponding columns from U and V. The matrix U_k Σ_k V_k^T is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, \hat{X} = U_k Σ_k V_k^T minimizes ||X − \hat{X}||_F over all matrices \hat{X} of rank k, where ||·||_F denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix U_k Σ_k V_k^T as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix XX^T contains the dot products of all of the row vectors. We can find the dot product of the i-th and j-th row vectors by looking at the cell in row i, column j of the matrix XX^T. Since V^T V = I, we have XX^T = UΣV^T (UΣV^T)^T = UΣV^T VΣ^T U^T = UΣ(UΣ)^T, which means that we can calculate cosines with the smaller matrix UΣ, instead of using X = UΣV^T (Deerwester et al., 1990).
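The entropy weighting of step 8 and the projection of steps 9-10 can be sketched with NumPy on a dense array (the real matrix is sparse and the paper uses SVDLIBC; scipy.sparse.linalg.svds would be the analogous sparse routine). This is a sketch, not the authors' implementation:

    import numpy as np

    def log_entropy_transform(X):
        """Step 8: weight column j by w_j = 1 - H_j / log(m), then take log(x_ij + 1)."""
        X = np.asarray(X, dtype=float)
        m = X.shape[0]
        col_sums = X.sum(axis=0)
        col_sums[col_sums == 0] = 1.0                  # empty columns: avoid division by zero
        P = X / col_sums                               # p_ij = x_ij / sum_k x_kj
        with np.errstate(divide='ignore'):
            logP = np.where(P > 0, np.log(P), 0.0)     # treat 0 * log(0) as 0
        H = -(P * logP).sum(axis=0)                    # column entropies H_j
        w = 1.0 - H / np.log(m)                        # entropy weights in [0, 1]
        return np.log(X + 1.0) * w

    def project(X, k=300):
        """Steps 9-10: truncated SVD; the rows of U_k Sigma_k give the same cosines as the rank-k matrix."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U[:, :k] * s[:k]                        # scale each of the first k columns by its singular value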
10. Projection: Calculate U_k Σ_k (we use k = 300). This matrix has the same number of rows as X, but only k columns (instead of 2 × num_patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in U_k Σ_k. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value k = 300 is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.

11. Evaluate alternates: From step 2, each input pair is represented by its original form plus num_filter alternates, so there are (num_filter + 1) versions of A:B and (num_filter + 1) versions of C:D. For every combination of a version of A:B with a version of C:D, calculate the cosine of the corresponding row vectors in U_k Σ_k, giving (num_filter + 1)^2 cosines (with num_filter = 3, that is sixteen cosines). Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.

Table 10 The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.
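Steps 11 and 12 reduce to a few lines once the projected row vectors are available. In this sketch, vectors maps each word pair to its row of U_k Σ_k and versions(pair) returns the original pair followed by its filtered alternates from step 2; both names are ours, not the paper's:

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def relational_similarity(ab, cd, vectors, versions):
        """Steps 11-12: average the cosines that are at least as large as the original cosine."""
        original = cosine(vectors[ab], vectors[cd])
        cosines = [cosine(vectors[a], vectors[c])
                   for a in versions(ab) for c in versions(cd)
                   if a in vectors and c in vectors]     # up to (num_filter + 1)^2 cosines
        better = [c for c in cosines if c >= original]   # the original analogy always qualifies
        return sum(better) / len(better)

To answer a SAT question, this similarity would be computed between the stem and each of the five choices, and the choice with the largest value selected.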
Steps 11 and 12 can be repeated for any two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.

Table 11 Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies (e.g., the maximum cosine in Table 10).

Baseline LRA System

Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS corpus). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values).

Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.

LRA versus VSM

Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15 Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five-second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
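The significance comparisons in this section rely on a two-by-two Fisher Exact Test over counts of correct and incorrect answers; a sketch with SciPy. The counts for the second system below are placeholders, not values from Table 14:

    from scipy.stats import fisher_exact

    def significantly_different(correct_a, wrong_a, correct_b, wrong_b, alpha=0.05):
        """Two-sided Fisher Exact Test on a 2x2 table of correct vs. incorrect answers."""
        _, p = fisher_exact([[correct_a, wrong_a], [correct_b, wrong_b]])
        return p < alpha, p

    # LRA answered 210 of the 374 questions correctly and 160 incorrectly (4 skipped).
    # The second row is a placeholder for another system's counts.
    print(significantly_different(210, 160, 175, 190))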
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
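The Binomial Exact Test intervals in Table 15 can be reproduced with a Clopper-Pearson interval; a sketch with SciPy, not the authors' code:

    from scipy.stats import beta

    def binomial_exact_ci(correct, total, confidence=0.95):
        """Clopper-Pearson (exact binomial) confidence interval for a proportion."""
        alpha = 1.0 - confidence
        lower = beta.ppf(alpha / 2, correct, total - correct + 1) if correct > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, correct + 1, total - correct) if correct < total else 1.0
        return lower, upper

    # LRA answered 210 of 374 SAT questions correctly; check whether 0.57 lies in the interval.
    print(binomial_exact_ci(210, 374))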
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num_sim and max_phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, sim_r, should obey the following equation:

sim_r(A:B, C:D) = sim_r(B:A, D:C)    (8)
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
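The claim that a shared permutation of the elements leaves the cosine unchanged is easy to verify numerically; a small sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.random(8), rng.random(8)      # stand-ins for the A:B and C:D row vectors
    perm = rng.permutation(8)                # the permutation that maps A:B to B:A and C:D to D:C

    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    assert np.isclose(cos(x, y), cos(x[perm], y[perm]))   # same cosine after permuting both vectors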
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word1 P word2" and another for "word2 P word1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word1 P word2" from the pattern "word2 P word1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

sim_r(A:B, C:D) ≠ sim_r(B:A, C:D)
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space U_k Σ_k, we use the matrix U_k Σ_k V_k^T. This matrix yields the same cosines as U_k Σ_k, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in U_k Σ_k V_k^T, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact.

Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
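The top-N truncation used in this experiment can be sketched as follows, where R stands for the dense matrix U_k Σ_k V_k^T with one row per word pair (a sketch, not the code used to produce Table 18):

    import numpy as np

    def keep_top_n(R, n):
        """Zero out all but the n largest entries in each row vector."""
        R_top = np.zeros_like(R)
        for i, row in enumerate(R):
            idx = np.argsort(row)[-n:]       # indices of the n largest values in this row
            R_top[i, idx] = row[idx]
        return R_top

Cosines are then computed on the truncated rows and compared against the baseline.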
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") (†).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-one-out cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
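A sketch of this two-stage leave-one-out procedure, with quick_sim standing in for the cosine computed without alternates and full_lra_sim for the complete LRA measure (both hypothetical names):

    def classify_loo(pairs, labels, quick_sim, full_lra_sim, shortlist=30):
        """Leave-one-out single-nearest-neighbour classification with a two-stage search."""
        predictions = []
        for i, test_pair in enumerate(pairs):
            train = [j for j in range(len(pairs)) if j != i]
            # Stage 1: rank training pairs by the cheap similarity and keep a shortlist.
            candidates = sorted(train, key=lambda j: quick_sim(test_pair, pairs[j]),
                                reverse=True)[:shortlist]
            # Stage 2: apply the full LRA measure (with alternates) only to the shortlist.
            best = max(candidates, key=lambda j: full_lra_sim(test_pair, pairs[j]))
            predictions.append(labels[best])
        return predictions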
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
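A minimal sketch of the macroaveraged F computation described here (not the evaluation code used in the paper):

    def macro_f(true_labels, predicted_labels):
        """Macroaveraged F: precision, recall, and F per class, then the average over classes."""
        f_scores = []
        for c in set(true_labels):
            tp = sum(t == c and p == c for t, p in zip(true_labels, predicted_labels))
            fp = sum(t != c and p == c for t, p in zip(true_labels, predicted_labels))
            fn = sum(t == c and p != c for t, p in zip(true_labels, predicted_labels))
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f_scores.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
        return sum(f_scores) / len(f_scores)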
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).

LRA versus VSM

Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994; Barker and Szpakowicz, 1998; Rosario and Hearst, 2001; Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpusbased, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Schölkopf, Smola, and Müller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways: (1) the patterns are derived automatically from the corpus, (2) SVD is used to smooth the frequency data, and (3) synonyms are used to explore variations of the word pairs.
| 14,134 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
hearst92a presents an algorithm for learning hyponym (type of) relations from a corpus and berland99 describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words @cite_26 and nastase03 list 30 semantic relations for noun-modifier pairs. hearst92a and berland99 use manually generated rules to mine text for semantic relations. turneylittman05 also use a manually generated set of 64 patterns.
|
{
"abstract": [
"Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari performance and confidence in a semantic annotation task, Christiane WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet."
],
"cite_N": [
"@cite_26"
],
"mid": [
"2038721957"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969; Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b; Turney, 2001; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3. This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapt the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.

We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y, X is larger than Y). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X), or a relation between X and some standard Y, LARGER THAN(X, Y).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, sim_a(A, B) ∈ ℜ. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.
The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, sim_r(A:B, C:D) ∈ ℜ. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969; Landauer and Dumais, 1997; Lin, 1998a; Turney, 2001), or a hybrid of the two (Resnik, 1995; Jiang and Conrath, 1997; Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the Test of English as a Foreign Language (TOEFL); an example is given in Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, sim_a, between A and C and between B and D:

score(A:B::C:D) = (1/2) (sim_a(A, C) + sim_a(B, D))    (1)
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
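Equation (1) turns any attributional similarity measure into an analogy scorer; a sketch, with sim_a standing in for one of the measures listed in Table 4:

    def score_analogy(stem, choices, sim_a):
        """Score A:B::C:D by (sim_a(A, C) + sim_a(B, D)) / 2 and return the index of the best choice."""
        a, b = stem
        scores = [(sim_a(a, c) + sim_a(b, d)) / 2.0 for c, d in choices]
        return max(range(len(choices)), key=lambda i: scores[i]), scores

    # Hypothetical usage: score_analogy(("mason", "stone"), [("carpenter", "wood"), ...], sim_a)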
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

precision = number of correct guesses / total number of guesses made    (2)

recall = number of correct guesses / total number of questions    (3)

F = 2 × precision × recall / (precision + recall)    (4)
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.

Structure Mapping Theory

French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on the modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). It has been argued that novel metaphors are understood using analogy, whereas conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors. Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms. Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).
Automatic Thesaurus Generation
Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.
Information Retrieval
Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y , such that there is high relational similarity between the pair A:X and the pair Y :B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let R_1 be the semantic relation (or set of relations) between a pair of words, A and B, and let R_2 be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between R_1 and R_2. The relations R_1 and R_2 are not given to us; our task is to infer these hidden (latent) relations and then compare them.
In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, r_1 and r_2, that represent features of R_1 and R_2, and then measure the similarity of R_1 and R_2 by the cosine of the angle θ between r_1 and r_2:

r_1 = \langle r_{1,1}, \ldots, r_{1,n} \rangle \qquad (5)

r_2 = \langle r_{2,1}, \ldots, r_{2,n} \rangle \qquad (6)

\mathrm{cosine}(\theta) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^{2}} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^{2}}} = \frac{r_1 \cdot r_2}{\sqrt{r_1 \cdot r_1} \cdot \sqrt{r_2 \cdot r_2}} = \frac{r_1 \cdot r_2}{\|r_1\| \, \|r_2\|} \qquad (7)
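As a sketch, equation (7) can be transcribed directly into code; the function below is illustrative and uses no external libraries:

```python
import math

def cosine(r1, r2):
    """Cosine of the angle between two equal-length vectors (equation 7)."""
    dot = sum(a * b for a, b in zip(r1, r2))
    norm1 = math.sqrt(sum(a * a for a in r1))
    norm2 = math.sqrt(sum(b * b for b in r2))
    return dot / (norm1 * norm2)

print(cosine([1.0, 2.0, 0.0], [2.0, 1.0, 1.0]))  # 0.730...
```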
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
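A sketch of this vector construction is given below. The hit_count function is a hypothetical placeholder for whatever search interface is available (it is not the interface that Turney and Littman used), and only three of the 64 joining terms are listed.

```python
import math

JOINING_TERMS = ["of", "for", "to"]  # Turney and Littman (2005) use 64 such terms.

def relation_vector(x, y, hit_count):
    """Characterize the relation between words x and y by the log-transformed
    hit counts of short phrases that join them."""
    vector = []
    for term in JOINING_TERMS:
        for phrase in (f"{x} {term} {y}", f"{y} {term} {x}"):
            vector.append(math.log(hit_count(phrase) + 1))
    return vector  # 2 * len(JOINING_TERMS) elements; 128 with the full list
```

The relational similarity of two word pairs is then the cosine between their vectors, as in the previous sketch.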
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.

Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. 3

Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.

Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). 4 The corpus consists of about 5 × 10^10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num_sim words (in the following experiments, num_sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num_sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num_sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6 This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.
Stem: quart:volume
Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole
Solution: (b) mile:distance
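A rough sketch of step 1 is shown below; lookup_synonyms stands in for a query to Lin's thesaurus, and the usable heuristic only approximates the filtering described above (both are illustrative assumptions, not the exact implementation).

```python
def find_alternates(a, b, lookup_synonyms, num_sim=10):
    """Step 1: generate 2 * num_sim alternate pairs for the original pair a:b.
    lookup_synonyms(word) should return candidates sorted by decreasing similarity."""
    def usable(w):
        # Avoid unusual words: very short words, non-alphabetic characters
        # (including hyphens), multi-word phrases, and capitalized words.
        return w.isalpha() and w.islower() and len(w) > 3

    a_sims = [w for w in lookup_synonyms(a) if usable(w)][:num_sim]
    b_sims = [w for w in lookup_synonyms(b) if usable(w)][:num_sim]
    return [(a2, b) for a2 in a_sims] + [(a, b2) for b2 in b_sims]
```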
2. Filter alternates: For each original pair A:B, filter the 2 × num_sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max_phrase words (we use max_phrase = 5). Sort the alternate pairs by the frequency of their phrases. Select the top num_filter most frequent alternates and discard the remainder (we use num_filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max_phrase words. The last column in Table 7 shows the pairs that are selected.
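Step 2 can be sketched as follows; phrase_frequency is an assumed helper that counts corpus phrases of at most max_phrase words beginning with one member of the pair and ending with the other (it is not the actual WMTS query syntax).

```python
def filter_alternates(alternates, phrase_frequency, num_filter=3):
    """Step 2: keep the num_filter alternate pairs that co-occur most frequently."""
    scored = [(phrase_frequency(a, b), (a, b)) for (a, b) in alternates]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [pair for freq, pair in scored[:num_filter]]
```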
3. Find phrases: For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max_phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns: For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max_phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num_patterns most frequent patterns and discard the rest (we use num_patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
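The pattern generation in step 4 is easy to make concrete. The sketch below enumerates the 2^(n−2) wildcard patterns for a single phrase:

```python
from itertools import product

def patterns_from_phrase(phrase):
    """Step 4: replace any subset of the intervening words with '*' wildcards.
    The first and last words (the word pair itself) are never replaced."""
    words = phrase.split()
    inner = words[1:-1]
    patterns = []
    for mask in product([False, True], repeat=len(inner)):
        middle = ["*" if replace else w for w, replace in zip(inner, mask)]
        patterns.append(" ".join([words[0]] + middle + [words[-1]]))
    return patterns

print(patterns_from_phrase("quart of spray volume"))
# ['quart of spray volume', 'quart of * volume',
#  'quart * spray volume', 'quart * * volume']
```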
5. Map pairs to rows: In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7 Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8 Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9 Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                          P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")        4          1            5           19
freq("volume P quart")       10          0            2           16
6. Map patterns to columns: Create a mapping of the top num_patterns patterns to column numbers. For each pattern P, create a column for "word_1 P word_2" and another column for "word_2 P word_1". Thus there will be 2 × num_patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
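Steps 5 through 7 amount to bookkeeping: each word pair gets a row in both orders, each retained pattern gets a column in both orders, and the cells hold corpus frequencies. A compact sketch is shown below, using a dictionary of nonzero cells as the sparse matrix and an assumed helper pattern_count(x, p, y) that counts corpus phrases matching "x p y" with p's wildcards:

```python
from collections import defaultdict

def build_sparse_matrix(pairs, patterns, pattern_count):
    """Steps 5-7: map pairs to rows (both orders), patterns to columns
    (both orders), and record pattern frequencies for each pair."""
    rows = {}
    for a, b in pairs:
        for pair in ((a, b), (b, a)):
            rows.setdefault(pair, len(rows))
    cols = {}
    for p in patterns:
        for order in ("12", "21"):        # "word1 P word2" versus "word2 P word1"
            cols.setdefault((p, order), len(cols))
    X = defaultdict(float)                # sparse matrix: (row, col) -> frequency
    for (x, y), i in rows.items():
        for p in patterns:
            c = pattern_count(x, p, y)
            if c:
                X[i, cols[(p, "12")]] = c
            c = pattern_count(y, p, x)
            if c:
                X[i, cols[(p, "21")]] = c
    return X, rows, cols
```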
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let x_{i,j} be the cell in row i and column j of the matrix X from step 7. Let m be the number of rows in X and let n be the number of columns. We wish to weight the cell x_{i,j} by the entropy of the j-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let p_{i,j} be the probability of x_{i,j}, calculated by normalizing the column vector so that the sum of the elements is one, p_{i,j} = x_{i,j} / \sum_{k=1}^{m} x_{k,j}. The entropy of the j-th column is then H_j = -\sum_{k=1}^{m} p_{k,j} \log(p_{k,j}). Entropy is at its maximum when p_{i,j} is a uniform distribution, p_{i,j} = 1/m, in which case H_j = \log(m). Entropy is at its minimum when p_{i,j} is 1 for some value of i and 0 for all other values of i, in which case H_j = 0. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell x_{i,j} by w_j = 1 − H_j / \log(m), which varies from 0 when p_{i,j} is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, \log(x_{i,j} + 1). (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all i and all j, replace the original value x_{i,j} in X by the new value w_j \log(x_{i,j} + 1). This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): \log(x_{i,j} + 1) is the TF term and w_j is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices U \Sigma V^T, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: U^T U = V^T V = I) and \Sigma is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank r, then \Sigma is also of rank r. Let \Sigma_k, where k < r, be the diagonal matrix formed from the top k singular values, and let U_k and V_k be the matrices produced by selecting the corresponding columns from U and V. The matrix U_k \Sigma_k V_k^T is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, \hat{X} = U_k \Sigma_k V_k^T minimizes \|\hat{X} − X\|_F over all matrices \hat{X} of rank k, where \| \cdot \|_F denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix U_k \Sigma_k V_k^T as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix X X^T contains the dot products of all of the row vectors. We can find the dot product of the i-th and j-th row vectors by looking at the cell in row i, column j of the matrix X X^T. Since V^T V = I, we have X X^T = U \Sigma V^T (U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma (U \Sigma)^T, which means that we can calculate cosines with the smaller matrix U \Sigma, instead of using X = U \Sigma V^T (Deerwester et al., 1990).
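Steps 8 and 9 have standard implementations. The sketch below uses NumPy and numpy.linalg.svd in place of SVDLIBC (a simplification: the real matrix is sparse and far too large for a dense SVD, but the arithmetic is the same):

```python
import numpy as np

def log_entropy_transform(X):
    """Step 8: replace x_ij by w_j * log(x_ij + 1), where w_j = 1 - H_j / log(m)."""
    m, n = X.shape
    col_sums = X.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                       # guard against empty columns
    P = X / col_sums                                    # column-wise probabilities p_ij
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    H = -(P * logP).sum(axis=0)                         # column entropies H_j
    w = 1.0 - H / np.log(m)                             # column weights w_j
    return np.log(X + 1.0) * w

def svd_projection(X, k=300):
    """Steps 9-10: keep the top k singular values and return U_k Sigma_k,
    whose row vectors give the same cosines as the rank-k approximation of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]
```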
10. Projection: Calculate U_k \Sigma_k (we use k = 300). This matrix has the same number of rows as X, but only k columns (instead of 2 × num_patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in U_k \Sigma_k. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value k = 300 is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.

11. Evaluate alternates: Let A:B and C:D be any two original pairs whose relational similarity we wish to measure. From step 2, each original pair is accompanied by num_filter alternate pairs, so there are (num_filter + 1) versions of A:B and (num_filter + 1) versions of C:D. Look up the row vectors in U_k \Sigma_k that correspond to these versions and calculate the cosine for each of the (num_filter + 1)^2 ways of pairing a version of A:B with a version of C:D (with num_filter = 3, this gives sixteen cosines). Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
Table 10 The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.

The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.
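Steps 11 and 12 reduce to a few lines once the projected matrix is available. In the sketch below, rows maps each pair (original or alternate) to its row vector in U_k \Sigma_k, and each version list contains the original pair first, followed by its surviving alternates (assumed data structures, for illustration only):

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def relational_similarity(versions_ab, versions_cd, rows):
    """Steps 11-12: average the cosines that are at least as large as the
    cosine of the two original pairs (the first entry of each list)."""
    cosines = [cos(rows[v1], rows[v2])
               for v1 in versions_ab for v2 in versions_cd]
    original = cos(rows[versions_ab[0]], rows[versions_cd[0]])
    kept = [c for c in cosines if c >= original]   # always includes the original
    return sum(kept) / len(kept)
```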
Steps 11 and 12 can be repeated for each two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).

Table 11 Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies.
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.
Baseline LRA System
Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS corpus). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values). Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.
LRA versus VSM
Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15 Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num_sim and max_phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, sim_r, should obey the following equation:

\mathrm{sim}_r(A{:}B, C{:}D) = \mathrm{sim}_r(B{:}A, D{:}C) \qquad (8)
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
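The argument can be checked numerically: applying the same permutation to two vectors leaves their cosine unchanged, as in this small illustrative test.

```python
import numpy as np

rng = np.random.default_rng(0)
ab, cd = rng.random(8), rng.random(8)        # row vectors for A:B and C:D
perm = rng.permutation(8)                    # the permutation that swaps word order
ba, dc = ab[perm], cd[perm]                  # row vectors for B:A and D:C

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

assert np.isclose(cos(ab, cd), cos(ba, dc))  # equation (8) holds by construction
```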
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word_1 P word_2" and another for "word_2 P word_1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word_1 P word_2" from the pattern "word_2 P word_1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

\mathrm{sim}_r(A{:}B, C{:}D) \neq \mathrm{sim}_r(B{:}A, C{:}D) \qquad (9)
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space U_k \Sigma_k, we use the matrix U_k \Sigma_k V_k^T. This matrix yields the same cosines as U_k \Sigma_k, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in U_k \Sigma_k V_k^T, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact. Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
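The truncation used in this experiment can be sketched as follows (NumPy; keep_top_n keeps only the N largest values of a row vector before the cosine is computed):

```python
import numpy as np

def keep_top_n(row, n):
    """Zero out all but the n largest elements of a row vector."""
    truncated = np.zeros_like(row)
    top = np.argsort(row)[-n:]            # indices of the n largest values
    truncated[top] = row[top]
    return truncated

def truncated_cosine(r1, r2, n):
    a, b = keep_top_n(r1, n), keep_top_n(r2, n)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```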
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M ) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") ( †).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-one-out cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
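A minimal sketch of this two-stage procedure, with leave-one-out single nearest neighbour classification; quick_similarity (cosines without alternate pairs) and full_similarity (full LRA) are placeholders for the two measures described above, not functions from the original implementation.

def leave_one_out_1nn(pairs, labels, quick_similarity, full_similarity, shortlist=30):
    predictions = []
    for i, test_pair in enumerate(pairs):
        # Training set: everything except the held-out pair.
        candidates = [j for j in range(len(pairs)) if j != i]
        # Stage 1: shortlist the nearest neighbours, ignoring alternate pairs.
        candidates.sort(key=lambda j: quick_similarity(test_pair, pairs[j]), reverse=True)
        candidates = candidates[:shortlist]
        # Stage 2: rerank the shortlist with the full measure and take the nearest.
        best = max(candidates, key=lambda j: full_similarity(test_pair, pairs[j]))
        predictions.append(labels[best])
    return predictions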
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
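A short sketch of the macroaveraged F computation, assuming lists of true and predicted class labels; this is the standard definition, not code taken from the experiments.

def macroaveraged_f(y_true, y_pred):
    f_scores = []
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f_scores.append(f)
    # Each class contributes equally to the average, regardless of its size.
    return sum(f_scores) / len(f_scores)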
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).

LRA versus VSM

Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994;Barker and Szpakowicz, 1998;Rosario and Hearst, 2001;Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpusbased, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num_patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
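The selection in step 4 amounts to a simple frequency cutoff; a brief sketch, where pattern_counts maps each pattern to its corpus frequency (both names are ours, used only for illustration):

from collections import Counter

def select_top_patterns(pattern_counts, num_patterns=4000):
    # Keep only the num_patterns most frequent patterns; discard the rest.
    return [p for p, _ in Counter(pattern_counts).most_common(num_patterns)]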
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways:
| 14,134 |
cs0608100
|
2951193962
|
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
|
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message @cite_23 . The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations, since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
|
{
"abstract": [
"We present a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame. Given an input sentence and a target word and frame, the system labels constituents with either abstract semantic roles, such as AGENT or PATIENT, or more domain-specific semantic roles, such as SPEAKER, MESSAGE, and TOPIC.The system is based on statistical classifiers trained on roughly 50,000 sentences that were hand-annotated with semantic roles by the FrameNet semantic labeling project. We then parsed each training sentence into a syntactic tree and extracted various lexical and syntactic features, including the phrase type of each constituent, its grammatical function, and its position in the sentence. These features were combined with knowledge of the predicate verb, noun, or adjective, as well as information such as the prior probabilities of various combinations of semantic roles. We used various lexical clustering algorithms to generalize across possible fillers of roles. Test sentences were parsed, were annotated with these features, and were then passed through the classifiers.Our system achieves 82 accuracy in identifying the semantic role of presegmented constituents. At the more difficult task of simultaneously segmenting constituents and identifying their semantic role, the system achieved 65 precision and 61 recall.Our study also allowed us to compare the usefulness of different features and feature combination methods in the semantic role labeling task. We also explore the integration of role labeling with statistical syntactic parsing and attempt to generalize to predicates unseen in the training data."
],
"cite_N": [
"@cite_23"
],
"mid": [
"2151170651"
]
}
|
Similarity of Semantic Relations
|
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969;Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b;Turney, 2001;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3. This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapt the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) The connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns.
(2) Singular Value Decomposition (SVD) is used to smooth the frequency data. (3) Given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words by synonyms, such as traffic:road, traffic:highway.
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam. 1 An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.

We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y , X is larger than Y ). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X ), or a relation between X and some standard Y , LARGER THAN(X , Y ).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, sim_a(A, B) ∈ ℜ. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.
The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, sim_r(A:B, C:D) ∈ ℜ. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
As these examples show, semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986;Budanitsky and Hirst, 2001;Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969;Landauer and Dumais, 1997;Lin, 1998a;Turney, 2001), or a hybrid of the two (Resnik, 1995;Jiang and Conrath, 1997;Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the Test of English as a Foreign Language (TOEFL); an example of one of these questions appears in Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, sim_a, between A and C and between B and D:

score(A:B::C:D) = (1/2) (sim_a(A, C) + sim_a(B, D))    (1)
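Equation (1) and the corresponding guessing strategy can be sketched as follows; sim_a stands for any attributional similarity measure, and the stem and choices are word pairs. This is an illustrative sketch, not the code used in the experiments reported below.

def near_analogy_score(stem, choice, sim_a):
    # Equation (1): average the attributional similarities of A with C and B with D.
    (a, b), (c, d) = stem, choice
    return 0.5 * (sim_a(a, c) + sim_a(b, d))

def answer_with_attributional_similarity(stem, choices, sim_a):
    # Guess the choice pair with the highest score.
    return max(choices, key=lambda choice: near_analogy_score(stem, choice, sim_a))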
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

precision = (number of correct guesses) / (total number of guesses made)    (2)

recall = (number of correct guesses) / (maximum possible number of correct guesses)    (3)

F = (2 × precision × recall) / (precision + recall)    (4)
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
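For the example above, the three measures work out as follows; this is a direct transcription of equations (2) to (4) using the counts just reported.

correct, incorrect, skipped = 120, 224, 30                # Hirst and St-Onge (1998), as above

precision = correct / (correct + incorrect)               # 120/344, about 0.349
recall = correct / (correct + incorrect + skipped)        # 120/374, about 0.321
f = 2 * precision * recall / (precision + recall)         # about 0.334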
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. 2 The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity. Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths. Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.

Structure Mapping Theory

French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). It has been argued that novel metaphors are understood using analogy, but that conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors.

Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms.

Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).

Automatic Thesaurus Generation

Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.

Information Retrieval

Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y , such that there is high relational similarity between the pair A:X and the pair Y :B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
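Framed this way, the retrieval task reduces to a search over candidate pairs; a minimal sketch, where sim_r is any relational similarity measure and candidates is a hypothetical list of (X, Y) pairs to consider:

def analogical_retrieval(a, b, candidates, sim_r):
    # Return the (x, y) pair for which A:X is most analogous to Y:B,
    # e.g. a="Muslim", b="church" should favour x="mosque", y="Christian".
    return max(candidates, key=lambda xy: sim_r((a, xy[0]), (xy[1], b)))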
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let R_1 be the semantic relation (or set of relations) between a pair of words, A and B, and let R_2 be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between R_1 and R_2. The relations R_1 and R_2 are not given to us; our task is to infer these hidden (latent) relations and then compare them.

In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, r_1 and r_2, that represent features of R_1 and R_2, and then measure the similarity of R_1 and R_2 by the cosine of the angle θ between r_1 and r_2:

r_1 = (r_{1,1}, \ldots, r_{1,n})    (5)

r_2 = (r_{2,1}, \ldots, r_{2,n})    (6)

cosine(θ) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^2} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^2}} = \frac{r_1 \cdot r_2}{\sqrt{r_1 \cdot r_1} \cdot \sqrt{r_2 \cdot r_2}} = \frac{r_1 \cdot r_2}{\lVert r_1 \rVert \, \lVert r_2 \rVert}    (7)
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
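The VSM procedure just described can be sketched as follows; hit_count is a placeholder for a query to the search engine (or corpus), and the 64 joining terms are abbreviated to a short illustrative list. This is a sketch of the general idea, not the original implementation.

import math
import numpy as np

def vsm_vector(pair, joining_terms, hit_count):
    # One element per phrase "X J Y" or "Y J X"; store log(hits + 1).
    x, y = pair
    phrases = [f"{x} {j} {y}" for j in joining_terms] + [f"{y} {j} {x}" for j in joining_terms]
    return np.array([math.log(hit_count(p) + 1) for p in phrases])

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def answer_sat_question(stem, choices, joining_terms, hit_count):
    # Guess the choice pair whose vector is closest in angle to the stem vector.
    r_stem = vsm_vector(stem, joining_terms, hit_count)
    return max(choices, key=lambda ch: cosine(r_stem, vsm_vector(ch, joining_terms, hit_count)))

joining_terms = ["of", "for", "to"]     # illustrative subset of the 64 joining terms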
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.

Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. 3 Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.

Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
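One concrete piece of the pipeline above, the sparse pair-pattern frequency matrix, can be sketched like this; pair_pattern_frequency is a placeholder for a corpus query, and the use of SciPy is our choice for illustration, not a description of the original implementation.

from scipy.sparse import lil_matrix

def build_pair_pattern_matrix(pairs, patterns, pair_pattern_frequency):
    # One row per word pair, one column per pattern; most cells remain zero.
    x = lil_matrix((len(pairs), len(patterns)))
    for i, pair in enumerate(pairs):
        for j, pattern in enumerate(patterns):
            count = pair_pattern_frequency(pair, pattern)
            if count:
                x[i, j] = count
    return x.tocsr()        # compressed format, convenient for the SVD step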
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). 4 The corpus consists of about 5 × 10^10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num_sim words (in the following experiments, num_sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num_sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num_sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6 This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm. Stem: quart:volume. Choices: (a) day:night (b) mile:distance (c) decade:century (d) friction:heat (e) part:whole. Solution: (b) mile:distance.
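A minimal Python sketch of step 1, assuming a plain dictionary stands in for Lin's thesaurus; the toy synonym lists and the exact "unusual word" heuristics are illustrative assumptions, not the actual thesaurus output or filter.

```python
def looks_unusual(word):
    # Heuristics from step 1: skip hyphenated words, very short words,
    # non-alphabetic tokens, multi-word phrases, and capitalized words.
    return ("-" in word or len(word) <= 3 or not word.isalpha()
            or " " in word or word[0].isupper())

def make_alternates(pair, synonyms, num_sim=10):
    """Given an original pair (A, B) and a synonym lookup (a stand-in for
    Lin's thesaurus), return up to 2 * num_sim alternate pairs."""
    a, b = pair
    alternates = []
    for a2 in [w for w in synonyms.get(a, []) if not looks_unusual(w)][:num_sim]:
        alternates.append((a2, b))
    for b2 in [w for w in synonyms.get(b, []) if not looks_unusual(w)][:num_sim]:
        alternates.append((a, b2))
    return alternates

# Toy thesaurus entries (illustrative only, not Lin's actual output):
synonyms = {"quart": ["pint", "gallon", "litre"], "volume": ["capacity", "size"]}
print(make_alternates(("quart", "volume"), synonyms))
```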
2. Filter alternates:
For each original pair A:B, filter the 2 × num_sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max_phrase words (we use max_phrase = 5). Sort the alternate pairs by the frequency of their phrases.
Select the top num_filter most frequent alternates and discard the remainder (we use num_filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max_phrase words. The last column in Table 7 shows the pairs that are selected.
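The filtering of step 2 might be sketched as follows; the phrase_freq function and the toy frequency values are placeholders for a real WMTS query, not actual corpus counts.

```python
def filter_alternates(original, alternates, phrase_freq, num_filter=3):
    """Keep the original pair plus the num_filter alternates whose members
    co-occur most often in short phrases; phrase_freq stands in for a
    WMTS frequency query over phrases of at most max_phrase words."""
    ranked = sorted(alternates, key=lambda p: phrase_freq(p[0], p[1]), reverse=True)
    return [original] + ranked[:num_filter]

# Toy frequencies (invented for illustration):
toy_freq = {("pint", "volume"): 191, ("gallon", "volume"): 275,
            ("litre", "volume"): 60, ("quart", "capacity"): 15}
freq = lambda a, b: toy_freq.get((a, b), 0)

kept = filter_alternates(("quart", "volume"),
                         [("pint", "volume"), ("gallon", "volume"),
                          ("litre", "volume"), ("quart", "capacity")],
                         freq, num_filter=3)
print(kept)
```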
3. Find phrases:
For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max_phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns:
For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max_phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num_patterns most frequent patterns and discard the rest (we use num_patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
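A short sketch of the wildcard pattern generation in step 4. For brevity it counts pattern occurrences over a handful of toy phrases rather than over distinct word pairs, which is a simplification of the selection described above.

```python
from itertools import product
from collections import Counter

def patterns_from_phrase(phrase):
    """Build the 2^(n-2) wildcard patterns for an n-word phrase whose first and
    last words are the members of the word pair (one '*' replaces one word)."""
    words = phrase.split()
    inner = words[1:-1]
    pats = []
    for mask in product([False, True], repeat=len(inner)):
        middle = ["*" if use_wild else w for w, use_wild in zip(inner, mask)]
        pats.append(" ".join([words[0]] + middle + [words[-1]]))
    return pats

print(patterns_from_phrase("quart of spray volume"))
# A 5-word phrase (3 intervening words) would give 2^3 = 8 patterns; here 2 give 4.

# Keep only the num_patterns most frequent patterns over all phrases:
counts = Counter()
for phrase in ["quarts in volume", "volume in quarts", "quarts of volume"]:
    counts.update(patterns_from_phrase(phrase))
top = [p for p, _ in counts.most_common(4000)]
```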
5. Map pairs to rows: In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7 Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8 Examples of phrases in the corpus that match the pair quart:volume: "quarts liquid volume", "volume in quarts", "quarts of volume", "volume capacity quarts", "quarts in volume", "volume being about two quarts", "quart total volume", "volume of milk in quarts", "quart of spray volume", "volume include measures like quart".

Table 9 Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                           P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")         4           1            5           19
freq("volume P quart")        10           0            2           16
6. Map patterns to columns: Create a mapping of the top num_patterns patterns to column numbers. For each pattern P, create a column for "word1 P word2" and another column for "word2 P word1". Thus there will be 2 × num_patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
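A minimal sketch of step 7. The sparse matrix is stored here as a plain dictionary rather than in SVDLIBC's input format, and suffix normalization and the column doubling of step 6 are omitted for brevity.

```python
def matches(pattern, phrase):
    """True if the phrase fits the pattern, where '*' stands for exactly one word."""
    p, w = pattern.split(), phrase.split()
    return len(p) == len(w) and all(a == "*" or a == b for a, b in zip(p, w))

def build_sparse_matrix(pair_phrases, patterns):
    """Entries of the pair-by-pattern frequency matrix, stored sparsely as
    {(row, column): frequency}."""
    rows = {pair: i for i, pair in enumerate(pair_phrases)}
    cols = {pat: j for j, pat in enumerate(patterns)}
    cells = {}
    for pair, phrases in pair_phrases.items():
        for phrase in phrases:
            for pat, j in cols.items():
                if matches(pat, phrase):
                    key = (rows[pair], j)
                    cells[key] = cells.get(key, 0) + 1
    return cells, rows, cols

# Toy pairs, phrases, and patterns (illustrative only):
pairs = {("quart", "volume"): ["quart in volume", "quart of volume"],
         ("mile", "distance"): ["mile of distance"]}
pats = ["quart * volume", "quart of volume", "mile of distance"]
print(build_sparse_matrix(pairs, pats)[0])
# {(0, 0): 2, (0, 1): 1, (1, 2): 1}
```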
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let x_{i,j} be the cell in row i and column j of the matrix X from step 7. Let m be the number of rows in X and let n be the number of columns. We wish to weight the cell x_{i,j} by the entropy of the j-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let p_{i,j} be the probability of x_{i,j}, calculated by normalizing the column vector so that the sum of the elements is one, p_{i,j} = x_{i,j} / Σ_{k=1}^{m} x_{k,j}. The entropy of the j-th column is then H_j = −Σ_{k=1}^{m} p_{k,j} log(p_{k,j}). Entropy is at its maximum when p_{i,j} is a uniform distribution, p_{i,j} = 1/m, in which case H_j = log(m). Entropy is at its minimum when p_{i,j} is 1 for some value of i and 0 for all other values of i, in which case H_j = 0. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell x_{i,j} by w_j = 1 − H_j / log(m), which varies from 0 when p_{i,j} is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, log(x_{i,j} + 1). (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all i and all j, replace the original value x_{i,j} in X by the new value w_j log(x_{i,j} + 1). This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): log(x_{i,j} + 1) is the TF term and w_j is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix X, run SVDLIBC. SVD decomposes a matrix X into a product of three matrices UΣV^T, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length: U^T U = V^T V = I) and Σ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If X is of rank r, then Σ is also of rank r. Let Σ_k, where k < r, be the diagonal matrix formed from the top k singular values, and let U_k and V_k be the matrices produced by selecting the corresponding columns from U and V. The matrix U_k Σ_k V_k^T is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, X̂ = U_k Σ_k V_k^T minimizes ||X − X̂||_F over all matrices X̂ of rank k, where ||·||_F denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix U_k Σ_k V_k^T as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping V. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix XX^T contains the dot products of all of the row vectors. We can find the dot product of the i-th and j-th row vectors by looking at the cell in row i, column j of the matrix XX^T. Since V^T V = I, we have XX^T = UΣV^T (UΣV^T)^T = UΣV^T VΣ^T U^T = UΣ(UΣ)^T, which means that we can calculate cosines with the smaller matrix UΣ, instead of using X = UΣV^T (Deerwester et al., 1990).
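The log-entropy weighting and the truncated SVD of steps 8-10 can be sketched with NumPy as follows; this stands in for SVDLIBC, and the toy two-row matrix reuses the Table 9 frequencies only as illustrative input.

```python
import numpy as np

def log_entropy_transform(X):
    """Step 8: replace x_ij by w_j * log(x_ij + 1), where w_j = 1 - H_j / log(m)
    and H_j is the entropy of column j computed from the raw frequencies."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    col_sums = X.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                      # avoid division by zero
    P = X / col_sums
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    H = -(P * logP).sum(axis=0)
    w = 1.0 - H / np.log(m)
    return np.log(X + 1.0) * w

def project(X, k=300):
    """Steps 9-10: truncated SVD; cosines can be computed from U_k Σ_k alone."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(s))
    return U[:, :k] * s[:k]

X = np.array([[4., 1., 5., 19.],
              [10., 0., 2., 16.]])
Z = project(log_entropy_transform(X), k=2)
cos = Z[0] @ Z[1] / (np.linalg.norm(Z[0]) * np.linalg.norm(Z[1]))
print(round(cos, 3))
```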
10. Projection: Calculate U_k Σ_k (we use k = 300). This matrix has the same number of rows as X, but only k columns (instead of 2 × num_patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in U_k Σ_k. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value k = 300 is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.

11. Evaluate alternates: Let A:B and C:D be the two word pairs whose relational similarity is to be measured. After step 2, each of them is represented by its original pair and num_filter alternates, so there are (num_filter + 1)^2 ways to combine a version of A:B with a version of C:D (sixteen combinations, since num_filter = 3). For each combination, look up the corresponding row vectors in U_k Σ_k and calculate their cosine. Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.
The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.

Table 10 The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.
Steps 11 and 12 can be repeated for any two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).
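A sketch of how steps 11 and 12 combine the sixteen cosines; the vector lists are assumed to hold the row vector of the original pair first, followed by its surviving alternates, and the random vectors are only placeholders for rows of U_k Σ_k.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def relational_similarity(vectors_ab, vectors_cd):
    """Steps 11-12: the first vector in each list belongs to the original pair,
    the rest to its surviving alternates. Average only those cosines that are
    at least as large as the cosine of the two originals."""
    cosines = [cosine(u, v) for u in vectors_ab for v in vectors_cd]
    original = cosines[0]
    kept = [c for c in cosines if c >= original]
    return sum(kept) / len(kept)

# To answer a SAT question, pick the choice with the largest average cosine:
# best = max(choices, key=lambda c: relational_similarity(stem_vecs, choice_vecs[c]))
rng = np.random.default_rng(0)
stem = [rng.random(5) for _ in range(4)]       # original + 3 alternates
choice = [rng.random(5) for _ in range(4)]
print(round(relational_similarity(stem, choice), 3))
```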
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions. Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.

Table 11 Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies (e.g., the maximum cosine in Table 10).
Baseline LRA System
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values). Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM. Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
LRA versus VSM
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15 Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num_sim and max_phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, sim_r, should obey the following equation:

sim_r(A:B, C:D) = sim_r(B:A, D:C)    (8)
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
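This symmetry argument can be checked numerically: permuting the elements of both row vectors in the same way leaves the cosine unchanged. The vectors and the permutation below are arbitrary examples, not actual LRA row vectors.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
ab, cd = rng.random(8), rng.random(8)

# The row for B:A (and D:C) is the row for A:B (C:D) with its elements permuted
# in the same way (the "word1 P word2" and "word2 P word1" columns swap places).
perm = np.array([1, 0, 3, 2, 5, 4, 7, 6])
ba, dc = ab[perm], cd[perm]

print(np.isclose(cosine(ab, cd), cosine(ba, dc)))   # True: equation (8) holds
```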
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word1 P word2" and another for "word2 P word1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word1 P word2" from the pattern "word2 P word1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

sim_r(A:B, C:D) ≠ sim_r(B:A, C:D)
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space U_k Σ_k, we use the matrix U_k Σ_k V_k^T. This matrix yields the same cosines as U_k Σ_k, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in U_k Σ_k V_k^T, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact. Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
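A sketch of the truncation used in this experiment; whether "largest values" means largest magnitudes is an assumption here, and the random vectors are only stand-ins for rows of U_k Σ_k V_k^T.

```python
import numpy as np

def keep_top_n(r, n):
    """Zero out all but the n largest elements of a row vector (Section 6.8).
    Assumes 'largest' means largest absolute value."""
    out = np.zeros_like(r)
    idx = np.argsort(np.abs(r))[-n:]
    out[idx] = r[idx]
    return out

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
r1, r2 = rng.standard_normal(8000), rng.standard_normal(8000)
for n in (1, 30, 300, 3000):
    print(n, round(cosine(keep_top_n(r1, n), keep_top_n(r2, n)), 3))
```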
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") (†).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-oneout cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
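The two-stage shortcut can be sketched as a generic leave-one-out nearest-neighbour loop; quick_sim and full_sim are placeholders for the cosine without alternates and the full LRA similarity, and the toy pairs, labels, and similarity function are made up.

```python
def classify_loo(pairs, labels, quick_sim, full_sim, shortlist=30):
    """Leave-one-out single nearest neighbour with the two-stage shortcut:
    rank neighbours with the cheaper similarity (no alternates), then apply
    the full similarity only to the top `shortlist` candidates."""
    predictions = []
    for i, test in enumerate(pairs):
        candidates = [j for j in range(len(pairs)) if j != i]
        candidates.sort(key=lambda j: quick_sim(test, pairs[j]), reverse=True)
        best = max(candidates[:shortlist], key=lambda j: full_sim(test, pairs[j]))
        predictions.append(labels[best])
    return predictions

# Toy usage (the similarity function stands in for the LRA cosine machinery):
pairs = [("flu", "virus"), ("concert", "hall"), ("storm", "damage")]
labels = ["type", "purpose", "cause"]
toy = lambda p, q: len(set(p) & set(q))
print(classify_loo(pairs, labels, toy, toy, shortlist=2))
```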
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
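A sketch of macroaveraging under one common convention (averaging the per-class F scores); the toy gold and predicted labels are invented for illustration.

```python
from collections import defaultdict

def macro_f(gold, predicted):
    """Macroaveraged precision, recall and F: compute P, R, F per class,
    then average over classes so that each class gets equal weight."""
    classes = sorted(set(gold) | set(predicted))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    precisions, recalls, fs = [], [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec); recalls.append(rec); fs.append(f)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(fs) / n

print(macro_f(["agent", "cause", "agent", "purpose"],
              ["agent", "agent", "agent", "purpose"]))
```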
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
LRA versus VSM
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994; Barker and Szpakowicz, 1998; Rosario and Hearst, 2001; Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpusbased, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways:
| 14,134 |
cs0606096
|
2949641004
|
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift -- mainly grammatical or semantic -- has occurred, the alignment is tagged accordingly.
|
First of all, there are those projects that also deal with predicate-argument structures in some way, in particular FrameNet @cite_4 (which is mainly a lexicographical project but can, of course, be adopted for extensive corpus annotation, as is currently done in the project @cite_3 ), PropBank @cite_1 , and NomBank @cite_5 . In these projects, the predicate-argument annotation is the main objective, so they all try some kind of generalisation by organising their predicates in semantic frames (FrameNet) or by following the Levin classes (PropBank, and for nominalisations also NomBank). In FuSe, however, this type of annotation is not an end in itself -- predicates and their arguments simply constitute the transemes. Consequently, their annotation is kept deliberately simple and is entirely predicate-group specific without any attempt at generalisation.
|
{
"abstract": [
"The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated.We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic semantic alternations in the corpus. We describe an automatic system for semantic role tagging trained on the corpus and discuss the effect on its performance of various types of information, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty ''trace'' categories of the treebank.",
"When complete, NomBank will provide annotation of noun arguments in Penn Treebank II (PTB). In PropBank, University of Pennsylvania annotators provide similar information for verbs. Given nominalization verb mappings, the combination of NomBank and PropBank allows for generalization of arguments across parts of speech. This paper describes our annotation task including factors which make assigning role labels to noun arguments a challenging task.",
"",
"We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. The backbone of the annotation are semantic roles in the frame semantics paradigm. We report experiences and evaluate the annotated data from the first project stage. On this basis, we discuss the problems of vagueness and ambiguity in semantic annotation."
],
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_4",
"@cite_3"
],
"mid": [
"2158847908",
"2403542349",
"1552061706",
"2167144759"
]
}
|
Building a resource for studying translation shifts
|
Recent years have shown a growing interest in bi- or multilingual linguistic resources. In particular, parallel corpora (or translation corpora) have become increasingly popular as a resource for various machine translation applications. So far, the linguistic annotation of these resources has mostly been limited to sentence or word alignment, which can be done largely automatically. However, this type of alignment reveals only a small part of the relationship that actually exists between a source text and its translation. In fact, in most cases, straightforward correspondences are the exception rather than the rule, because translations deviate in many ways from their originals: they contain numerous shifts. The notion of shift is an important concept in translation studies (see Section 2.). However, shifts have not yet been dealt with extensively and systematically in corpus linguistics. This paper presents an ongoing effort to build a resource (FuSe) in which shifts (in translations from English to German) are annotated explicitly on the basis of predicate-argument structures, thus making translation equivalence visible. When finished, the resource will open up a possibility for linguists and translation theorists to investigate the correspondences and shifts empirically, but also for researchers in the field of machine translation, who can use this resource to detect the problems they still have to address if they want to make their output resemble human translation. The FuSe annotation project is described in more detail in Section 3., and Section 4. gives an overview of the way it relates to other work.
Translation Shifts
The investigation of shifts has a long-standing tradition in translation studies. Vinay and Darbelnet (1958), working in the field of comparative stylistics, developed a system of translation procedures. Some of them are more or less direct or literal, but some of them are oblique and result in various differences between the source and the target text. These procedures are called transposition (change in word class), modulation (change in semantics), equivalence (completely different translation, e. g. proverbs), and adaptation (change of situation due to cultural differences). There is a slight prescriptive undertone in the work of Vinay and Darbelnet, because they state that oblique procedures should only be used if a more direct one would lead to a wrong or awkward translation. Nevertheless, their approach to translation shifts, even though avant la lettre, continues to be highly influential. The actual term shift was introduced by Catford (1965), who distinguishes formal correspondence, which exists between source and target categories that occupy approximately the same place in their respective systems, and translational equivalence, which holds between two portions of texts that are actually translations of each other. A shift has occurred if there are "departures from formal correspondence" (p. 73) between source and target text, i. e. if translational equivalents are not formal correspondents. According to Catford, there are two major types of shifts: level shifts and category shifts. Level shifts are shifts between grammar and lexis, e. g. the translation of verbal aspect by means of an adverb or vice versa. Category shifts are further subdivided into structure shifts (e. g. a change in clause structure), class shifts (e. g. a change in word class), unit shifts (e. g. translating a phrase with a clause), and intra-system shifts (e. g. a change in number even though the languages have the same number system). One of the problems with Catford's approach is that it relies heavily on the structuralist notion of system and thus presupposes that it is feasible -or indeed possible -to determine and compare the valeurs of any two given linguistic items. His account remains theoretic and, at least to my knowledge, has never been applied to any actual translations, not even by himself. The comparative model by Leuven-Zwart (1989) has been devised as a practical method for studying syntactic, semantic, stylistic, and pragmatic shifts within sentences, clauses, and phrases of literary texts and their translations. 1 It consists of four steps. Firstly, the units to be com-pared must be established. Van Leuven-Zwart calls them transemes, and they consist of predicates and their arguments or of predicateless adverbials. Secondly, the common denominator of the source and the target text transeme -van Leuven-Zwart calls this the architranseme -must be determined. In a third step, the relationship between each transeme and the architranseme -either synonymic or hyponymic -is established. Finally, the two transemes are compared with each other. If both are synonymic with the architranseme, no shift has occurred. Otherwise, there are three major categories of shifts: modulation (if one transeme is a synonym and the other a hyponym), modification (if both transemes are hyponymic with respect to the architranseme), and mutation (if there is no relationship between the transemes). 
There are a number of subcategories for each type of shift: the whole list comprises 37 items, which is why the model has sometimes been criticized for being too complex to be applied consistently.
The FuSe Annotation Project
The Data
The data annotated in FuSe are taken from the Europarl corpus (Koehn, 2002) 2 , which contains proceedings of the European parliament. In a resource designed for studying translation shifts, it is not enough that the data be parallel. It is of vital importance that they are actually translations of each other. 3 Since many translation shifts are directional (e. g. passivisation), the direction of the translation must also be clear (in this case from English into German). We used the language attribute provided by the corpus to extract those sentences that were originally English. In the corpus, the language attribute is only used if the language of the corpus file does not correspond with the original language. Thus, we extracted those sentences from the English subcorpus that had no language attribute and were aligned to sentences with the language attribute "EN" in the German subcorpus. Furthermore, in order to ensure that the English source sentences were produced by native speakers, we matched the value of the name attribute against the list of British and Irish Members of Parliament, which is available on the Europarl website. 4
Predicates and Arguments as Transemes
The first step in annotating translation shifts is determining the transemes, i. e. those translation units on which the comparison between source and target text will be based. It was mentioned in Section 2. that the transemes originally used by Leuven-Zwart (1989) consist of predicates and their arguments (and adverbials). The disadvantage with this division is that the transemes are quite complex (whole clauses), which means that there could occur several shifts within one transeme. While this seems to have been unproblematic for van Leuven-Zwart, who worked with pen and paper, the units must be smaller in a computational annotation project in order for the shifts to be assigned unambiguously.

1 [...] the comparative model are used to gain insight into shifts on the story level and into the norms governing the translation process (Leuven-Zwart, 1990). This model is not further discussed, because it is not related to the approach presented in this paper.
2 We use the XCES version by Tiedemann and Nygaard (2004).
3 The Europarl corpus is available in eleven languages, so large parts of the English and German subcorpora will be translated from a third language.
4 http://www.europarl.eu.int/
The approach presented in this paper is also based on predicate-argument structures, because it is assumed that these capture the major share of the meaning of a sentence and are most likely to be represented in both source and target sentence. However, unlike in van Leuven-Zwart's approach, each predicate (lexical verbs, certain nouns and certain adjectives) and each argument represents a transeme in itself, i. e. there are predicate transemes and argument transemes. Of course, even this more fine-grained annotation entails that correspondences and shifts on other levels will be missed, but in order to ensure workability and reproducibility of the annotation, this restriction seems justifiable.
The predicate-argument structures are annotated monolingually, and since the annotation is mostly a means to an end, it is kept deliberately simple: predicates are represented by the capitalised citation form of the lexical item (e. g. DRAMATISE). They are assigned a class based on their syntactic form (v, n, a, c, l for 'verbal', 'nominal', 'adjectival', 'copula', and 'light verb construction' respectively). Homonymous predicates are disambiguated for word senses, and related predicates (e. g. a verb and its nominalisation) are assigned to a common predicate group. In order to facilitate the annotation process, the arguments are given short intuitive role names (e. g. ENT DRAMATISED, i. e. the entity being dramatised). These role names have to be used consistently only within a predicate group. If, for example, an argument of the predicate DRAMATISE has been assigned the role ENT DRAMATISED and the annotator encounters a comparable role as an argument to the predicate DRAMATISATION, the same role name for this argument has to be used. Other than that, no attempt at generalisation along the lines of semantic cases is made. If a predicate is realised in a way that might influence the realisation of its argument structure in a systematic way (e. g. infinitive, passive), it receives a tag to indicate this. If one of the arguments is a relative pronoun, its antecedent is also annotated. This is done in order to avoid the annotation of a pronominalisation shift (see Section 3.3.1.) in these cases, since the antecedent of relative pronouns is so close that it would be wrong to call this a pronominalisation. Apart from this, there is no anaphor resolution.
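As an illustration of the annotation scheme just described, the following sketch shows one possible in-memory representation of a monolingual predicate-argument structure. The class and field names (and the group label in the usage example) are our own and are not taken from the FuSer database.

```python
# A minimal sketch of one monolingual predicate-argument structure; illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

PREDICATE_CLASSES = {"v", "n", "a", "c", "l"}  # verbal, nominal, adjectival, copula, light verb

@dataclass
class Argument:
    role: str                         # short intuitive role name, e.g. "ENT_DRAMATISED"
    tokens: List[str]                 # surface tokens of the argument transeme
    antecedent: Optional[str] = None  # filled only for relative pronouns

@dataclass
class Predicate:
    lemma: str                        # capitalised citation form, e.g. "DRAMATISE"
    pred_class: str                   # one of PREDICATE_CLASSES
    sense: Optional[str] = None       # word-sense label for homonymous predicates
    group: Optional[str] = None       # predicate group shared e.g. by DRAMATISE / DRAMATISATION
    realisation_tags: List[str] = field(default_factory=list)  # e.g. ["passive"], ["infinitive"]
    arguments: List[Argument] = field(default_factory=list)

    def __post_init__(self):
        assert self.pred_class in PREDICATE_CLASSES, "unknown predicate class"

# Usage (group name is hypothetical):
dramatise = Predicate(lemma="DRAMATISE", pred_class="v", group="DRAMATIS-GROUP",
                      realisation_tags=["passive"],
                      arguments=[Argument(role="ENT_DRAMATISED", tokens=["it"])])
```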
Shift Annotation
After the predicate-argument structures have been annotated monolingually, the source predicates and arguments are aligned to their target counterparts. Sometimes, this is possible in a straightforward manner, like in sentence pair (1). 5
(1) However, more often than not the relationship will not be this simple. Whenever a shift occurs, the alignment between the two predicates or arguments is tagged. Mainly, the shifts are categorised according to whether they occur on the level of grammar or on the level of semantics. The following is an introduction to the main types of shifts. They are first described in Sections 3.3.1. to 3.3.3., and to make this more concrete, a few examples are given in Section 3.3.4.
Grammatical Shifts Category Change
This tag is assigned whenever the corresponding transemes belong to different syntactic categories, and it can be applied both to predicates and arguments. A typical example would be a verbal predicate transeme that is translated as a nominal predicate (nominalisation).
Passivisation This tag can only be assigned to the alignment between verbal predicates (and certain light verb constructions) and is used if an active predicate has been rendered as a passive predicate. Often, but not always, a passivisation shift goes hand in hand with a deletion shift (see below), namely if the source subject is no longer explicitly expressed in the passivised translation.
Depassivisation Conversely, if a passive verbal predicate has been rendered as an active verbal predicate, this is tagged depassivisation. If the source predicate-argument structure lacks the agentive argument, there will also be an addition shift (see below).
Pronominalisation This tag can only be assigned to the alignment between arguments. It is used if the source argument is realised by lexical material (or a proper name) but translated as a pronoun. This tag is not used if the pronoun in question is a relative pronoun, because the antecedent can always be found in close vicinity and is annotated as part of the transeme (see Section 3.2.).
Depronominalisation This tag can only be assigned to the alignment between arguments. It is used if a source argument transeme is realised as a pronoun but translated with lexical material or a proper name.
Number Change
This tag is assigned if the corresponding transemes differ in number, i. e. one is singular, the other plural. This happens mainly between argument transemes, but can also occur between nominal predicates.
Semantic Shifts Semantic Modification
This tag is assigned if the two transemes are not straightforward equivalents of each other because of some type of semantic divergence, for example a difference in aktionsart between two verbal predicates. It is rather difficult to find objective criteria for this shift. In the majority of cases two corresponding transemes exhibit some kind of divergence if taken out of their context, but are more or less inconspicuous translations in the concrete sentence pair. Since an inflationary use of this tag would decrease its expressiveness, semantic likeness is interpreted somewhat liberally and the tag is assigned only if the semantic difference is significant. Of course, this is far from being a proper operationalisation, and we hope to clarify the concept as we go along.
[Footnote: Opus/Europarl (en): file ep-00-01-18.xml, sentence id 4.2]
Explicitation This is a subcategory of semantic modification, which is assigned if the target transeme is lexically more specific than the source transeme. A clear case of explicitation is when extra information has been added to the transeme. One could also speak of explicitation when a transeme has been depronominalised (see Section 3.3.1.). However, since the depronominalisation shift is already used in these cases, this would be redundant and is therefore not annotated.
Generalisation This is the counterpart to the explicitation shift and is used when the target transeme is lexically less specific than its source, and in particular if some information has been left out in the translation. To avoid redundancy, it is not used for pronominalisation shifts.
Addition This tag is assigned to a target transeme, either predicate or argument, that has been added in the translation process. For instance, if there has been a depassivisation shift and if the agentive argument had not been realised in the source text, it must be added in the target text. Note that we do not speak of addition if only part of the transeme has been added. In this case, the explicitation tag is to be assigned (see above).
Deletion This tag is assigned to a source transeme that is untranslated in the target version of the text. Analogous to the addition shift, this tag is only used if the entire transeme has been deleted. If it is only part of a transeme that is untranslated, the shift is classified as generalisation.
Mutation This tag is used if it is possible to tell that two transemes are translation equivalents (in the sense intended by Catford, see Section 2.), but if they differ radically in their lexical meaning. This shift often involves a number of other shifts as well.
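A hypothetical record format for such tagged alignments is sketched below; the tag inventory mirrors the categories described above, while the class and field names are illustrative rather than the project's actual data model. The check that at most one grammatical and one semantic shift is assigned to a pair of transemes anticipates the rule discussed under Problematic Issues below.

```python
# Illustrative record for a transeme alignment and its shift tags (not the real data model).
from dataclasses import dataclass
from typing import Optional

GRAMMATICAL_SHIFTS = {
    "category_change", "passivisation", "depassivisation",
    "pronominalisation", "depronominalisation", "number_change",
}
SEMANTIC_SHIFTS = {
    "semantic_modification", "explicitation", "generalisation",
    "addition", "deletion", "mutation",
}

@dataclass
class TransemeAlignment:
    source_id: Optional[str]          # None when a target transeme was added in translation
    target_id: Optional[str]          # None when a source transeme was deleted
    grammatical_shift: Optional[str] = None
    semantic_shift: Optional[str] = None

    def __post_init__(self):
        # At most one grammatical and one semantic shift per pair of transemes.
        if self.grammatical_shift is not None:
            assert self.grammatical_shift in GRAMMATICAL_SHIFTS
        if self.semantic_shift is not None:
            assert self.semantic_shift in SEMANTIC_SHIFTS

# Usage (ids are hypothetical):
a = TransemeAlignment(source_id="en_pred_1", target_id="de_pred_1",
                      grammatical_shift="depassivisation")
```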
Problematic Issues
Long Transemes Normally, a maximum of two shifts can be assigned to any one pair of transemes: a grammatical and a semantic shift. This can be a problem if the transemes are long, like for instance clausal arguments. Because of their length, they can contain multiple shifts, and it is difficult to determine which of them is to be the basis for the shift annotation, in particular if they are contradictory (e. g. there might occur both generalisation and explicitation in different parts of the transeme). The general rule here is to check whether the shift actually affects the overall transeme. In most cases, long transemes contain further transemes, e. g. clausal arguments contain at least one extra predicate plus arguments, which will be represented by their own predicate-argument structure, and it is on this level that these shifts are recorded.
Lexical Modals Modal auxiliaries are currently not annotated as separate predicates. This is no problem as long as the modality is expressed by means of a modal auxiliary in both languages. However, sometimes modality is expressed by a full verb with modal meaning (e. g. to wish), which is consequently annotated as a predicate. If the other language uses a modal auxiliary, no alignment is possible, because there is no predicate transeme. Normally, when a predicate transeme has no correspondent in the other language, one would assign the addition or deletion shift, but since nothing really has been added or deleted, this is not a particularly satisfying solution. One way out would be to rethink our attitude towards modals and simply annotate them as predicates. While the decision is still pending, such predicates are tagged dangling modal.
Structure Shifts
It also happens that a transeme cannot be aligned because it is not realised as part of a predicate-argument structure in the other language. An example of this would be a full verb with modal meaning that is rendered as an adverb in the other language (e. g. to wish – gern, 'with pleasure'). Again, it would not be adequate to speak of addition or deletion. However, since these cases constitute real structural shifts, the additional tag non-pas (i. e. 'non-predicate-argument-structure') has been introduced to deal with them.
Examples
In this section, the shift annotation described in the previous sections is illustrated by a few examples from the corpus.
(2)
a. Both sentences contain one predicate transeme (DRAMATISE and AUFBAUSCHEN) and two argument transemes. The two predicates differ with respect to voice: while the source predicate in (2-a) is passive, its German counterpart (2-b) is active, so the alignment between these two predicates would receive the depassivisation tag. As a consequence of the change of voice, the agentive argument, which is left unexpressed in the passive source sentence, is explicitly expressed in the German translation (Wir, 'we'), and is consequently tagged addition. Conversely, the argument into more than that is left unexpressed in the German version -this is an instance of deletion. Furthermore, the subject of the English sentence (it), the entity that is being dramatised, is expressed lexically in the translation. The alignment between these two arguments is thus tagged as depronominalisation.
(3) In this sentence pair, the alignment between the two predicate transemes HAVE and SETZEN is tagged semantic modification because they differ in aktionsart: the English predicate is static, while the German predicate is telic.
(4) a. Example (4) illustrates the use of the generalisation shift. The second argument transeme in the original (4-a) contains explicit information on what the issue is about. This information is left out in the translation (4-b), with the result that the transeme is more general. Since it is only a part of the transeme that has been dropped in the translation, this is not annotated as deletion.
Tools
The (monolingual) predicate-argument structures are annotated with FuSer (Pyka and Schwall, 2006). The annotator is presented with a sentence and, if available, 10 a graphical view of its syntactic structure, and selects those tokens (or nodes from the tree) which are to be annotated as a predicate. The annotator can choose from a list of predicates, or, if the predicate type is encountered for the first time, add a new predicate type or group to the database. Once the predicate is annotated, the procedure is repeated for the arguments of this predicate. Again, either the argument types are chosen from the list or added to the database. Additionally, the necessary tags (see Section 3.2.) are added to the predicates and arguments. The annotation process is then repeated for all the predicate-argument structures in a sentence. They are annotated independently, i.e. there is no nesting of predicates. Currently, the predicate-argument structures are annotated manually, which is a time-consuming task. However, there are a couple of "wizards" under development which will assist the annotator. For instance, there will be a wizard to scan the sentence for predicate candidates or to suggest suitable argument types when the predicate is already included in the database. Technically, FuSer is a platform-independent Java application which operates on an extended ANNOTATE MySQL database. This data model makes it possible to be flexible with respect to the input data, which can be either raw (as is currently the case) or syntactically annotated. Furthermore, since the ANNOTATE database is only extended and not modified, data processed with FuSer can always be processed by ANNOTATE afterwards (e. g. for further annotation). It is planned to extend FuSer for the bilingual alignment and the shift annotation. While this extension is under development, we use a simple Web-based alignment tool (XML, Perl, CGI) for this task (see Figure 1). The browser window is divided into three parts: in the upper third, the annotator can select a sentence pair. In the middle part, all the predicate-argument structures that have been annotated for these sentences are listed, with the predicates and arguments being highlighted in different colours. The annotator chooses (by means of radio buttons) two corresponding predicate-argument structures, which are then displayed in more detail in the lower window. Here, the annotator can align corresponding predicates and arguments with each other and, if necessary, choose up to two shift-tags for each pair of transemes from a drop-down menu. The lower window can also be used for viewing existing annotation.
Outlook
So far, the annotated data consist of English source texts that have been translated into German. It would be interesting to include the opposite direction as well, i. e. German source texts that have been translated into English. This would make it possible -by comparing the types of shifts and their quantity -to find out which shifts have occurred due to the direction of the translation process, and which shifts might be due to the translation process as such (e. g. explicitation is taken to be such a potential "translation universal" in current translation research, see Mauranen and Kujamäki (2004)). Furthermore, the genre of the Europarl corpus -parliamentary proceedings -is highly restricted and it would be a useful extension to include other types of data (e. g. technical language, literary prose) in order to compare the occurrence of shifts across genres.
Acknowledgements
I would like to thank Hendrik Feddes, Robert Memering, Frank Schumacher, and the three anonymous reviewers for helpful and valuable comments.
| 3,614 |
cs0606096
|
2949641004
|
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic -has occurred, the alignment is tagged accordingly.
|
In the project @cite_8 , texts from six languages (Arabic, French, Hindi, Japanese, Korean, and Spanish) and their translations into English are annotated for interlingual content. For each original text, at least two English translations are being annotated (so as to be able to study paraphrases), and the annotation proceeds incrementally over three increasingly abstract levels of representation.
|
{
"abstract": [
"This paper describes a multi-site project to annotate six sizable bilingual parallel corpora for interlingual content. After presenting the background and objectives of the effort, we will go on to describe the data set that is being annotated, the interlingua representation language used, an interface environment that supports the annotation task and the annotation process itself. We will then present a preliminary version of our evaluation methodology and conclude with a summary of the current status of the project along with a number of issues which have arisen."
],
"cite_N": [
"@cite_8"
],
"mid": [
"1499072090"
]
}
|
Building a resource for studying translation shifts
|
Recent years have shown a growing interest in bi-or multilingual linguistic resources. In particular, parallel corpora (or translation corpora) have become increasingly popular as a resource for various machine translation applications. So far, the linguistic annotation of these resources has mostly been limited to sentence or word alignment, which can be done largely automatically. However, this type of alignment reveals only a small part of the relationship that actually exists between a source text and its translation. In fact, in most cases, straightforward correspondences are the exception rather than the rule, because translations deviate in many ways from their originals: they contain numerous shifts. The notion of shift is an important concept in translation studies (see Section 2.). However, shifts have not yet been dealt with extensively and systematically in corpus linguistics. This paper presents an ongoing effort to build a resource (FuSe) in which shifts (in translations from English to German) are annotated explicitly on the basis of predicate-argument structures, thus making translation equivalence visible. When finished, the resource will open up a possibility for linguists and translation theorists to investigate the correspondences and shifts empirically, but also for researchers in the field of machine translation, who can use this resource to detect the problems they still have to address if they want to make their output resemble human translation. The FuSe annotation project is described in more detail in Section 3., and Section 4. gives an overview of the way it relates to other work.
Translation Shifts
The investigation of shifts has a long-standing tradition in translation studies. Vinay and Darbelnet (1958), working in the field of comparative stylistics, developed a system of translation procedures. Some of them are more or less direct or literal, but some of them are oblique and result in various differences between the source and the target text. These procedures are called transposition (change in word class), modulation (change in semantics), equivalence (completely different translation, e. g. proverbs), and adaptation (change of situation due to cultural differences). There is a slight prescriptive undertone in the work of Vinay and Darbelnet, because they state that oblique procedures should only be used if a more direct one would lead to a wrong or awkward translation. Nevertheless, their approach to translation shifts, even though avant la lettre, continues to be highly influential. The actual term shift was introduced by Catford (1965), who distinguishes formal correspondence, which exists between source and target categories that occupy approximately the same place in their respective systems, and translational equivalence, which holds between two portions of texts that are actually translations of each other. A shift has occurred if there are "departures from formal correspondence" (p. 73) between source and target text, i. e. if translational equivalents are not formal correspondents. According to Catford, there are two major types of shifts: level shifts and category shifts. Level shifts are shifts between grammar and lexis, e. g. the translation of verbal aspect by means of an adverb or vice versa. Category shifts are further subdivided into structure shifts (e. g. a change in clause structure), class shifts (e. g. a change in word class), unit shifts (e. g. translating a phrase with a clause), and intra-system shifts (e. g. a change in number even though the languages have the same number system). One of the problems with Catford's approach is that it relies heavily on the structuralist notion of system and thus presupposes that it is feasible -or indeed possible -to determine and compare the valeurs of any two given linguistic items. His account remains theoretic and, at least to my knowledge, has never been applied to any actual translations, not even by himself. The comparative model by Leuven-Zwart (1989) has been devised as a practical method for studying syntactic, semantic, stylistic, and pragmatic shifts within sentences, clauses, and phrases of literary texts and their translations. 1 It consists of four steps. Firstly, the units to be com-pared must be established. Van Leuven-Zwart calls them transemes, and they consist of predicates and their arguments or of predicateless adverbials. Secondly, the common denominator of the source and the target text transeme -van Leuven-Zwart calls this the architranseme -must be determined. In a third step, the relationship between each transeme and the architranseme -either synonymic or hyponymic -is established. Finally, the two transemes are compared with each other. If both are synonymic with the architranseme, no shift has occurred. Otherwise, there are three major categories of shifts: modulation (if one transeme is a synonym and the other a hyponym), modification (if both transemes are hyponymic with respect to the architranseme), and mutation (if there is no relationship between the transemes). 
There are a number of subcategories for each type of shift: the whole list comprises 37 items, which is why the model has sometimes been criticized for being too complex to be applied consistently.
The FuSe Annotation Project
The Data
The data annotated in FuSe are taken from the Europarl corpus (Koehn, 2002) 2 , which contains proceedings of the European parliament. In a resource designed for studying translation shifts, it is not enough that the data be parallel. It is of vital importance that they are actually translations of each other. 3 Since many translation shifts are directional (e. g. passivisation), the direction of the translation must also be clear (in this case from English into German). We used the language attribute provided by the corpus to extract those sentences that were originally English. In the corpus, the language attribute is only used if the language of the corpus file does not correspond with the original language. Thus, we extracted those sentences from the English subcorpus that had no language attribute and were aligned to sentences with the language attribute "EN" in the German subcorpus. Furthermore, in order to ensure that the English source sentences were produced by native speakers, we matched the value of the name attribute against the list of British and Irish Members of Parliament, which is available on the Europarl website. 4
Predicates and Arguments as Transemes
The first step in annotating translation shifts is determining the transemes, i. e. those translation units on which the comparison between source and target text will be based. It was mentioned in Section 2. that the transemes originally used by Leuven-Zwart (1989) consist of predicates and their arguments (and adverbials). The disadvantage with this division is that the transemes are quite complex (whole clauses), which means that there could occur several shifts within one transeme. While this seems to have been unproblematic for van Leuven-Zwart, who worked with pen and paper, the units must be smaller in a computational annotation project in order for the shifts to be assigned unambiguously.
[Footnotes: (1) [...] the comparative model are used to gain insight into shifts on the story level and into the norms governing the translation process (Leuven-Zwart, 1990). This model is not further discussed, because it is not related to the approach presented in this paper. (2) We use the XCES version by Tiedemann and Nygaard (2004). (3) The Europarl corpus is available in eleven languages, so large parts of the English and German subcorpora will be translated from a third language. (4) http://www.europarl.eu.int/]
The approach presented in this paper is also based on predicate-argument structures, because it is assumed that these capture the major share of the meaning of a sentence and are most likely to be represented in both source and target sentence. However, unlike in van Leuven-Zwart's approach, each predicate (lexical verbs, certain nouns and certain adjectives) and each argument represents a transeme in itself, i. e. there are predicate transemes and argument transemes. Of course, even this more fine-grained annotation entails that correspondences and shifts on other levels will be missed, but in order to ensure workability and reproducibility of the annotation, this restriction seems justifiable.
The predicate-argument structures are annotated monolingually, and since the annotation is mostly a means to an end, it is kept deliberately simple: predicates are represented by the capitalised citation form of the lexical item (e. g. DRAMATISE). They are assigned a class based on their syntactic form (v, n, a, c, l for 'verbal', 'nominal', 'adjectival', 'copula', and 'light verb construction' respectively). Homonymous predicates are disambiguated for word senses, and related predicates (e. g. a verb and its nominalisation) are assigned to a common predicate group. In order to facilitate the annotation process, the arguments are given short intuitive role names (e. g. ENT DRAMATISED, i. e. the entity being dramatised). These role names have to be used consistently only within a predicate group. If, for example, an argument of the predicate DRAMATISE has been assigned the role ENT DRAMATISED and the annotator encounters a comparable role as an argument to the predicate DRAMATISATION, the same role name for this argument has to be used. Other than that, no attempt at generalisation along the lines of semantic cases is made. If a predicate is realised in a way that might influence the realisation of its argument structure in a systematic way (e. g. infinitive, passive), it receives a tag to indicate this. If one of the arguments is a relative pronoun, its antecedent is also annotated. This is done in order to avoid the annotation of a pronominalisation shift (see Section 3.3.1.) in these cases, since the antecedent of relative pronouns is so close that it would be wrong to call this a pronominalisation. Apart from this, there is no anaphor resolution.
Shift Annotation
After the predicate-argument structures have been annotated monolingually, the source predicates and arguments are aligned to their target counterparts. Sometimes, this is possible in a straightforward manner, like in sentence pair (1). 5
(1) However, more often than not the relationship will not be this simple. Whenever a shift occurs, the alignment between the two predicates or arguments is tagged. Mainly, the shifts are categorised according to whether they occur on the level of grammar or on the level of semantics. The following is an introduction to the main types of shifts. They are first described in Sections 3.3.1. to 3.3.3., and to make this more concrete, a few examples are given in Section 3.3.4.
Grammatical Shifts Category Change
This tag is assigned whenever the corresponding transemes belong to different syntactic categories, and it can be applied both to predicates and arguments. A typical example would be a verbal predicate transeme that is translated as a nominal predicate (nominalisation).
Passivisation This tag can only be assigned to the alignment between verbal predicates (and certain light verb constructions) and is used if an active predicate has been rendered as a passive predicate. Often, but not always, a passivisation shift goes hand in hand with a deletion shift (see below), namely if the source subject is no longer explicitly expressed in the passivised translation.
Depassivisation Conversely, if a passive verbal predicate has been rendered as an active verbal predicate, this is tagged depassivisation. If the source predicate-argument structure lacks the agentive argument, there will also be an addition shift (see below).
Pronominalisation This tag can only be assigned to the alignment between arguments. It is used if the source argument is realised by lexical material (or a proper name) but translated as a pronoun. This tag is not used if the pronoun in question is a relative pronoun, because the antecedent can always be found in close vicinity and is annotated as part of the transeme (see Section 3.2.).
Depronominalisation This tag can only be assigned to the alignment between arguments. It is used if a source argument transeme is realised as a pronoun but translated with lexical material or a proper name.
Number Change
This tag is assigned if the corresponding transemes differ in number, i. e. one is singular, the other plural. This happens mainly between argument transemes, but can also occur between nominal predicates.
Semantic Shifts Semantic Modification
This tag is assigned if the two transemes are not straightforward equivalents of each other because of some type of semantic divergence, for example a difference in aktionsart between two verbal predicates. It is rather difficult to find objective criteria for this shift. In the majority of cases two corresponding transemes exhibit some kind of divergence if taken out of their context, but are more or less inconspicuous translations in the concrete sentence pair. Since an inflationary use of this tag would decrease its expressiveness, semantic likeness is interpreted somewhat liberally and the tag is assigned only if the semantic difference is significant. Of course, this is far from being a proper operationalisation, and we hope to clarify the concept as we go along.
[Footnote: Opus/Europarl (en): file ep-00-01-18.xml, sentence id 4.2]
Explicitation This is a subcategory of semantic modification, which is assigned if the target transeme is lexically more specific than the source transeme. A clear case of explicitation is when extra information has been added to the transeme. One could also speak of explicitation when a transeme has been depronominalised (see Section 3.3.1.). However, since the depronominalisation shift is already used in these cases, this would be redundant and is therefore not annotated.
Generalisation This is the counterpart to the explicitation shift and is used when the target transeme is lexically less specific than its source, and in particular if some information has been left out in the translation. To avoid redundancy, it is not used for pronominalisation shifts.
Addition This tag is assigned to a target transeme, either predicate or argument, that has been added in the translation process. For instance, if there has been a depassivisation shift and if the agentive argument had not been realised in the source text, it must be added in the target text. Note that we do not speak of addition if only part of the transeme has been added. In this case, the explicitation tag is to be assigned (see above).
Deletion This tag is assigned to a source transeme that is untranslated in the target version of the text. Analogous to the addition shift, this tag is only used if the entire transeme has been deleted. If it is only part of a transeme that is untranslated, the shift is classified as generalisation.
Mutation This tag is used if it is possible to tell that two transemes are translation equivalents (in the sense intended by Catford, see Section 2.), but if they differ radically in their lexical meaning. This shift often involves a number of other shifts as well.
Problematic Issues
Long Transemes Normally, a maximum of two shifts can be assigned to any one pair of transemes: a grammatical and a semantic shift. This can be a problem if the transemes are long, like for instance clausal arguments. Because of their length, they can contain multiple shifts, and it is difficult to determine which of them is to be the basis for the shift annotation, in particular if they are contradictory (e. g. there might occur both generalisation and explicitation in different parts of the transeme). The general rule here is to check whether the shift actually affects the overall transeme. In most cases, long transemes contain further transemes, e. g. clausal arguments contain at least one extra predicate plus arguments, which will be represented by their own predicate-argument structure, and it is on this level that these shifts are recorded.
Lexical Modals Modal auxiliaries are currently not annotated as separate predicates. This is no problem as long as the modality is expressed by means of a modal auxiliary in both languages. However, sometimes modality is expressed by a full verb with modal meaning (e. g. to wish), which is consequently annotated as a predicate. If the other language uses a modal auxiliary, no alignment is possible, because there is no predicate transeme. Normally, when a predicate transeme has no correspondent in the other language, one would assign the addition or deletion shift, but since nothing really has been added or deleted, this is not a particularly satisfying solution. One way out would be to rethink our attitude towards modals and simply annotate them as predicates. While the decision is still pending, such predicates are tagged dangling modal.
Structure Shifts
It also happens that a transeme cannot be aligned because it is not realised as part of a predicate-argument structure in the other language. An example of this would be a full verb with modal meaning that is rendered as an adverb in the other language (e. g. to wish – gern, 'with pleasure'). Again, it would not be adequate to speak of addition or deletion. However, since these cases constitute real structural shifts, the additional tag non-pas (i. e. 'non-predicate-argument-structure') has been introduced to deal with them.
Examples
In this section, the shift annotation described in the previous sections is illustrated by a few examples from the corpus.
(2)
a. Both sentences contain one predicate transeme (DRAMATISE and AUFBAUSCHEN) and two argument transemes. The two predicates differ with respect to voice: while the source predicate in (2-a) is passive, its German counterpart (2-b) is active, so the alignment between these two predicates would receive the depassivisation tag. As a consequence of the change of voice, the agentive argument, which is left unexpressed in the passive source sentence, is explicitly expressed in the German translation (Wir, 'we'), and is consequently tagged addition. Conversely, the argument into more than that is left unexpressed in the German version -this is an instance of deletion. Furthermore, the subject of the English sentence (it), the entity that is being dramatised, is expressed lexically in the translation. The alignment between these two arguments is thus tagged as depronominalisation.
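To illustrate, the shifts just discussed for example (2) could be recorded as follows. This is a self-contained sketch using ad-hoc records; the tag names follow Section 3.3., the record format is ours, and the placeholder stands in for the actual German noun phrase, which is not reproduced here.

```python
# Encoding of the shifts in example (2) as simple illustrative records.
example_2_alignments = [
    {"source": "DRAMATISE (passive)", "target": "AUFBAUSCHEN (active)",
     "grammatical_shift": "depassivisation", "semantic_shift": None},
    {"source": None, "target": "Wir",                 # agent expressed only in the translation
     "grammatical_shift": None, "semantic_shift": "addition"},
    {"source": "into more than that", "target": None,  # left untranslated
     "grammatical_shift": None, "semantic_shift": "deletion"},
    {"source": "it", "target": "<lexical NP in the German translation>",
     "grammatical_shift": "depronominalisation", "semantic_shift": None},
]

for a in example_2_alignments:
    print(a["source"], "->", a["target"], "|",
          a["grammatical_shift"] or "-", "/", a["semantic_shift"] or "-")
```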
(3) In this sentence pair, the alignment between the two predicate transemes HAVE and SETZEN is tagged semantic modification because they differ in aktionsart: the English predicate is static, while the German predicate is telic.
(4) a. Example (4) illustrates the use of the generalisation shift. The second argument transeme in the original (4-a) contains explicit information on what the issue is about. This information is left out in the translation (4-b), with the result that the transeme is more general. Since it is only a part of the transeme that has been dropped in the translation, this is not annotated as deletion.
Tools
The (monolingual) predicate-argument structures are annotated with FuSer (Pyka and Schwall, 2006). The annotator is presented with a sentence and, if available, 10 a graphical view of its syntactic structure, and selects those tokens (or nodes from the tree) which are to be annotated as a predicate. The annotator can choose from a list of predicates, or, if the predicate type is encountered for the first time, add a new predicate type or group to the database. Once the predicate is annotated, the procedure is repeated for the arguments of this predicate. Again, either the argument types are chosen from the list or added to the database. Additionally, the necessary tags (see Section 3.2.) are added to the predicates and arguments. The annotation process is then repeated for all the predicate-argument structures in a sentence. They are annotated independently, i.e. there is no nesting of predicates. Currently, the predicate-argument structures are annotated manually, which is a time-consuming task. However, there are a couple of "wizards" under development which will assist the annotator. For instance, there will be a wizard to scan the sentence for predicate candidates or to suggest suitable argument types when the predicate is already included in the database. Technically, FuSer is a platform-independent Java application which operates on an extended ANNOTATE MySQL database. This data model makes it possible to be flexible with respect to the input data, which can be either raw (as is currently the case) or syntactically annotated. Furthermore, since the ANNOTATE database is only extended and not modified, data processed with FuSer can always be processed by ANNOTATE afterwards (e. g. for further annotation). It is planned to extend FuSer for the bilingual alignment and the shift annotation. While this extension is under development, we use a simple Web-based alignment tool (XML, Perl, CGI) for this task (see Figure 1). The browser window is divided into three parts: in the upper third, the annotator can select a sentence pair. In the middle part, all the predicate-argument structures that have been annotated for these sentences are listed, with the predicates and arguments being highlighted in different colours. The annotator chooses (by means of radio buttons) two corresponding predicate-argument structures, which are then displayed in more detail in the lower window. Here, the annotator can align corresponding predicates and arguments with each other and, if necessary, choose up to two shift-tags for each pair of transemes from a drop-down menu. The lower window can also be used for viewing existing annotation.
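Purely as an illustration of what such an XML-based alignment format might look like, the following sketch serializes tagged alignments to a small XML document. The element and attribute names are invented for this example and do not reflect the tool's actual file format.

```python
# Hypothetical serialization of tagged alignments to XML (names are invented).
import xml.etree.ElementTree as ET

def serialize_alignment(pair_id, alignments):
    """alignments: list of dicts with source/target transeme ids and shift tags."""
    root = ET.Element("sentencePair", id=str(pair_id))
    for a in alignments:
        al = ET.SubElement(root, "align",
                           source=a.get("source") or "NONE",
                           target=a.get("target") or "NONE")
        for key in ("grammatical_shift", "semantic_shift"):
            if a.get(key):
                ET.SubElement(al, "shift", type=key, value=a[key])
    return ET.tostring(root, encoding="unicode")

print(serialize_alignment(1, [
    {"source": "p1_en", "target": "p1_de", "grammatical_shift": "depassivisation"},
]))
```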
Outlook
So far, the annotated data consist of English source texts that have been translated into German. It would be interesting to include the opposite direction as well, i. e. German source texts that have been translated into English. This would make it possible -by comparing the types of shifts and their quantity -to find out which shifts have occurred due to the direction of the translation process, and which shifts might be due to the translation process as such (e. g. explicitation is taken to be such a potential "translation universal" in current translation research, see Mauranen and Kujamäki (2004)). Furthermore, the genre of the Europarl corpus -parliamentary proceedings -is highly restricted and it would be a useful extension to include other types of data (e. g. technical language, literary prose) in order to compare the occurrence of shifts across genres.
Acknowledgements
I would like to thank Hendrik Feddes, Robert Memering, Frank Schumacher, and the three anonymous reviewers for helpful and valuable comments.
| 3,614 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send/receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
In recent years, overlay networks have proven a popular way of disseminating potentially large files (such as a new software product or a video) from a single server @math to a potentially large group of @math end users via the Internet. A number of algorithms and protocols have been suggested, implemented and studied. In particular, much attention has been given to peer-to-peer (P2P) systems such as BitTorrent @cite_15 , Slurpie @cite_24 , SplitStream @cite_32 , Bullet' @cite_22 and Avalanche @cite_8 , to name but a few. The key idea is that the file is divided into @math parts of equal size and that a given user may download any one of these (or, for network coding based systems such as Avalanche, linear combinations of these) either from the server or from a peer who has previously downloaded it. That is, the end users collaborate by forming a P2P network of peers, so they can download from one another as well as from the server. Our motivation for revisiting the broadcasting problem is the performance analysis of such systems.
|
{
"abstract": [
"The need to distribute large files across multiple wide-area sites is becoming increasingly common, for instance, in support of scientific computing, configuring distributed systems, distributing software updates such as open source ISOs or Windows patches, or disseminating multimedia content. Recently a number of techniques have been proposed for simultaneously retrieving portions of a file from multiple remote sites with the twin goals of filling the client's pipe and overcoming any performance bottlenecks between the client and any individual server. While there are a number of interesting tradeoffs in locating appropriate download sites in the face of dynamically changing network conditions, to date there has been no systematic evaluation of the merits of different protocols. This paper explores the design space of file distribution protocols and conducts a detailed performance evaluation of a number of competing systems running in both controlled emulation environments and live across the Internet. Based on our experience with these systems under a variety of conditions, we propose, implement and evaluate Bullet' (Bullet prime), a mesh based high bandwidth data dissemination system that outperforms previous techniques under both static and dynamic conditions.",
"We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks of information. The randomization introduced by the coding process eases the scheduling of block propagation, and, thus, makes the distribution more efficient. This is particularly important in large unstructured overlay networks, where the nodes need to make block forwarding decisions based on local information only. We compare network coding to other schemes that transmit unencoded information (i.e. blocks of the original file) and, also, to schemes in which only the source is allowed to generate and transmit encoded packets. We study the performance of network coding in heterogeneous networks with dynamic node arrival and departure patterns, clustered topologies, and when incentive mechanisms to discourage free-riding are in place. We demonstrate through simulations of scenarios of practical interest that the expected file download time improves by more than 20-30 with network coding compared to coding at the server only and, by more than 2-3 times compared to sending unencoded information. Moreover, we show that network coding improves the robustness of the system and is able to smoothly handle extreme situations where the server and nodes leave the system.",
"In tree-based multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are highly-available, dedicated infrastructure routers but it poses a problem for application-level multicast in peer-to-peer systems. SplitStream addresses this problem by striping the content across a forest of interior-node-disjoint multicast trees that distributes the forwarding load among all participating peers. For example, it is possible to construct efficient SplitStream forests in which each peer contributes only as much forwarding bandwidth as it receives. Furthermore, with appropriate content encodings, SplitStream is highly robust to failures because a node failure causes the loss of a single stripe on average. We present the design and implementation of SplitStream and show experimental results obtained on an Internet testbed and via large-scale network simulation. The results show that SplitStream distributes the forwarding load among all peers and can accommodate peers with different bandwidth capacities while imposing low overhead for forest construction and maintenance.",
"We present Slurpie: a peer-to-peer protocol for bulk data transfer. Slurpie is specifically designed to reduce client download times for large, popular files, and to reduce load on servers that serve these files. Slurpie employs a novel adaptive downloading strategy to increase client performance, and employs a randomized backoff strategy to precisely control load on the server. We describe a full implementation of the Slurpie protocol, and present results from both controlled local-area and wide-area testbeds. Our results show that Slurpie clients improve performance as the size of the network increases, and the server is completely insulated from large flash crowds entering the Slurpie network.",
"The BitTorrent file distribution system uses tit-fortat as a method of seeking pareto efficiency. It achieves a higher level of robustness and resource utilization than any currently known cooperative technique. We explain what BitTorrent does, and how economic methods are used to achieve that goal. 1 What BitTorrent Does When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to downloaders, (where it is often not even metered), thus making hosting a file with a potentially unlimited number of downloaders affordable. Researchers have attempted to find practical techniqes to do this before[3]. It has not been previously deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates. Peers rarely connect for more than a few hours, and frequently for only a few minutes [4]. Finally, there is a general problem of fairness [1]. The total download rate across all downloaders must, of mathematical necessity, be equal to the total upload rate. The strategy for allocating upload which seems most likely to make peers happy with their download rates is to make each peer’s download rate be proportional to their upload rate. In practice it’s very difficult to keep peer download rates from sometimes dropping to zero by chance, much less make upload and download rates be correlated. We will explain how BitTorrent solves all of these problems well. 1.1 BitTorrent Interface BitTorrent’s interface is almost the simplest possible. Users launch it by clicking on a hyperlink to the file they wish to download, and are given a standard “Save As” dialog, followed by a download progress dialog which is mostly notable for having an upload rate in addition to a download rate. This extreme ease of use has contributed greatly to BitTorrent’s adoption, and may even be more important than, although it certainly complements, the performance and cost redistribution features which are described in this paper."
],
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_32",
"@cite_24",
"@cite_15"
],
"mid": [
"1656271119",
"2107520978",
"2127494222",
"2135039403",
"239964209"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is 2M − 1 + ⌊log 2 (N + 1)⌋ for even N, and 2M + ⌊log 2 (N + 1)⌋ − ⌊(M − 1 + 2^⌊log 2 (N+1)⌋) / ((N + 1)/2)⌋ for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N , which takes M + ⌊log 2 N ⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log 2 N ⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log 2 N ⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
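For concreteness, the following sketch computes the M + ⌊log 2 N⌋ round bound and converts it to seconds under the assumption that, in the equal-capacity uplink-sharing model, a round amounts to uploading one part of size F/M at rate C; the concrete numbers in the example are ours, chosen to match the typical magnitudes mentioned later.

```python
# Sketch of the centralized lower bound: M + floor(log2 N) rounds, converted to
# seconds for equal upload capacities (assumes each round uploads one part of
# size F/M at rate C; example numbers are illustrative).
import math

def min_rounds(M: int, N: int) -> int:
    """Minimal number of rounds to spread M parts to N peers (equal capacities)."""
    return M + math.floor(math.log2(N))

def min_makespan_seconds(M: int, N: int, F_mb: float, C_mbps: float) -> float:
    return min_rounds(M, N) * (F_mb / M) / C_mbps

# Example: a 1 GB file in 4096 parts of 0.25 MB, 1000 peers, 0.5 MBps uplinks.
print(min_rounds(4096, 1000))                         # 4096 + 9 = 4105 rounds
print(min_makespan_seconds(4096, 1000, 1024.0, 0.5))  # 2052.5 seconds
```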
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/(M C S ) seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers, which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
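The fair-sharing assumption can be illustrated with a small calculation (the function names are ours): an uploader divides its capacity equally over its open connections, so n simultaneous downloads of one part from the server each complete after n/(M C S ) seconds.

```python
# Tiny numerical illustration of the fair-sharing assumption (names are ours).

def per_connection_rate(capacity_mbps: float, open_connections: int) -> float:
    return capacity_mbps / open_connections         # fair sharing of the uplink

def time_for_n_parallel_part_downloads(n: int, M: int, C_S: float) -> float:
    part_size = 1.0 / M                             # file size normalized to 1 MB
    return part_size / per_connection_rate(C_S, n)  # = n / (M * C_S)

print(time_for_n_parallel_part_downloads(n=4, M=400, C_S=1.0))  # 0.01 seconds
```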
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is the order of several thousands and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t_s and t_f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t_f − 1/(C_i M) and t_f. This consumes some amount of upload capacity that had been devoted to jobs in K between t_f − 1/(C_i M) and t_f. However, it releases an exactly equal amount of upload capacity between times t_s and t_f − 1/(C_i M) which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than t_s. Thus, we repeatedly apply this argument until the number of jobs in J that start at time t_s becomes 0, and then move along to the jobs in J that are now amongst the first in J to start, at time t′_s > t_s. Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the log₂ N scaling is due to the P2P approach. This compares favourably to the linear scaling in N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive broadcasting model with all upload capacities equal to 1, the minimal makespan for disseminating the M file parts to all N peers is

T* = 1 + ⌊log₂ N⌋/M.   (1)
Proof. Suppose that N = 2^n − 1 + x, for x = 1, . . . , 2^n. So n = ⌊log₂ N⌋. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round and each column to a peer. The entries denote the file part that the peer downloads in that round. Entries marked with an asterisk indicate downloads from the server; the remaining entries indicate downloads from a peer who has the corresponding part.
peer:     1    2    3
round 1:  1*   .    .
round 2:  .    2*   1
round 3:  2*   1    2
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
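Formula (1) and the corresponding round count are easy to evaluate; the small sketch below (our own helper, merely restating the expressions above) prints M + ⌊log₂ N⌋ rounds and the makespan for a few illustrative pairs (N, M).

import math

def optimal_rounds(N, M):
    """Minimal number of rounds in the simultaneous send/receive model: M + floor(log2 N)."""
    return M + int(math.floor(math.log2(N)))

def optimal_makespan(N, M):
    """Minimal makespan (1): each round uploads a part of size 1/M at unit capacity."""
    return 1 + math.floor(math.log2(N)) / M

for N, M in [(3, 2), (13, 4), (1000, 100)]:
    print(N, M, optimal_rounds(N, M), optimal_makespan(N, M))
# (3, 2) gives 3 rounds and makespan 1.5, matching the diagram above.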
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of 2^{n−1} peers have file part 1, a set of 2^{n−1} − 1 peers have file part 2, and a set of x peers have no file part (those denoted by * · · · *). Let us call these three sets A_1, A_2 and A_0, respectively.
[Diagram: the state after round n, in which the 2^{n−1} peers of A_1 hold part 1, the 2^{n−1} − 1 peers of A_2 hold part 2, and the x peers of A_0 (shown as * · · · *) hold no part.]
In round n + 1 we let peers in A_1 upload part 1 to 2^{n−1} − ⌊x/2⌋ peers in A_2 and to ⌊x/2⌋ peers in A_0 (if x = 1, to 2^{n−1} − 1 peers in A_2 and to 1 peer in A_0). Peers in A_2 upload part 2 to 2^{n−1} − ⌈x/2⌉ peers in A_1 and to another ⌈x/2⌉ − 1 peers in A_0. The server uploads part 2 to a member of A_0 (if x = 1, to a member of A_1). Thus, at the end of this round 2^n − x peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination.

Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds j = 2, . . . , min{n, M − 1}, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
[Diagram: the first n rounds in the case M ≤ n, at the end of which 2^n − 1 peers hold one file part each and x peers (shown as * · · · *) hold none.]
When round n ends, 2^n − 1 peers have one file part and x peers have no file part. The number of peers having file part i is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2).

Table 1: Numbers of copies of the file parts at the ends of rounds n, n + 1, . . . , n + M − 1.

part    end of round n   n + 1            n + 2            n + 3            ...   n + M − 1
1       2^{n−1}          2^n              N                N                ...   N
2       2^{n−2}          2^{n−1}          2^n              N                ...   N
3       2^{n−3}          2^{n−2}          2^{n−1}          2^n              ...   N
4       2^{n−4}          2^{n−3}          2^{n−2}          2^{n−1}          ...   N
...
M − 2   2^{n−M+2}        2^{n−M+3}        2^{n−M+4}        2^{n−M+5}        ...   N
M − 1   2^{n−M+1}        2^{n−M+2}        2^{n−M+3}        2^{n−M+4}        ...   2^n
M       2^{n−M+1} − 1    2^{n−M+2} − 1    2^{n−M+3} − 1    2^{n−M+4} − 1    ...   2^n − 1

Table 2: Partition of the peers at the end of round n + 1.

set     peers in the set have                      number of peers in set
B_12    parts 1 and 2                              2^{n−1} − ⌊x/2⌋
B_1p    part 1 and a part other than 1 or 2        2^{n−1} − ⌈x/2⌉
B_1     just part 1                                x
B_2     just part 2                                ⌊x/2⌋
B_p     just a part other than 1 or 2              ⌈x/2⌉ − 1

Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that the peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to the peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles. As many different peers as possible obtain file parts early on, so that they can start uploading themselves and the maximal possible upload capacity is used. Moreover, there is a certain balance in the upload of different file parts, so that no part gets circulated too late.
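For M = 1 the schedule above reduces to plain doubling: in every round each holder of the file (the server included) uploads it to a distinct peer that does not yet have it. The following sketch (our own code, not the authors') simulates this and confirms the 1 + ⌊log₂ N⌋ round count.

import math

def doubling_rounds(N):
    """Simulate M = 1 dissemination: each round every holder uploads to one peer still lacking the file."""
    holders, missing, rounds = 1, N, 0          # initially only the server holds the file
    while missing > 0:
        uploads = min(holders, missing)         # each holder serves at most one new peer per round
        holders += uploads
        missing -= uploads
        rounds += 1
    return rounds

assert all(doubling_rounds(N) == 1 + int(math.floor(math.log2(N))) for N in range(1, 5000))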
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each k = n + 2, . . . , n + M − 1, only x − 1 of the x peers (in set B_1) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x fewer uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using 2^n − x fewer uploads than we might in this round. Since 2^n − x ≤ N + M − 2x, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
(N + M − 2x) − (2^n − x) = M − 1,
so, in terms of the total number of uploads, such a scheduling could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the 2^n uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and N = 13 = 2^3 + 6 − 1, so M + ⌊log₂ N⌋ = 7.
If we follow the same schedule as in Theorem 1, we reach after round 3,
1 2 1 3 1 2 1 · · · · · ·
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange so that, once this is done and uploads are made for round 5, the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When x = 2^n our schedule can be realized using matchings between the 2^n peers holding the part that is to be completed next and the server together with the 2^n − 1 peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs as N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem, and with application of Lemmas 1 and 2, the optimal solution of the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model with all upload capacities equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraint that no peer downloads more than a single file part at a time. Thus, we also have the following result: in the uplink-sharing model with all upload capacities equal to 1, imposing download capacity constraints of at least 1 does not increase the minimal makespan, which is still given by (1).
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we use the fact that time can be discretized suitably: it is enough to consider schedules in which uploads start and finish at integer multiples of a suitable interval length τ. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, . . . . Identify the server as peer 0. Let x_ijk(t) be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let p_ik(t) denote the proportion of file part k that peer i has downloaded by time t. Our problem is then to find the minimal T such that the optimal value of the following MILP is MN. Since this T is certainly greater than 1/C_S and less than N/C_S, we can search for its value by a simple bisection search, solving this LP for various T:
maximize Σ_{i,k} p_ik(T)   (2)
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints

x_ijk(t) ∈ {0, 1}   for all i, j, k, t   (3)
p_ik(t) ∈ [0, 1]   for all i, k, t   (4)

Link constraints between variables

p_ik(t) = M τ Σ_{t′=0}^{t−τ} Σ_{j=0}^{N} x_ijk(t′) C_j   for all i, k, t   (5)

Essential constraints

x_ijk(t) − ξ_jk(t) ≤ 0   for all i, j, k, t   (Source availability constraint)   (6)
Σ_{i,k} x_ijk(t) ≤ 1   for all j, t   (Connection constraint)   (7)
x_ijk(t) − ξ_ik(t + 1) − x_ijk(t + 1) ≤ 0   for all i, j, k, t   (Continuity constraint)   (8)
x_ijk(t) + ξ_ik(t) ≤ 1   for all i, j, k, t   (Stopping constraint)   (9)
Σ_j x_ijk(t) ≤ 1   for all i, k, t   (Exclusivity constraint)   (10)

Initial conditions

p_0k(0) = 1   for all k   (11)
p_ik(0) = 0   for all i ≠ 0 and all k   (12)
Constraints (8), (9) and (6) have been linearized. Background can be found in [34]. For this, we used the auxiliary variable ξ_ik(t) = 1{p_ik(t) = 1}. This definition can be expressed through the following linear constraints.
Linearization constraints
ξ_ik(t) ∈ {0, 1}   for all i, k, t   (13)
p_ik(t) − ξ_ik(t) ≥ 0 and p_ik(t) − ξ_ik(t) < 1   for all i, k, t   (14)
It can be checked that together with (8), (9) and (6), indeed, this gives

x_ijk(t) = 1 and p_ik(t + 1) < 1 ⟹ x_ijk(t + 1) = 1   for all i, j, k, t   (15)
p_ik(t) = 1 ⟹ x_ijk(t) = 0   for all i, j, k, t   (16)
p_jk(t) < 1 ⟹ x_ijk(t) = 0   for all i, j, k, t   (17)
that is, continuity, stopping and source availability constraints respectively.
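To make the formulation concrete, the sketch below sets up the feasibility MILP for one fixed horizon T with the open-source PuLP modelling library; the outer bisection over T described above would be wrapped around this function. The library choice (the paper itself mentions commercial codes such as OSL and CPLEX), all identifier names, and the small EPS used in place of the strict inequality in (14) are our own assumptions, and the model is only practical for very small N, M and number of intervals.

# Sketch only: sets up the MILP (2)-(14) for a fixed horizon T using PuLP and CBC.
import pulp

def all_parts_delivered_by(C, M, tau, T, EPS=1e-4):
    N = len(C) - 1                 # C[0] is the server capacity, C[1..N] the peers'
    S = int(round(T / tau))        # number of intervals [t*tau, (t+1)*tau)
    peers, users, parts, steps = range(1, N + 1), range(N + 1), range(1, M + 1), range(S)

    prob = pulp.LpProblem("p2p_dissemination", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (peers, users, parts, steps), cat="Binary")
    p = pulp.LpVariable.dicts("p", (users, parts, range(S + 1)), lowBound=0, upBound=1)
    xi = pulp.LpVariable.dicts("xi", (users, parts, range(S + 1)), cat="Binary")

    prob += pulp.lpSum(p[i][k][S] for i in peers for k in parts)         # objective (2)

    for k in parts:                                                      # initial conditions (11)-(12)
        for t in range(S + 1):
            prob += p[0][k][t] == 1                                      # the server holds every part
        for i in peers:
            prob += p[i][k][0] == 0

    for i in peers:
        for k in parts:
            for t in steps:                                              # link constraint (5), written incrementally
                prob += p[i][k][t + 1] == p[i][k][t] + M * tau * pulp.lpSum(
                    x[i][j][k][t] * C[j] for j in users if j != i)
                prob += pulp.lpSum(x[i][j][k][t] for j in users if j != i) <= 1   # exclusivity (10)
    for j in users:
        for t in steps:                                                  # connection constraint (7)
            prob += pulp.lpSum(x[i][j][k][t] for i in peers if i != j for k in parts) <= 1
    for i in peers:
        for j in users:
            if j == i:
                continue
            for k in parts:
                for t in steps:
                    prob += x[i][j][k][t] <= xi[j][k][t]                 # source availability (6)
                    prob += x[i][j][k][t] + xi[i][k][t] <= 1             # stopping (9)
                    if t + 1 < S:
                        prob += x[i][j][k][t] - xi[i][k][t + 1] - x[i][j][k][t + 1] <= 0  # continuity (8)
    for i in users:
        for k in parts:
            for t in range(S + 1):                                       # linearization (13)-(14), with EPS for "< 1"
                prob += p[i][k][t] - xi[i][k][t] >= 0
                prob += p[i][k][t] - xi[i][k][t] <= 1 - EPS

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective) >= M * N - 1e-6                    # is the optimal value MN?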
Implementation of the MILP
MILPs are well-understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. However, as the numbers of variables and constraints in the LP grow exponentially in N and M, this approach is not practical for large N and M.
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ , then the job end and start times are not guaranteed to lie at integer multiples of τ . However, if we imagine that each job does take until the end of an τ -length interval to finish (rather than finishing before the end), then we will overestimate the time that each job takes by at most τ . Since there are N M jobs in total, we overestimate the total time taken by at most N M τ . Thus, the approximation gives us an upper bound on the time taken and is at most N M τ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M .
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C 1 is small compared to C S , this is the optimal strategy. Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less. Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer. Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C 1 is large compared to C S , this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: The minimal makespan in each of the four cases.

case   makespan
A      2/C_S
B      1/(2C_S) + 1/(2C_1) + max{1/(2C_S), 1/(2C_1)}
C      1/(2C_S) + max{1/C_S, 1/(2C_1)}
D      1/C_S + 1/(2C_1)

The optimal strategy arises from A, C or D as C_1/C_S lies in the intervals [0, 1/3], [1/3, 1] or [1, ∞) respectively. In [1, ∞), B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
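The case analysis is easy to reproduce numerically; the sketch below (our own code, simply transcribing Table 3) evaluates the four makespans for a given ratio C_1/C_S and reports which cases attain the minimum.

def two_peer_two_part_makespans(C_S, C_1):
    """Makespans of cases A-D from Table 3 for N = M = 2."""
    return {
        "A": 2 / C_S,
        "B": 1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
        "C": 1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
        "D": 1 / C_S + 1 / (2 * C_1),
    }

for ratio in (0.2, 0.5, 2.0):                  # C_1/C_S below 1/3, in [1/3, 1], above 1
    T = two_peer_two_part_makespans(1.0, ratio)
    best = min(T.values())
    print(ratio, [c for c, v in T.items() if abs(v - best) < 1e-12], best)
# Optimal cases: A, then C, then B and D (which coincide once C_1 >= C_S), as stated above.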
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: C_1/(N − 1) ≤ C_S/N:
(1 − α)N/C_S = α/C_1 ⟹ α = N C_1/(C_S + N C_1) ⟹ T = N/(C_S + N C_1).   (18)

Case 2: C_1/(N − 1) ≥ C_S/N:
(1 − α)N/C_S = α N/((N − 1) C_S) ⟹ α = (N − 1)/N ⟹ T = 1/C_S.   (19)
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
T* = max{ 1/C_S, N/(C_S + N C_1) }.   (20)

Figure 2 shows the minimal makespan when the file is split into 1, 2 and infinitely many file parts, for N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
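The comparison behind Figure 2 can be reproduced by combining the expressions derived so far; the helper below is our own summary of the M = 1 result from the previous subsection, the M = 2 minimum over cases A-D of Table 3, and the fluid limit (20).

def makespan_N2(C_S, C_1):
    """Minimal makespan for N = 2 when the file is split into 1, 2 or infinitely many parts."""
    m1 = 1 / C_S + min(1 / C_S, 1 / C_1)                         # M = 1 (Section 4.3)
    m2 = min(2 / C_S,                                            # M = 2: best of cases A-D (Table 3)
             1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
             1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
             1 / C_S + 1 / (2 * C_1))
    minf = max(1 / C_S, 2 / (C_S + 2 * C_1))                     # M -> infinity: formula (20) with N = 2
    return m1, m2, minf

print(makespan_N2(1.0, 1.0))   # (2.0, 1.5, 1.0): the makespan falls as M grows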
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let F = Σ_{i=1}^{N} F_i and C = Σ_{i=1}^{N} C_i.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
T* = max{ F_1/C_1, F_2/C_2, . . . , F_N/C_N, (N − 1)F/C }   (21)
and this can be achieved with a two-hop strategy, i.e., one in which users i's file is uploaded to user j, either directly from user i, or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right hand side of (21) is a lower bound on the makespan. Each user has to upload his file to at least one other user, which takes time at least F_i/C_i. Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least T*, and it remains to be shown that a makespan of T* can be achieved. There are two cases to consider.
Case 1: (N − 1)F/C ≥ max i F i /C i for all i.
In this case, T* = (N − 1)F/C. Let us consider the 2-hop strategy in which each user i uploads a fraction α_ii of its file F_i directly to all N − 1 peers, simultaneously and at equal rates. Moreover, he uploads a fraction α_ij to peer j, who in turn then uploads it to the remaining N − 2 peers, again simultaneously and at equal rates. Note that Σ_{j=1}^{N} α_ij = 1. Explicitly constructing a suitable set of α_ij, we thus obtain the problem

min T   (22)

subject to, for all i,

(1/C_i) [ α_ii F_i (N − 1) + Σ_{k≠i} α_ik F_i + (N − 2) Σ_{k≠i} α_ki F_k ] ≤ T.   (23)
We minimize T by choosing the α ij in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j we obtain
C [ α_ii F_i (N − 2) + F_i + (N − 2) Σ_{k≠i} α_ki F_k ] = C_i [ (N − 2) Σ_j α_jj F_j + F + (N − 2)(F − Σ_j α_jj F_j) ] = (N − 1) C_i F.   (24)
Thus,
α_ii F_i (N − 2) + F_i + (N − 2) Σ_{k≠i} α_ki F_k = (N − 1)(C_i/C) F.   (25)
Note that there is a lot of freedom in the choice of the α, so let us specify that we require α_ki to be constant in k for k ≠ i, that is, α_ki = α*_i for k ≠ i. This means that if i has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user i splits excess capacity equally between its peers. Thus,
α_ii F_i (N − 2) + F_i + α*_i (N − 2)(F − F_i) = (N − 1)(C_i/C) F.   (26)
Still, we have twice as many variables as constraints. Let us also specify that α * i = α ii for all i. Similarly as above, this says that the proportion of its own file F i that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
α*_i = [ (N − 1)(C_i/C)F − F_i ] / [ (N − 2)F ] = (N − 1)C_i/((N − 2)C) − F_i/((N − 2)F),   (27)
where Σ_i α*_i = 1 and α*_i ≥ 0, because in Case 1 F_i/C_i ≤ (N − 1)F/C. With these α_ij, we obtain the time for i to complete its upload, and hence the time for everyone to complete their upload, as
T = (1/C_i) [ α*_i F_i (N − 2) + F_i + (N − 2) Σ_{k≠i} α*_i F_k ]
  = (N − 1)F_i/C − F_i²/(C_i F) + F_i/C_i + (N − 1)(F − F_i)/C − F_i(F − F_i)/(C_i F)
  = (N − 1)F/C.   (28)
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, then this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
C′_1 = C_1,   (29)
C′_i ≤ C_i for i ≠ 1,   (30)
(N − 1)F/C′ = F_1/C′_1 = F_1/C_1, and   (31)
F_i/C′_i ≤ F_1/C_1.   (32)
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
C′_i = (N − 1)(C_1/F_1) γ_i F_i   (33)
with constants γ i ≥ 0 such that
Σ_i γ_i F_i = F.   (34)
Then (N − 1)F/C ′ = F 1 /C 1 , that is (31) holds. Moreover, choosing
γ_i ≤ (1/(N − 1)) (C_i/F_i)(F_1/C_1)   (35)
ensures C ′ i ≤ C i , i.e. property (30) and choosing
γ_i ≥ 1/(N − 1)   (36)
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all γ_i equal to the lower bound (36) gives Σ_i γ_i F_i = F/(N − 1), that is too small to satisfy (34). Putting all equal to the upper bound (35) gives Σ_i γ_i F_i = F_1 C/((N − 1)C_1), that is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
γ_i = (1/(N − 1)) [ δ (C_i/F_i)(F_1/C_1) + (1 − δ) ]   (37)
such that

δ (C/(N − 1))(F_1/C_1) + (1 − δ) F/(N − 1) = F,   (38)

that is,

δ = (N − 2)F C_1 / (F_1 C − F C_1).   (39)
Substituting back in we obtain
γ_i = (1/(N − 1)) [ (N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1 ] / [ (F_1 C − F C_1) F_i ]   (40)
and thus
C′_i = (C_1/F_1) [ (N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1 ] / (F_1 C − F C_1).   (41)
By construction, these C′_i satisfy properties (29)-(32) and hence, by the results in Case 1, T′ = F_1/C_1. Hence the minimal makespan in the original problem is T = F_1/C_1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
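As a sanity check on Theorem 4 and on the Case 1 construction, the sketch below (our own code, with invented file sizes and capacities) evaluates the bound (21), computes the allocation α*_i of (27), and confirms that every user's upload then finishes at exactly (N − 1)F/C.

def fluid_makespan(F, C):
    """Bound (21): max over the individual bounds F_i/C_i and the aggregate bound (N-1)F/C."""
    N, Ftot, Ctot = len(F), sum(F), sum(C)
    return max([f / c for f, c in zip(F, C)] + [(N - 1) * Ftot / Ctot])

def case1_upload_times(F, C):
    """Completion times of the two-hop schedule of Case 1, using alpha*_i from (27); needs N >= 3."""
    N, Ftot, Ctot = len(F), sum(F), sum(C)
    alpha = [(N - 1) * c / ((N - 2) * Ctot) - f / ((N - 2) * Ftot) for f, c in zip(F, C)]
    return [(a * f * (N - 2) + f + a * (N - 2) * (Ftot - f)) / c    # bracket of (28) divided by C_i
            for a, f, c in zip(alpha, F, C)]

F = [1.0, 2.0, 0.5, 0.0]        # user 4 has nothing to disseminate (example data)
C = [1.0, 3.0, 1.0, 2.0]
print(fluid_makespan(F, C))     # 1.5 = (N-1)F/C for these numbers
print(case1_upload_times(F, C)) # all equal to 1.5 when the aggregate bound dominates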
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M , and the address of the server. However, a peer might not know N , the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
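The round dynamics just described are straightforward to simulate; the sketch below (our own code, for M = 1 only) runs one dissemination under either information scenario and returns the number of rounds used. The averages it prints can be compared with the fitted regressions reported in the Simulation subsection below.

import random

def disseminate_m1(N, scenario="NoList", rng=random):
    """Simulate single-file (M = 1) dissemination; user 0 is the server.
    List: each downloader picks uniformly among the current holders.
    NoList: each downloader picks uniformly among all other users, holder or not."""
    has_file = [True] + [False] * N
    rounds = 0
    while not all(has_file):
        rounds += 1
        holders = [u for u in range(N + 1) if has_file[u]]      # snapshot at the start of the round
        requests = {}                                           # uploader -> requesting peers
        for peer in range(1, N + 1):
            if has_file[peer]:
                continue
            pool = holders if scenario == "List" else [u for u in range(N + 1) if u != peer]
            requests.setdefault(rng.choice(pool), []).append(peer)
        for uploader, askers in requests.items():
            if uploader in holders:                             # a request to a non-holder fails
                has_file[rng.choice(askers)] = True             # one upload per uploader per round
    return rounds

print(sum(disseminate_m1(1024, "List") for _ in range(100)) / 100)
print(sum(disseminate_m1(1024, "NoList") for _ in range(100)) / 100)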
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n 1 peers (including the server) have the file and n 0 peers do not, with n 0 +n 1 = N +1. (The base case is n 1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that n_1 ≤ n_0. Let i be the server or a peer who has the file and let I_i be an indicator random variable that is 0 or 1 as i does or does not upload it. Let Y = Σ_i I_i, where the sum is taken over all n_1 peers who have the file. Thus n_1 − Y is the number of uploads that take place. Then
E I_i = (1 − 1/N)^{n_0} ≤ (1 − 1/(2n_0))^{n_0} ≤ 1/√e.   (42)
Now since E(Σ_i I_i) = Σ_i E I_i, we have EY ≤ n_1/√e. Thus, by the Markov inequality (which states that for a nonnegative random variable Y and any k, not necessarily an integer, P(Y ≥ k) ≤ (1/k)EY), taking k = (2/3)n_1 we have
P( n_1 − Y ≡ number of uploads ≤ (1/3)n_1 ) = P( Y ≥ (2/3)n_1 ) ≤ (n_1/√e) / ((2/3)n_1) = 3/(2√e) < 1.   (43)
Thus the expected number of steps required for the number of peers who have the file to increase from n_1 to at least n_1 + (1/3)n_1 = (4/3)n_1 is bounded by a geometric random variable with mean µ = 1/(1 − 3/(2√e)). This implies that we will reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n_1 > n_0. Let j be a peer who does not have the file and let J_j be an indicator random variable that is 0 or 1 as peer j does or does not succeed in downloading it. Let Z = Σ_j J_j, where the sum is taken over all n_0 peers who do not have the file. Suppose X is the number of the other n_0 − 1 peers that try to download from the same place as does peer j. Then
P(J_j = 0) = E[ (n_1/N) · 1/(1 + X) ] ≥ E[ (n_1/N)(1 − X) ] = (n_1/N)(1 − (n_0 − 1)/N) = (n_1/N)(1 − (N − n_1)/N) = n_1²/N² ≥ 1/4.   (44)
Hence EZ ≤ (3/4)n 0 and so, again using the Markov inequality,
P( n_0 − Z ≡ number of downloads ≤ (1/8)n_0 ) = P( Z ≥ (7/8)n_0 ) ≤ ((3/4)n_0) / ((7/8)n_0) = 6/7.   (45)
It follows that the number of peers who do not yet have the file decreases from n_0 to no more than (7/8)n_0 in an expected number of steps no more than µ′ = 1/(1 − 6/7) = 7. Thus the number of steps needed for the number of peers without the file to decrease from n_0 to 0 is O(log n_0) = O(log N). In fact, this is a weak upper bound. By more complicated arguments we can show that if n_0 = aN, where a ≤ 1/2, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N). For a > 1/2 the expected time remains Θ(log N).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, N = 2, 4, . . . , 2^25. We found that the achieved expected makespan appears to grow as a + b × log₂ N. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
y_ij = α + β x_i + ε_ij,   (46)
where y_ij is the makespan for x_i = log₂ 2^i, obtained in run j, j = 1, . . . , 1000. Indeed, the model fits the data very well in both scenarios. We obtain the following results that enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
1.1392 + 1.1021 × log₂ N.   (47)
For NoList, there is more variation in the data than for List but, again, the linear regression gives a good fit, with a Multiple R-squared of 0.9864 and significant p- and t-values. The makespan increases as 1.7561 + 1.5755 × log₂ N.
As expected, the additional information in List leads to a significantly smaller makespan when compared to NoList; in particular, the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log₂ N⌋ of Section 3: it is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases and very well when state information is available, suggesting that our bounds are useful in practice.
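The regression step itself is a one-line least-squares fit; the sketch below uses numpy.polyfit (our own tooling choice; the paper does not say which statistics package was used) on stand-in data of the form produced by a simulation such as the one sketched earlier.

import math
import numpy as np

# makespans[i][j] = makespan of run j for N = 2**(i+1); filled here with placeholder values only.
Ns = [2 ** i for i in range(1, 16)]
makespans = [[1.14 + 1.10 * math.log2(N) for _ in range(10)] for N in Ns]

x = np.repeat([math.log2(N) for N in Ns], [len(runs) for runs in makespans])
y = np.concatenate(makespans)
slope, intercept = np.polyfit(x, y, 1)      # least-squares fit of y = intercept + slope * log2(N)
print(intercept, slope)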
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov Chain on the state space {0, 1, 2, . . . , N}, where state i corresponds to i of the N peers having the file. We can calculate the transition probabilities p_ij. In the NoList case, for example, following the Occupancy Distribution (e.g., [18]), we obtain
p_{i,i+m} = Σ_{j=i−m}^{i} (−1)^{j−i+m} [ i! / ((i − j)! (i − m)! (j − i + m)!) ] ((N − 1 − j)/(N − 1))^{N−i}.   (49)
Hence we can successively compute the expected hitting times k(i) of state N starting from state i via
k(i) = ( 1 + Σ_{j>i} k(j) p_ij ) / (1 − p_ii).   (50)
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov Chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeatedly enhanced efficiency. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference to the simulated values is small without any apparent trend. It can also be checked by computing the standard deviation that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is for N = 128 for NoList where it is just outside by approximately 0.0016.
Thus, the computations prove our simulation results accurate. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
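The hitting-time recursion (50) is easy to apply once the transition probabilities are available. In the sketch below (our own code) the NoList transition row is estimated by Monte Carlo simulation of the round dynamics described in the proof of Theorem 5, rather than by evaluating (49) directly; this keeps the example short at the price of some sampling error.

import random

def nolist_transition_row(N, i, samples=20000, rng=random):
    """Estimate p_{i,i+m}: from state i (i peers plus the server hold the file),
    each of the N - i remaining peers picks a target uniformly among its N other users."""
    counts = [0] * (N + 1)
    for _ in range(samples):
        chosen_holders = set()
        for _peer in range(N - i):                 # the peers still missing the file
            target = rng.randrange(N)              # uniform over the N other users
            if target <= i:                        # targets 0..i stand for the current holders
                chosen_holders.add(target)
        counts[i + len(chosen_holders)] += 1       # each chosen holder serves exactly one new peer
    return [c / samples for c in counts]

def expected_rounds(N, samples=20000):
    """Expected makespan via the hitting-time recursion (50), solved backwards from state N."""
    k = [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        p = nolist_transition_row(N, i, samples)
        k[i] = (1 + sum(k[j] * p[j] for j in range(i + 1, N + 1))) / (1 - p[i])
    return k[0]

print(expected_rounds(16))    # compare with the exact NoList values referred to in the text (Table 5)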
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each N = 2, 4, . . . , 2^15. For each value of M = 1–5, 8, 10, 15, 20, 50 we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but to compare the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression 0.7856 + 1.1520 × log₂ N for M = 1 does not quite agree with 1.1392 + 1.1021 × log₂ N found in (47). It can be checked, by repeating the analysis there for N = 2, 4, . . . , 2^15, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to 2^25.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario, even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M)⌊log₂ N⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we found for a centralized controller in Section 3, confirming that our performance bounds are useful in practice. This is also confirmed by initial results of current work on the performance evaluation of the Bullet' system [20].
The program code for the simulations as well as the computations and the diagnostic plots used in this section are available on request and will be made available via the Internet.
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on and a performance benchmark for P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound confirming their practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. It is current work to compare our bounds to the makespan obtained by Bullet' [20]. Initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used practically everywhere except for the Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP) for real-time applications. For further details see [35]. Still, with an overhead cost it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proof of Lemma 1 and Lemma 2 we have used fair sharing and continuity assumptions. It would be of interest to investigate whether one of them or both can be relaxed.

[Figure: the log-term coefficients of Table 6 for the decentralized List scenario (solid) and the idealized centralized scenario (dashed).]
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort. Or, to users who might want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when up-and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate C i and simultaneously download at rate D i .
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time t_ij. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if C_j is at least N + 1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different so that greedy algorithms, induction on nodes and Dynamic Programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's (N + 1)^{N−1} labelled trees [6] together with the obvious O(N) algorithm for the optimal scheduling given a tree is an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
With the BitTorrent protocol http: bitconjurer.org BitTorrent protocol.html , for example, when the load on the server is heavy, the protocol delegates most of the uploading burden to the users who have already downloaded parts of the file, and who can start uploading those parts to their peers. File parts are typically @math megabyte (MB) in size. An application helps downloading peers to find each other by supplying lists of contact information about randomly selected peers also downloading the file. Peers use this information to connect to a number of neighbours. A full description can be found in @cite_15 . The BitTorrent protocol has been implemented successfully and is deployed widely. A detailed measurement study of the BitTorrent system is reported in @cite_19 . According to @cite_23 , BitTorrent's share of the total P2P traffic has reached 53 . For recent measurements of the total P2P traffic on Internet backbones see @cite_5 .
|
{
"abstract": [
"",
"The BitTorrent file distribution system uses tit-fortat as a method of seeking pareto efficiency. It achieves a higher level of robustness and resource utilization than any currently known cooperative technique. We explain what BitTorrent does, and how economic methods are used to achieve that goal. 1 What BitTorrent Does When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to downloaders, (where it is often not even metered), thus making hosting a file with a potentially unlimited number of downloaders affordable. Researchers have attempted to find practical techniqes to do this before[3]. It has not been previously deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates. Peers rarely connect for more than a few hours, and frequently for only a few minutes [4]. Finally, there is a general problem of fairness [1]. The total download rate across all downloaders must, of mathematical necessity, be equal to the total upload rate. The strategy for allocating upload which seems most likely to make peers happy with their download rates is to make each peer’s download rate be proportional to their upload rate. In practice it’s very difficult to keep peer download rates from sometimes dropping to zero by chance, much less make upload and download rates be correlated. We will explain how BitTorrent solves all of these problems well. 1.1 BitTorrent Interface BitTorrent’s interface is almost the simplest possible. Users launch it by clicking on a hyperlink to the file they wish to download, and are given a standard “Save As” dialog, followed by a download progress dialog which is mostly notable for having an upload rate in addition to a download rate. This extreme ease of use has contributed greatly to BitTorrent’s adoption, and may even be more important than, although it certainly complements, the performance and cost redistribution features which are described in this paper.",
"Keywords: peer-to-peer ; content distribution ; performance ; analysis Reference LCA-CONF-2006-009 URL: http: drops.dagstuhl.de portals index.php?semnr=04201 Record created on 2006-05-18, modified on 2017-05-12",
"Recent reports in the popular media suggest a significant decrease in peer-to-peer (P2P) file-sharing traffic, attributed to the public's response to legal threats. Have we reached the end of the P2P revolution? In pursuit of legitimate data to verify this hypothesis, in this paper, we embark on a more accurate measurement effort of P2P traffic at the link level. In contrast to previous efforts, we introduce two novel elements in our methodology. First, we measure traffic of all known popular P2P protocols. Second, we go beyond the \"known port\" limitation by reverse engineering the protocols and identifying characteristic strings in the payload. We find that, if measured accurately, P2P traffic has never declined; indeed we have never seen the proportion of P2P traffic decrease over time (any change is an increase) in any of our data sources."
],
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_23",
"@cite_5"
],
"mid": [
"",
"239964209",
"2248729722",
"1559810148"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is 2M − 1 + ⌊log₂(N + 1)⌋ for even N, and 2M + ⌊log₂(N + 1)⌋ − ⌊(M − 1 + 2^{⌊log₂(N+1)⌋}) / ((N + 1)/2)⌋ for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N, which takes M + ⌊log₂ N⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log₂ N⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log₂ N⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/M C S seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
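To make the fair-sharing timing rule concrete, here is a small illustrative helper (not from the paper; the function name and example numbers are ours) that computes how long n simultaneous downloads of one part take from a single uploader whose capacity is shared equally:

```python
def shared_download_time(n: int, capacity_mbps: float, num_parts: int) -> float:
    """Time (seconds) for n simultaneous downloads of one part of size 1/num_parts MB
    from a single uploader whose upload capacity is shared equally (fair sharing)."""
    part_size_mb = 1.0 / num_parts           # the file is normalized to 1 MB
    rate_per_download = capacity_mbps / n    # equal sharing of the upload capacity
    return part_size_mb / rate_per_download  # equals n / (num_parts * capacity_mbps)

# Example: 4 peers each pulling one of M = 8 parts from a server with C_S = 2 MBps
print(shared_download_time(4, 2.0, 8))  # 0.25 seconds
```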
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is the order of several thousands and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t s and t f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t f − 1/C i M and t f . This consumes some amount of upload capacity that had been devoted to jobs in K between t f − 1/C i M and t f . However, it releases an exactly equal amount of upload capacity between times t s and t f − 1/C i M which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than t s . Thus, we repeatedly apply this argument until the number of jobs in J that start at time t s becomes 0 and then move along to jobs in J that are now amongst the first in j to start at time t ′ s > t s . Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the log 2 N -scaling is due to the P2P approach. This compares favourably to the linear scaling of N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive model with all upload capacities equal to 1, the file can be disseminated in $M + \lfloor \log_2 N \rfloor$ rounds and no fewer, so the minimal makespan is
$$T^* = 1 + \frac{\lfloor \log_2 N \rfloor}{M}. \qquad (1)$$
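Before the proof, a quick numeric illustration of (1); this sketch (not part of the original argument) simply evaluates the round count M + ⌊log_2 N⌋ and the resulting makespan:

```python
import math

def minimal_makespan(N: int, M: int) -> float:
    """Minimal makespan (1) in the simultaneous send/receive model with unit capacities:
    M + floor(log2 N) rounds, each lasting 1/M."""
    rounds = M + math.floor(math.log2(N))
    return rounds / M  # equals 1 + floor(log2 N) / M

for N in (3, 1000, 10**6):
    print(N, minimal_makespan(N, M=100))
```

With M = 100 parts the makespan stays close to 1 even for a million peers, which is the (1/M)-scaled log_2 N behaviour noted above.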
Proof. Suppose that $N = 2^n - 1 + x$, for $x = 1, \ldots, 2^n$, so that $n = \lfloor \log_2 N \rfloor$. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round and each column to a peer. The entries denote the file part that the peer downloads that round; in each round one of these downloads comes from the server and the remaining downloads come from peers who already have the corresponding part.
Round 1:  1  -  -
Round 2:  -  2  1
Round 3:  2  1  2
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of $2^{n-1}$ peers have file part 1, a set of $2^{n-1} - 1$ peers have file part 2, and a set of x peers have no file part (those denoted by * · · · *). Let us call these three sets A_1, A_2 and A_0, respectively.
[Diagram: rounds 1 to n, ending in the state just described: A_1 ($2^{n-1}$ peers holding part 1), A_2 ($2^{n-1} - 1$ peers holding part 2) and A_0 (x peers, shown as * · · · *, holding nothing).]
In round n + 1 we let peers in A_1 upload part 1 to $2^{n-1} - \lfloor x/2 \rfloor$ peers in A_2 and to ⌊x/2⌋ peers in A_0 (if x = 1, to $2^{n-1} - 1$ peers in A_2 and to 1 peer in A_0). Peers in A_2 upload part 2 to $2^{n-1} - \lceil x/2 \rceil$ peers in A_1 and to another ⌈x/2⌉ − 1 peers in A_0. The server uploads part 2 to a member of A_0 (if x = 1, to a member of A_1). Thus, at the end of this round $2^n - x$ peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination. Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds j = 2, . . . , min{n, M − 1}, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
[Diagram: the first n rounds in the case M ≤ n; after round n the numbers of peers holding parts 1, . . . , M are as in the second column of Table 1, and x peers (shown as * · · · *) hold nothing.]
When round n ends, $2^n - 1$ peers have one file part and x peers have no file part. The number of peers having file part i is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2). Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles. As many different peers as possible obtain file parts early on so that they can start uploading themselves and the maximal possible upload capacity is used. Moreover, there is a certain balance in the upload of different file parts so that no part gets circulated too late.

Table 1: Numbers of peers holding each file part at the ends of rounds n, n + 1, ..., n + M − 1.

part | n | n+1 | n+2 | n+3 | ... | n+M−1
1 | $2^{n-1}$ | $2^n$ | N | N | ... | N
2 | $2^{n-2}$ | $2^{n-1}$ | $2^n$ | N | ... | N
3 | $2^{n-3}$ | $2^{n-2}$ | $2^{n-1}$ | $2^n$ | ... | N
4 | $2^{n-4}$ | $2^{n-3}$ | $2^{n-2}$ | $2^{n-1}$ | ... | N
... | ... | ... | ... | ... | ... | ...
M−2 | $2^{n-M+2}$ | $2^{n-M+3}$ | $2^{n-M+4}$ | $2^{n-M+5}$ | ... | N
M−1 | $2^{n-M+1}$ | $2^{n-M+2}$ | $2^{n-M+3}$ | $2^{n-M+4}$ | ... | $2^n$
M | $2^{n-M+1} - 1$ | $2^{n-M+2} - 1$ | $2^{n-M+3} - 1$ | $2^{n-M+4} - 1$ | ... | $2^n - 1$

Table 2: The sets into which the peers are divided at the end of round n + 1.

set | peers in the set have | number of peers in set
$B_{12}$ | parts 1 and 2 | $2^{n-1} - \lfloor x/2 \rfloor$
$B_{1p}$ | part 1 and a part other than 1 or 2 | $2^{n-1} - \lceil x/2 \rceil$
$B_1$ | just part 1 | $x$
$B_2$ | just part 2 | $\lfloor x/2 \rfloor$
$B_p$ | just a part other than 1 or 2 | $\lceil x/2 \rceil - 1$
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each k = n + 2, . . . , n + M − 1, only x − 1 of the x peers (in set B 1 ) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x less uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using $2^n - x$ fewer uploads than we might in this round. Since $2^n - x \le N + M - 2x$, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
$$(N + M - 2x) - (2^n - x) = M - 1,$$
so, in terms of the total number of uploads, such a scheduling could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the 2 n uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and $N = 13 = 2^3 + 6 - 1$, so $M + \lfloor \log_2 N \rfloor = 7$.
If we follow the same schedule as in Theorem 1, we reach after round 3,
1 2 1 3 1 2 1 * * * * * *
(seven peers hold one part each, namely parts 1, 2, 1, 3, 1, 2, 1, and the remaining six peers, shown as asterisks, hold nothing)
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange so that once this is done and uploads are made for round 5 then the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When $x = 2^n$ our schedule can be realized using matchings between the $2^n$ peers holding the part that is to be completed next and the server together with the $2^n - 1$ peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs as N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem and with application of Lemmas 1 and 2, the optimal solution to the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model with all upload capacities equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraint that no peer downloads more than a single file part at a time. Thus, we also have the following result: with equal upload capacities, imposing download capacity constraints that are at least as large as the upload capacities does not increase the minimal makespan (1).
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we rely on a lemma showing that time can be discretized suitably into intervals of length τ. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, 1, . . . . Identify the server as peer 0. Let $x_{ijk}(t)$ be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let $p_{ik}(t)$ denote the proportion of file part k that peer i has downloaded by time t. Our problem is then to find the minimal T such that the optimal value of the following MILP is MN. Since this T is certainly greater than $1/C_S$ and less than $N/C_S$, we can search for its value by a simple bisection search, solving this LP for various T:
$$\text{maximize } \sum_{i,k} p_{ik}(T) \qquad (2)$$
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints
$x_{ijk}(t) \in \{0, 1\}$ for all $i, j, k, t$   (3)
$p_{ik}(t) \in [0, 1]$ for all $i, k, t$   (4)
Link constraints between variables
$p_{ik}(t) = M\tau \sum_{t'=0}^{t-\tau} \sum_{j=0}^{N} x_{ijk}(t')\,C_j$ for all $i, k, t$   (5)
Essential constraints
$x_{ijk}(t) - \xi_{jk}(t) \le 0$ for all $i, j, k, t$   (source availability constraint)   (6)
$\sum_{i,k} x_{ijk}(t) \le 1$ for all $j, t$   (connection constraint)   (7)
$x_{ijk}(t) - \xi_{ik}(t+1) - x_{ijk}(t+1) \le 0$ for all $i, j, k, t$   (continuity constraint)   (8)
$x_{ijk}(t) + \xi_{ik}(t) \le 1$ for all $i, j, k, t$   (stopping constraint)   (9)
$\sum_{j} x_{ijk}(t) \le 1$ for all $i, k, t$   (exclusivity constraint)   (10)
Initial conditions
$p_{0k}(0) = 1$ for all $k$   (11)
$p_{ik}(0) = 0$ for all $i \ne 0$ and all $k$   (12)
Constraints (6), (8) and (9) have been linearized. Background can be found in [34]. For this, we used the auxiliary variable $\xi_{ik}(t) = 1\{p_{ik}(t) = 1\}$. This definition can be expressed through the following linear constraints.
Linearization constraints
$\xi_{ik}(t) \in \{0, 1\}$ for all $i, k, t$   (13)
$p_{ik}(t) - \xi_{ik}(t) \ge 0$ and $p_{ik}(t) - \xi_{ik}(t) < 1$ for all $i, k, t$   (14)
It can be checked that, together with (6), (8) and (9), this indeed gives
$x_{ijk}(t) = 1$ and $p_{ik}(t+1) < 1 \implies x_{ijk}(t+1) = 1$ for all $i, j, k, t$   (15)
$p_{ik}(t) = 1 \implies x_{ijk}(t) = 0$ for all $i, j, k, t$   (16)
$p_{jk}(t) < 1 \implies x_{ijk}(t) = 0$ for all $i, j, k, t$   (17)
that is, the continuity, stopping and source availability constraints respectively.
Implementation of the MILP
MILPs are well-understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. However, since the numbers of variables and constraints in the LP grow exponentially in N and M, this approach is not practical for large N and M.
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ , then the job end and start times are not guaranteed to lie at integer multiples of τ . However, if we imagine that each job does take until the end of an τ -length interval to finish (rather than finishing before the end), then we will overestimate the time that each job takes by at most τ . Since there are N M jobs in total, we overestimate the total time taken by at most N M τ . Thus, the approximation gives us an upper bound on the time taken and is at most N M τ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M .
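The bisection search over T described above can be sketched as follows; `lp_optimal_value` is a hypothetical placeholder (an assumption, not a routine from the paper or any specific solver) that would build and solve the LP (2)-(14) for a given horizon T and return its optimal objective value.

```python
def lp_optimal_value(T: float) -> float:
    """Placeholder: build the MILP (2)-(14) for horizon T with an off-the-shelf
    solver and return the optimal objective value (at most M*N)."""
    raise NotImplementedError

def minimal_horizon(N: int, M: int, C_S: float, tol: float) -> float:
    """Bisection search for the smallest T whose MILP optimum equals M*N,
    using the bounds 1/C_S < T <= N/C_S noted in the text."""
    lo, hi = 1.0 / C_S, N / C_S            # infeasible / feasible endpoints
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lp_optimal_value(mid) >= M * N:  # all parts delivered by time mid
            hi = mid
        else:
            lo = mid
    return hi
```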
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C 1 is small compared to C S , this is the optimal strategy. Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less. Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer. Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C 1 is large compared to C S , this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: Minimal makespan for each of the cases A-D (N = M = 2).

case | makespan
A | $2/C_S$
B | $\frac{1}{2C_S} + \frac{1}{2C_1} + \max\left\{\frac{1}{2C_S}, \frac{1}{2C_1}\right\}$
C | $\frac{1}{2C_S} + \max\left\{\frac{1}{C_S}, \frac{1}{2C_1}\right\}$
D | $\frac{1}{C_S} + \frac{1}{2C_1}$

The optimal strategy arises from A, C or D as $C_1/C_S$ lies in the intervals $[0, 1/3]$, $[1/3, 1]$ or $[1, \infty)$ respectively. In $[1, \infty)$, B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
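Table 3 can be checked numerically with a short script (illustrative only; it simply evaluates the four case formulas above and reports the minimizing case):

```python
def best_case(C_S: float, C_1: float):
    """Makespans of cases A-D for N = M = 2 (Table 3) and the minimizing case."""
    makespans = {
        "A": 2 / C_S,
        "B": 1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
        "C": 1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
        "D": 1 / C_S + 1 / (2 * C_1),
    }
    case = min(makespans, key=makespans.get)
    return case, makespans[case]

for ratio in (0.2, 0.5, 2.0):            # C_1/C_S in [0,1/3], [1/3,1], [1,inf)
    print(ratio, best_case(1.0, ratio))  # optimal case A, C, D respectively
```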
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: $C_1/(N-1) \le C_S/N$:
$$\frac{(1-\alpha)N}{C_S} = \frac{\alpha}{C_1} \;\Longrightarrow\; \alpha = \frac{NC_1}{C_S + NC_1} \;\Longrightarrow\; T = \frac{N}{C_S + NC_1}. \qquad (18)$$
Case 2: $C_1/(N-1) \ge C_S/N$:
$$\frac{(1-\alpha)N}{C_S} = \frac{\alpha N}{(N-1)C_S} \;\Longrightarrow\; \alpha = \frac{N-1}{N} \;\Longrightarrow\; T = \frac{1}{C_S}. \qquad (19)$$
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
$$T^* = \max\left\{\frac{1}{C_S},\; \frac{N}{C_S + NC_1}\right\}. \qquad (20)$$
Figure 2 shows the minimal makespan when the file is split in 1, 2 and infinitely many file parts when N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
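A one-line helper for (20) (a sketch, assuming the equal peer capacities of this subsection):

```python
def fluid_makespan_equal_peers(N: int, C_S: float, C_1: float) -> float:
    """Minimal makespan (20) in the fluid limit with server capacity C_S
    and N peers of equal upload capacity C_1."""
    return max(1.0 / C_S, N / (C_S + N * C_1))
```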
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let $F = \sum_{i=1}^{N} F_i$ and $C = \sum_{i=1}^{N} C_i$.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
$$T^* = \max\left\{\frac{F_1}{C_1}, \frac{F_2}{C_2}, \ldots, \frac{F_N}{C_N},\; \frac{(N-1)F}{C}\right\} \qquad (21)$$
and this can be achieved with a two-hop strategy, i.e., one in which users i's file is uploaded to user j, either directly from user i, or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right hand side of (21) are lower bounds on the makespan. Each user has to upload his file at least to one user, which takes time F i /C i . Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least T * , and it remains to be shown that a makespan of T * can be achieved. There are two cases to consider.
Case 1: (N − 1)F/C ≥ max i F i /C i for all i.
In this case, $T^* = (N-1)F/C$. Let us consider the 2-hop strategy in which each user uploads a fraction $\alpha_{ii}$ of its file $F_i$ directly to all $(N-1)$ peers, simultaneously and at equal rates. Moreover, he uploads a fraction $\alpha_{ij}$ to peer $j$ who in turn then uploads it to the remaining $(N-2)$ peers, again simultaneously and at equal rates. Note that $\sum_{j=1}^{N} \alpha_{ij} = 1$. Explicitly constructing a suitable set $\alpha_{ij}$, we thus obtain the problem
$$\min T \qquad (22)$$
subject to, for all $i$,
$$\frac{1}{C_i}\left[\alpha_{ii}F_i(N-1) + \sum_{k\ne i}\alpha_{ik}F_i + (N-2)\sum_{k\ne i}\alpha_{ki}F_k\right] \le T. \qquad (23)$$
We minimize T by choosing the α ij in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j we obtain
$$C\left[\alpha_{ii}F_i(N-2) + F_i + (N-2)\sum_{k\ne i}\alpha_{ki}F_k\right] = C_i\left[(N-2)\sum_j \alpha_{jj}F_j + F + (N-2)\Big(F - \sum_j \alpha_{jj}F_j\Big)\right] = (N-1)C_i F. \qquad (24)$$
Thus,
$$\alpha_{ii}F_i(N-2) + F_i + (N-2)\sum_{k\ne i}\alpha_{ki}F_k = (N-1)\frac{C_i}{C}F. \qquad (25)$$
Note that there is a lot of freedom in the choice of the $\alpha$, so let us specify that we require $\alpha_{ki}$ to be constant in $k$ for $k \ne i$, that is $\alpha_{ki} = \alpha_i^*$ for $k \ne i$. This means that if $i$ has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user $i$ splits excess capacity equally between its peers. Thus,
$$\alpha_{ii}F_i(N-2) + F_i + \alpha_i^*(N-2)(F - F_i) = (N-1)\frac{C_i}{C}F. \qquad (26)$$
Still, we have twice as many variables as constraints. Let us also specify that α * i = α ii for all i. Similarly as above, this says that the proportion of its own file F i that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
$$\alpha_i^* = \frac{(N-1)(C_i/C)F - F_i}{(N-2)F} = \frac{(N-1)C_i}{(N-2)C} - \frac{F_i}{(N-2)F}, \qquad (27)$$
where $\sum_i \alpha_i^* = 1$ and $\alpha_i^* \ge 0$, because in case 1 $F_i/C_i \le (N-1)F/C$. With these $\alpha_{ij}$, we obtain the time for $i$ to complete its upload, and hence the time for everyone to complete their upload, as
$$T = \frac{1}{C_i}\left[\alpha_i^* F_i(N-2) + F_i + (N-2)\sum_{k\ne i}\alpha_i^* F_k\right] = \frac{(N-1)F_i}{C} - \frac{F_i^2}{C_iF} + \frac{F_i}{C_i} + \frac{(N-1)(F-F_i)}{C} - \frac{F_i(F-F_i)}{C_iF} = (N-1)F/C. \qquad (28)$$
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, than this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
$$C'_1 = C_1, \qquad (29)$$
$$C'_i \le C_i \text{ for } i \ne 1, \qquad (30)$$
$$(N-1)F/C' = F_1/C'_1 = F_1/C_1, \text{ and} \qquad (31)$$
$$F_i/C'_i \le F_1/C_1. \qquad (32)$$
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
$$C'_i = (N-1)\,\frac{C_1}{F_1}\,\gamma_i F_i \qquad (33)$$
with constants γ i ≥ 0 such that
$$\sum_i \gamma_i F_i = F. \qquad (34)$$
Then (N − 1)F/C ′ = F 1 /C 1 , that is (31) holds. Moreover, choosing
$$\gamma_i \le \frac{1}{N-1}\,\frac{C_i}{F_i}\,\frac{F_1}{C_1} \qquad (35)$$
ensures C ′ i ≤ C i , i.e. property (30) and choosing
$$\gamma_i \ge \frac{1}{N-1} \qquad (36)$$
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all $\gamma_i$ equal to the lower bound (36) gives $\sum_i \gamma_i F_i = F/(N-1)$, that is too small to satisfy (34). Putting all equal to the upper bound (35) gives $\sum_i \gamma_i F_i = F_1 C/((N-1)C_1)$, that is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
$$\gamma_i = \frac{1}{N-1}\left[\delta\,\frac{C_i}{F_i}\,\frac{F_1}{C_1} + (1-\delta)\right] \qquad (37)$$
such that
$$\delta\,\frac{C}{N-1}\,\frac{F_1}{C_1} + (1-\delta)\,\frac{F}{N-1} = F, \qquad (38)$$
that is
$$\delta = \frac{(N-2)F C_1}{F_1 C - F C_1}. \qquad (39)$$
Substituting back in we obtain
$$\gamma_i = \frac{1}{N-1}\,\frac{(N-2)F F_1 C_i + F_i F_1 C - (N-1)F F_i C_1}{(F_1 C - F C_1)\,F_i} \qquad (40)$$
and thus
$$C'_i = \frac{C_1}{F_1}\,\frac{(N-2)F F_1 C_i + F_i F_1 C - (N-1)F F_i C_1}{F_1 C - F C_1}. \qquad (41)$$
By construction, these C ′ i satisfy properties (29)-(32) and hence, by the results in Case 1, T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem T = F 1 /C 1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
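Theorem 4 translates directly into a small helper (a sketch; the lists F and C hold the per-user file sizes and upload capacities defined above):

```python
def fluid_makespan_general(F: list[float], C: list[float]) -> float:
    """Minimal makespan (21) in the fluid limit: every user i disseminates a
    file of size F[i] to all others, uploading at capacity C[i]."""
    N = len(F)
    individual = max(f / c for f, c in zip(F, C))  # the F_i / C_i bounds
    aggregate = (N - 1) * sum(F) / sum(C)          # the (N-1) F / C bound
    return max(individual, aggregate)

# Example: three users, only user 0 has a file (the single-server case)
print(fluid_makespan_general([1.0, 0.0, 0.0], [2.0, 1.0, 1.0]))  # 0.5
```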
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M , and the address of the server. However, a peer might not know N , the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
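The two scenarios can be simulated directly. The following sketch (an illustrative Monte Carlo implementation of the rules just described for M = 1, with function and variable names of our choosing) counts the rounds until every peer holds the file:

```python
import random

def rounds_to_disseminate(N: int, scenario: str = "List", seed: int = 0) -> int:
    """Simulate the naive randomized strategy for a single file (M = 1).
    Node 0 is the server; nodes 1..N are the peers."""
    rng = random.Random(seed)
    has_file = {0}                           # only the server holds the file at the start
    rounds = 0
    while len(has_file) < N + 1:
        rounds += 1
        requests = {}                        # target node -> peers requesting from it
        for peer in range(1, N + 1):
            if peer in has_file:
                continue
            if scenario == "List":           # known holders: server plus peers with the file
                target = rng.choice(sorted(has_file))
            else:                            # NoList: server and all other peers, blindly
                target = rng.choice([v for v in range(N + 1) if v != peer])
            requests.setdefault(target, []).append(peer)
        new_holders = set()
        for target, askers in requests.items():
            if target in has_file:           # a holder serves exactly one random requester
                new_holders.add(rng.choice(askers))
        has_file |= new_holders              # transfers take effect at the end of the round
    return rounds

for N in (8, 64, 512):
    print(N, rounds_to_disseminate(N, "List"), rounds_to_disseminate(N, "NoList"))
```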
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n 1 peers (including the server) have the file and n 0 peers do not, with n 0 +n 1 = N +1. (The base case is n 1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that n 1 ≤ n 0 . Let i be the server or a peer who has the file and let I i be an indicator random variable that is 0 or 1 as i does or does not upload it. Let Y = i I i , where the sum is taken over all n 1 peers who have the file. Thus n 1 − Y is the number of uploads that take place. Then
$$EI_i = \left(1 - \frac{1}{N}\right)^{n_0} \le \left(1 - \frac{1}{2n_0}\right)^{n_0} \le \frac{1}{\sqrt{e}}. \qquad (42)$$
Now since $E(\sum_i I_i) = \sum_i EI_i$, we have $EY \le n_1/\sqrt{e}$. By the Markov inequality, which states that a nonnegative random variable Y satisfies $P(Y \ge k) \le (1/k)EY$ for any k (not necessarily an integer), we have by taking $k = (2/3)n_1$,
$$P\left(n_1 - Y \equiv \text{number of uploads} \le \tfrac{1}{3}n_1\right) = P\left(Y \ge \tfrac{2}{3}n_1\right) \le \frac{n_1/\sqrt{e}}{\tfrac{2}{3}n_1} = \frac{3}{2\sqrt{e}} < 1. \qquad (43)$$
Thus the expected number of steps required for the number of peers who have the file to increase from $n_1$ to at least $n_1 + (1/3)n_1 = (4/3)n_1$ is bounded by a geometric random variable with mean $\mu = 1/(1 - 3/(2\sqrt{e}))$. This implies that we will reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n 1 > n 0 . Let j be a peer who does not have the file and let J j be an indicator random variable that is 0 or 1 as peer j does or does not succeed in downloading it. Let Z = j J j , where the sum is taken over all n 0 peers who do not have the file. Suppose X is the number of the other n 0 − 1 peers that try to download from the same place as does peer j. Then
$$P(J_j = 0) = E\left[\frac{n_1}{N}\,\frac{1}{1+X}\right] \ge E\left[\frac{n_1}{N}(1-X)\right] = \frac{n_1}{N}\left(1 - \frac{n_0-1}{N}\right) = \frac{n_1}{N}\left(1 - \frac{N-n_1}{N}\right) = \frac{n_1^2}{N^2} \ge 1/4. \qquad (44)$$
Hence EZ ≤ (3/4)n 0 and so, again using the Markov inequality,
$$P\left(n_0 - Z \equiv \text{number of downloads} \le \tfrac{1}{8}n_0\right) = P\left(Z \ge \tfrac{7}{8}n_0\right) \le \frac{\tfrac{3}{4}n_0}{\tfrac{7}{8}n_0} = \frac{6}{7}. \qquad (45)$$
It follows that the number of peers who do not yet have the file decreases from $n_0$ to no more than $(7/8)n_0$ in an expected number of steps no more than $\mu' = 1/(1 - 6/7) = 7$. Thus the number of steps needed for the number of peers without the file to decrease from $n_0$ to 0 is $O(\log n_0) = O(\log N)$. In fact, this is a weak upper bound. By more complicated arguments we can show that if $n_0 = aN$, where $a \le 1/2$, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N). For $a > 1/2$ the expected time remains Θ(log N).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, $N = 2, 4, \ldots, 2^{25}$. We found that the achieved expected makespan appears to grow as $a + b \times \log_2 N$. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
$$y_{ij} = \alpha + \beta x_i + \epsilon_{ij}, \qquad (46)$$
where $y_{ij}$ is the makespan for $x_i = \log_2 2^i$, obtained in run $j$, $j = 1, \ldots, 1000$. Indeed, the model fits the data very well in both scenarios. We obtain the following results that enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
$$1.1392 + 1.1021 \times \log_2 N. \qquad (47)$$
For NoList, there is more variation in the data than for List, but, again, the linear regression gives a good fit, with Multiple R-squared of 0.9864 and significant p- and t-values. The makespan increases as $1.7561 + 1.5755 \times \log_2 N$.
As expected, the additional information for List leads to a significantly lesser makespan when compared to NoList, in particular the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log 2 N ⌋ of Section 3: It is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases and very well when state information is available, suggesting that our bounds are useful in practice.
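The fit of the linear model (46) can be reproduced with any least-squares routine; a sketch using numpy (the layout of the `makespans` data structure is our assumption):

```python
import numpy as np

def fit_log_model(makespans):
    """Least-squares fit of y = alpha + beta * log2(N), as in (46).
    makespans[i] is the list of simulated makespans for N = 2**(i+1)."""
    xs, ys = [], []
    for i, runs in enumerate(makespans):
        for y in runs:
            xs.append(i + 1)             # x = log2(N) for N = 2**(i+1)
            ys.append(y)
    beta, alpha = np.polyfit(xs, ys, 1)  # polyfit returns highest-degree coefficient first
    return alpha, beta                   # e.g. roughly (1.14, 1.10) for the List data above
```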
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov Chain on the state space 0, 1, 2, . . . , N , where state i corresponds to i of the N peers having the file. We can calculate the transition probabilities p ij . In the NoList case, for example, following the Occupancy Distribution (e.g., [18]), we obtain
$$p_{i,i+m} = \sum_{j=i-m}^{i} (-1)^{j-i+m}\,\frac{i!}{(i-j)!\,(i-m)!\,(j-i+m)!}\left(\frac{N-1-j}{N-1}\right)^{N-i}. \qquad (49)$$
Hence we can successively compute the expected hitting times k(i) of state N starting from state i via
$$k(i) = \frac{1 + \sum_{j>i} k(j)\,p_{ij}}{1 - p_{ii}}. \qquad (50)$$
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov Chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeatedly enhanced efficiency. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference to the simulated values is small without any apparent trend. It can also be checked by computing the standard deviation that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is for N = 128 for NoList where it is just outside by approximately 0.0016.
Thus, the computations prove our simulation results accurate. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
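Given the transition matrix $p_{ij}$ (for example from (49)), the expected makespan is the expected hitting time of state N starting from state 0, computed by the backward recursion (50). A sketch, assuming a precomputed matrix p supplied as a nested list:

```python
def expected_hitting_times(p):
    """Backward recursion (50): k[i] = expected number of further rounds needed to
    reach state N (all peers have the file) from state i, where state i means that
    i of the N peers currently hold the file."""
    N = len(p) - 1                       # states 0, 1, ..., N
    k = [0.0] * (N + 1)                  # k[N] = 0
    for i in range(N - 1, -1, -1):       # the chain never moves downwards
        k[i] = (1.0 + sum(k[j] * p[i][j] for j in range(i + 1, N + 1))) / (1.0 - p[i][i])
    return k

# Expected makespan = expected_hitting_times(p)[0], i.e. starting with no peer
# (only the server) holding the file; p must be the (N+1) x (N+1) matrix of p_ij.
```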
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
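A sketch of one round of this M-part List strategy follows (illustrative Monte Carlo code in the same spirit as the single-part simulation above; `holdings` maps each node to the set of part indices it holds, with node 0 the server):

```python
import random

def one_round(holdings, N, M, rng):
    """One round of the List strategy with M parts: each incomplete peer queries a
    random node holding at least one part it still needs; each queried node serves
    one requester, sending a random part that this requester is missing."""
    requests = {}
    for peer in range(1, N + 1):
        needed = set(range(M)) - holdings[peer]
        if not needed:
            continue
        useful = [v for v in range(N + 1) if v != peer and holdings[v] & needed]
        if useful:
            requests.setdefault(rng.choice(useful), []).append(peer)
    transfers = []
    for uploader, askers in requests.items():
        peer = rng.choice(askers)        # one upload per node per round
        part = rng.choice(sorted(holdings[uploader] - holdings[peer]))
        transfers.append((peer, part))
    for peer, part in transfers:         # apply all transfers at the end of the round
        holdings[peer].add(part)
    return holdings

# Example setup: node 0 (the server) holds all M parts, peers start empty:
# holdings = {0: set(range(M))}; holdings.update({i: set() for i in range(1, N + 1)})
```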
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each $N = 2, 4, \ldots, 2^{15}$. For each value of $M = 1, 2, 3, 4, 5, 8, 10, 15, 20, 50$ we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but to compare the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression $0.7856 + 1.1520 \times \log_2 N$ for M = 1 does not quite agree with $1.1392 + 1.1021 \times \log_2 N$ found in (47). It can be checked, by repeating the analysis there for $N = 2, 4, \ldots, 2^{15}$, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to $2^{25}$.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M )⌊log 2 N ⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller in Section 3, confirming our performance bounds are useful in practice. This is confirmed also by initial results of current work on the performance evaluation of the Bullet' system [20].
The program code for the simulations as well as the computations and the diagnostic plots used in this section are available on request and will be made available via the Internet.
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on and a performance benchmark for P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound confirming their practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. It is current work to compare our bounds to the makespan obtained by Bullet' [20]. Initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used practically everywhere except for the Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP) for real-time applications. For further details see [35]. Still, with an overhead cost it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proofs of Lemma 1 and Lemma 2 we have used the fair sharing and continuity assumptions. It would be of interest to investigate whether one of them or both can be relaxed.
[Figure caption: log-term coefficients from Table 6 for the decentralized List scenario (solid) and the idealized centralized scenario (dashed).]
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort. Or, to users who might want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when up-and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate C i and simultaneously download at rate D i .
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time $t_{ij}$. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if $C_j$ is at least N + 1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different that greedy algorithms, induction on nodes and Dynamic Programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's $(N+1)^{N-1}$ labelled trees [6] together with the obvious O(N) algorithm for the optimal scheduling given a tree yield an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
Slurpie @cite_24 is a very similar protocol, although, unlike BitTorrent, it does not fix the number of neighbours and it adapts to varying bandwidth conditions. Other P2P overlay networks have also been proposed. For example see SplitStream @cite_32 and Bullet' @cite_22 .
|
{
"abstract": [
"We present Slurpie: a peer-to-peer protocol for bulk data transfer. Slurpie is specifically designed to reduce client download times for large, popular files, and to reduce load on servers that serve these files. Slurpie employs a novel adaptive downloading strategy to increase client performance, and employs a randomized backoff strategy to precisely control load on the server. We describe a full implementation of the Slurpie protocol, and present results from both controlled local-area and wide-area testbeds. Our results show that Slurpie clients improve performance as the size of the network increases, and the server is completely insulated from large flash crowds entering the Slurpie network.",
"The need to distribute large files across multiple wide-area sites is becoming increasingly common, for instance, in support of scientific computing, configuring distributed systems, distributing software updates such as open source ISOs or Windows patches, or disseminating multimedia content. Recently a number of techniques have been proposed for simultaneously retrieving portions of a file from multiple remote sites with the twin goals of filling the client's pipe and overcoming any performance bottlenecks between the client and any individual server. While there are a number of interesting tradeoffs in locating appropriate download sites in the face of dynamically changing network conditions, to date there has been no systematic evaluation of the merits of different protocols. This paper explores the design space of file distribution protocols and conducts a detailed performance evaluation of a number of competing systems running in both controlled emulation environments and live across the Internet. Based on our experience with these systems under a variety of conditions, we propose, implement and evaluate Bullet' (Bullet prime), a mesh based high bandwidth data dissemination system that outperforms previous techniques under both static and dynamic conditions.",
"In tree-based multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are highly-available, dedicated infrastructure routers but it poses a problem for application-level multicast in peer-to-peer systems. SplitStream addresses this problem by striping the content across a forest of interior-node-disjoint multicast trees that distributes the forwarding load among all participating peers. For example, it is possible to construct efficient SplitStream forests in which each peer contributes only as much forwarding bandwidth as it receives. Furthermore, with appropriate content encodings, SplitStream is highly robust to failures because a node failure causes the loss of a single stripe on average. We present the design and implementation of SplitStream and show experimental results obtained on an Internet testbed and via large-scale network simulation. The results show that SplitStream distributes the forwarding load among all peers and can accommodate peers with different bandwidth capacities while imposing low overhead for forest construction and maintenance."
],
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_32"
],
"mid": [
"2135039403",
"1656271119",
"2127494222"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is 2M − 1 + ⌊log_2(N + 1)⌋ for even N, and 2M + ⌊log_2(N + 1)⌋ − ⌊(M − 1 + 2^⌊log_2(N+1)⌋) / ((N + 1)/2)⌋ for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N , which takes M + ⌊log 2 N ⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log 2 N ⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log 2 N ⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/M C S seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is of the order of several thousand and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t s and t f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t f − 1/(C i M) and t f . This consumes some amount of upload capacity that had been devoted to jobs in K between t f − 1/(C i M) and t f . However, it releases an exactly equal amount of upload capacity between times t s and t f − 1/(C i M) which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than t s . Thus, we repeatedly apply this argument until the number of jobs in J that start at time t s becomes 0 and then move along to jobs in J that are now amongst the first in J to start at time t ′ s > t s . Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the log 2 N -scaling is due to the P2P approach. This compares favourably to the linear scaling of N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive model, the minimal makespan for disseminating the M file parts to all N users is
T* = 1 + ⌊log_2 N⌋ / M.  (1)
Proof. Suppose that N = 2 n − 1 + x, for x = 1, . . . , 2 n . So n = ⌊log 2 N ⌋. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round; each column to a peer. The entries denote the file part that the peer downloads that round. The bold entries indicate downloads from the server; un-bold entries indicate downloads from a peer who has the corresponding part.
round 1:  1
round 2:  2  1
round 3:  2  1  2
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of 2 n−1 peers have file part 1, a set of 2 n−1 − 1 peers have file part 2, and a set of x peers have no file part (those denoted by * · · · * ). Let us call these three sets A 1 , A 2 and A 0 , respectively.
[Diagram: rounds 1 to n, at the end of which the 2^(n−1) peers of A_1 hold part 1, the 2^(n−1) − 1 peers of A_2 hold part 2, and the x peers of A_0 (shown as * · · · *) hold no part.]
In round n + 1 we let peers in A 1 upload part 1 to 2 n−1 − ⌊x/2⌋ peers in A 2 and to ⌊x/2⌋ peers in A 0 (If x = 1, to 2 n−1 − 1 peers in A 2 and to 1 peer in A 0 ). Peers in A 2 upload part 2 to 2 n−1 − ⌈x/2⌉ peers in A 1 and to another ⌈x/2⌉ − 1 peers in A 0 . The server uploads part 2 to a member of A 0 (If x = 1, to a member of A 1 ). Thus, at the end of this round 2 n − x peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination. Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds j = 2, . . . , min{n, M − 1}, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
[Diagram: rounds 1 to n for M ≤ n, at the end of which 2^(n−1) peers hold part 1, 2^(n−2) hold part 2, and so on (cf. the second column of Table 1), while x peers (shown as * · · · *) hold no part.]
When round n ends, 2^n − 1 peers have one file part and x peers have no file part. The number of peers having file part i is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2).

Table 1: Numbers of the file parts at the ends of rounds n, n + 1, . . . , n + M − 1.

part     n               n + 1           n + 2           n + 3           · · ·   n + M − 1
1        2^(n−1)         2^n             N               N               · · ·   N
2        2^(n−2)         2^(n−1)         2^n             N               · · ·   N
3        2^(n−3)         2^(n−2)         2^(n−1)         2^n             · · ·   N
4        2^(n−4)         2^(n−3)         2^(n−2)         2^(n−1)         · · ·   N
· · ·    · · ·           · · ·           · · ·           · · ·           · · ·   · · ·
M − 2    2^(n−M+2)       2^(n−M+3)       2^(n−M+4)       2^(n−M+5)       · · ·   N
M − 1    2^(n−M+1)       2^(n−M+2)       2^(n−M+3)       2^(n−M+4)       · · ·   2^n
M        2^(n−M+1) − 1   2^(n−M+2) − 1   2^(n−M+3) − 1   2^(n−M+4) − 1   · · ·   2^n − 1

Table 2: The sets of peers at the end of round n + 1.

set     peers in the set have                     number of peers in set
B_12    parts 1 and 2                             2^(n−1) − ⌊x/2⌋
B_1p    part 1 and a part other than 1 or 2       2^(n−1) − ⌈x/2⌉
B_1     just part 1                               x
B_2     just part 2                               ⌊x/2⌋
B_p     just a part other than 1 or 2             ⌈x/2⌉ − 1

Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles. As many different peers as possible obtain file parts early on so that they can start uploading themselves, and the maximal possible upload capacity is used. Moreover, there is a certain balance in the upload of different file parts so that no part gets circulated too late.
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each k = n + 2, . . . , n + M − 1, only x − 1 of the x peers (in set B_1) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x fewer uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using 2^n − x fewer uploads than we might in this round. Since 2^n − x ≤ N + M − 2x, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
(N + M − 2x) − (2^n − x) = M − 1,
so, in terms of the total number of uploads, such a scheduling could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the 2 n uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and N = 13 = 2^3 + 6 − 1, so M + ⌊log_2 N⌋ = 7.
If we follow the same schedule as in Theorem 1, then after round 3 we reach a state in which seven peers each hold a single part (parts 1, 2, 1, 3, 1, 2, 1 respectively) and the remaining six peers hold nothing.
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange so that once this is done and uploads are made for round 5 then the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When x = 2^n our schedule can be realized using matchings between the 2^n peers holding the part that is to be completed next and the server together with the 2^n − 1 peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs according to whether N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem and with application of Lemmas 1 and 2, the optimal solution to the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model when all upload capacities are equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraint that no peer downloads more than a single file part at a time. Thus, we also have the following result.
Theorem 3 Consider the uplink-sharing model with all upload capacities equal to 1 and with download capacities that are at least 1. The minimal makespan is still given by (1).
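For illustration, the following short Python sketch (our own, not taken from any implementation discussed here) evaluates the benchmark makespan (1) and checks it for M = 1 by simulating the obvious doubling schedule, in which every node holding the file uploads it to a distinct peer that still lacks it in each round; all function names are ours.

import math

def benchmark_makespan(N, M):
    # Centralized lower bound (1): T* = 1 + floor(log2 N) / M,
    # for unit upload capacities and a file of unit size.
    return 1 + math.floor(math.log2(N)) / M

def doubling_rounds(N):
    # Simulate the M = 1 schedule: in each round every node holding the file
    # (server included) uploads it to one peer that does not yet have it.
    holders, remaining, rounds = 1, N, 0
    while remaining > 0:
        uploads = min(holders, remaining)
        remaining -= uploads
        holders += uploads
        rounds += 1
    return rounds

for N in (1, 2, 3, 7, 8, 1000):
    assert doubling_rounds(N) == benchmark_makespan(N, 1)   # agrees with (1) for M = 1

For M ≥ 2 one would have to simulate the schedule constructed in the proof of Theorem 1 itself; the sketch only covers the single-part case.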
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we need the following Lemma. Essentially, it shows that time can be discretized suitably. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, . . . . Identify the server as peer 0. Let x_ijk(t) be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let p_ik(t) denote the proportion of file part k that peer i has downloaded by time t. Our problem is then to find the minimal T such that the optimal value of the following MILP is M N. Since this T is certainly greater than 1/C_S and less than N/C_S, we can search for its value by a simple bisection search, solving this LP for various T:
maximize Σ_{i,k} p_ik(T)  (2)
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints
x_ijk(t) ∈ {0, 1} for all i, j, k, t  (3)
p_ik(t) ∈ [0, 1] for all i, k, t  (4)
Link constraints between variables
p_ik(t) = Mτ Σ_{t′<t} Σ_{j=0}^{N} x_ijk(t′) C_j for all i, k, t  (5)
Essential constraints
x_ijk(t) − ξ_jk(t) ≤ 0 for all i, j, k, t  (Source availability constraint)  (6)
Σ_{i,k} x_ijk(t) ≤ 1 for all j, t  (Connection constraint)  (7)
x_ijk(t) − ξ_ik(t + 1) − x_ijk(t + 1) ≤ 0 for all i, j, k, t  (Continuity constraint)  (8)
x_ijk(t) + ξ_ik(t) ≤ 1 for all i, j, k, t  (Stopping constraint)  (9)
Σ_j x_ijk(t) ≤ 1 for all i, k, t  (Exclusivity constraint)  (10)
Initial conditions
p_0k(0) = 1 for all k  (11)
p_ik(0) = 0 for all i, k  (12)
Constraints (6), (8) and (9) have been linearized. Background can be found in [34]. For this, we used the auxiliary variable ξ_ik(t) = 1{p_ik(t) = 1}. This definition can be expressed through the following linear constraints.
Linearization constraints
ξ_ik(t) ∈ {0, 1} for all i, k, t  (13)
p_ik(t) − ξ_ik(t) ≥ 0 and p_ik(t) − ξ_ik(t) < 1 for all i, k, t  (14)
It can be checked that together with (6), (8) and (9), indeed, this gives
x_ijk(t) = 1 and p_ik(t + 1) < 1 ⟹ x_ijk(t + 1) = 1 for all i, j, k, t  (15)
p_ik(t) = 1 ⟹ x_ijk(t) = 0 for all i, j, k, t  (16)
p_jk(t) < 1 ⟹ x_ijk(t) = 0 for all i, j, k, t  (17)
that is, continuity, stopping and source availability constraints respectively.
Implementation of the MILP
MILPs are well understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. However, since the numbers of variables and constraints in the LP grow exponentially in N and M, this approach is not practical for large N and M.
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ , then the job end and start times are not guaranteed to lie at integer multiples of τ . However, if we imagine that each job does take until the end of an τ -length interval to finish (rather than finishing before the end), then we will overestimate the time that each job takes by at most τ . Since there are N M jobs in total, we overestimate the total time taken by at most N M τ . Thus, the approximation gives us an upper bound on the time taken and is at most N M τ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M .
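Purely as an illustration of how the bisection-plus-MILP approach might be coded, the following sketch uses the open-source PuLP modeller on a toy instance; it is our own code, not the implementation referred to above, and the strict inequality in (14) is approximated by a small ε, as is usual in practice.

import pulp

def feasible(T_slots, tau, C, N, M, eps=1e-4):
    # True if the MILP value reaches N*M, i.e. every peer can hold every
    # part within T_slots intervals of length tau (cf. (2)-(14)).
    nodes = range(N + 1)                  # node 0 is the server
    peers = range(1, N + 1)
    parts = range(M)
    slots = range(T_slots)
    prob = pulp.LpProblem("dissemination", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (peers, nodes, parts, slots), cat="Binary")
    p = pulp.LpVariable.dicts("p", (peers, parts, range(T_slots + 1)), 0, 1)
    xi = pulp.LpVariable.dicts("xi", (peers, parts, range(T_slots + 1)), cat="Binary")
    done = lambda j, k, t: 1 if j == 0 else xi[j][k][t]   # (11): the server holds every part
    prob += pulp.lpSum(p[i][k][T_slots] for i in peers for k in parts)            # (2)
    for i in peers:
        for k in parts:
            for t in range(T_slots + 1):
                prob += p[i][k][t] == M * tau * pulp.lpSum(                       # (5); (12) follows at t = 0
                    x[i][j][k][s] * C[j] for j in nodes for s in range(t))
                prob += p[i][k][t] >= xi[i][k][t]                                 # (14)
                prob += p[i][k][t] <= xi[i][k][t] + 1 - eps                       # (14), strict '<' relaxed
            for t in slots:
                prob += pulp.lpSum(x[i][j][k][t] for j in nodes) <= 1             # (10)
                for j in nodes:
                    prob += x[i][j][k][t] <= done(j, k, t)                        # (6)
                    prob += x[i][j][k][t] + xi[i][k][t] <= 1                      # (9)
                    if t + 1 in slots:
                        prob += x[i][j][k][t] <= xi[i][k][t + 1] + x[i][j][k][t + 1]  # (8)
    for j in nodes:
        for t in slots:
            prob += pulp.lpSum(x[i][j][k][t] for i in peers for k in parts) <= 1  # (7)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective) >= N * M - 1e-6

# Toy instance: N = 2 peers, M = 2 parts, C_S = C_1 = C_2 = 1, tau = 1/M.
C, N, M, tau = {0: 1.0, 1: 1.0, 2: 1.0}, 2, 2, 0.5
print(min(T for T in range(1, 8) if feasible(T, tau, C, N, M)) * tau)  # 1.5, matching (1)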
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C 1 is small compared to C S , this is the optimal strategy. Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less. Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer. Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C 1 is large compared to C S , this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: Minimal makespan in each of the four cases (N = M = 2).

case    makespan
A       2/C_S
B       1/(2C_S) + 1/(2C_1) + max{1/(2C_S), 1/(2C_1)}
C       1/(2C_S) + max{1/C_S, 1/(2C_1)}
D       1/C_S + 1/(2C_1)

The optimal strategy arises from A, C or D as C_1/C_S lies in the intervals [0, 1/3], [1/3, 1] or [1, ∞) respectively. In [1, ∞), B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
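To make the case analysis concrete, the following few lines of Python (ours) evaluate the four makespans of Table 3 and confirm the switching points at C_1/C_S = 1/3 and C_1/C_S = 1; the sample capacities are arbitrary.

def makespans_N2_M2(C_S, C_1):
    # The four makespans of Table 3 for N = M = 2 and a file of size 1.
    return {
        "A": 2 / C_S,
        "B": 1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
        "C": 1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
        "D": 1 / C_S + 1 / (2 * C_1),
    }

for ratio in (0.2, 0.5, 2.0):                 # C_1/C_S below 1/3, in (1/3, 1), above 1
    m = makespans_N2_M2(C_S=1.0, C_1=ratio)
    print(ratio, min(m, key=m.get), m)        # best case: A, then C, then B/D (tied)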
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: C_1/(N − 1) ≤ C_S/N:  (1 − α)N/C_S = α/C_1  ⟹  α = N C_1/(C_S + N C_1)  ⟹  T = N/(C_S + N C_1).  (18)
Case 2: C_1/(N − 1) ≥ C_S/N:  (1 − α)N/C_S = αN/((N − 1)C_S)  ⟹  α = (N − 1)/N  ⟹  T = 1/C_S.  (19)
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
T* = max{1/C_S, N/(C_S + N C_1)}.  (20)
Figure 2 shows the minimal makespan when the file is split in 1, 2 and infinitely many file parts when N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
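The comparison behind Figure 2 can be reproduced numerically; the sketch below (our own illustration) evaluates the exact optima for M = 1 and M = 2 from the previous subsections together with the fluid limit (20) for N = 2 and a few arbitrary capacity ratios.

def T_M1(C_S, C_1):                  # M = 1, N = 2 (previous subsection)
    return 1 / C_S + min(1 / C_S, 1 / C_1)

def T_M2(C_S, C_1):                  # M = 2, N = 2: best of cases A-D in Table 3
    return min(2 / C_S,
               1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
               1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
               1 / C_S + 1 / (2 * C_1))

def T_fluid(C_S, C_1, N=2):          # M -> infinity, equation (20)
    return max(1 / C_S, N / (C_S + N * C_1))

for c1 in (0.25, 1.0, 4.0):          # peer capacity relative to C_S = 1
    print(c1, T_M1(1.0, c1), T_M2(1.0, c1), T_fluid(1.0, c1))
# for every ratio the makespan decreases (weakly) as M grows, as in Figure 2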
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let F = Σ_{i=1}^{N} F_i and C = Σ_{i=1}^{N} C_i.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
T* = max{F_1/C_1, F_2/C_2, . . . , F_N/C_N, (N − 1)F/C}  (21)
and this can be achieved with a two-hop strategy, i.e., one in which user i's file is uploaded to user j either directly from user i, or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right hand side of (21) is a lower bound on the makespan. Each user has to upload his file to at least one other user, which takes time F_i/C_i. Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least T*, and it remains to be shown that a makespan of T* can be achieved. There are two cases to consider.
Case 1: (N − 1)F/C ≥ F_i/C_i for all i.
In this case, T* = (N − 1)F/C. Let us consider the 2-hop strategy in which each user uploads a fraction α_ii of its file F_i directly to all N − 1 peers, simultaneously and at equal rates. Moreover, he uploads a fraction α_ij to peer j, who in turn then uploads it to the remaining N − 2 peers, again simultaneously and at equal rates. Note that Σ_{j=1}^{N} α_ij = 1. Explicitly constructing a suitable set of α_ij, we thus obtain the problem
minimize T  (22)
subject to, for all i,
(1/C_i)[α_ii F_i (N − 1) + Σ_{k≠i} α_ik F_i + Σ_{k≠i} α_ki F_k (N − 2)] ≤ T.  (23)
We minimize T by choosing the α ij in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j we obtain
C[α_ii F_i (N − 2) + F_i + Σ_{k≠i} α_ki F_k (N − 2)] = C_i[(N − 2) Σ_j α_jj F_j + F + (N − 2)(F − Σ_j α_jj F_j)] = (N − 1)C_i F.  (24)
Thus,
α_ii F_i (N − 2) + F_i + Σ_{k≠i} α_ki F_k (N − 2) = (N − 1)(C_i/C)F.  (25)
Note that there is a lot of freedom in the choice of the α, so let us specify that we require α_ki to be constant in k for k ≠ i, that is, α_ki = α*_i for k ≠ i. This means that if i has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user i splits excess capacity equally between its peers. Thus,
α_ii F_i (N − 2) + F_i + α*_i (N − 2)(F − F_i) = (N − 1)(C_i/C)F.  (26)
Still, we have twice as many variables as constraints. Let us also specify that α * i = α ii for all i. Similarly as above, this says that the proportion of its own file F i that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
α*_i = [(N − 1)(C_i/C)F − F_i] / ((N − 2)F) = (N − 1)C_i/((N − 2)C) − F_i/((N − 2)F),  (27)
where Σ_i α*_i = 1 and α*_i ≥ 0, because in Case 1 F_i/C_i ≤ (N − 1)F/C. With these α_ij, we obtain the time for i to complete its upload, and hence the time for everyone to complete their upload, as
T = (1/C_i)[α*_i F_i (N − 2) + F_i + Σ_{k≠i} α*_i F_k (N − 2)] = (N − 1)F_i/C − F_i²/(C_i F) + F_i/C_i + (N − 1)(F − F_i)/C − F_i(F − F_i)/(C_i F) = (N − 1)F/C.  (28)
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, then this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
C′_1 = C_1,  (29)
C′_i ≤ C_i for i ≠ 1,  (30)
(N − 1)F/C′ = F_1/C′_1 = F_1/C_1, and  (31)
F_i/C′_i ≤ F_1/C_1.  (32)
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
C′_i = (N − 1)(C_1/F_1) γ_i F_i  (33)
with constants γ i ≥ 0 such that
Σ_i γ_i F_i = F.  (34)
Then (N − 1)F/C ′ = F 1 /C 1 , that is (31) holds. Moreover, choosing
γ_i ≤ (1/(N − 1)) (C_i/F_i)(F_1/C_1)  (35)
ensures C ′ i ≤ C i , i.e. property (30) and choosing
γ_i ≥ 1/(N − 1)  (36)
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all γ_i equal to the lower bound (36) gives Σ_i γ_i F_i = F/(N − 1), which is too small to satisfy (34). Putting all equal to the upper bound (35) gives Σ_i γ_i F_i = F_1 C/((N − 1)C_1), which is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
γ_i = (1/(N − 1))[δ (C_i/F_i)(F_1/C_1) + (1 − δ)]  (37)
such that
δ (C/(N − 1))(F_1/C_1) + (1 − δ) F/(N − 1) = F,  (38)
that is
δ = (N − 2)F C_1 / (F_1 C − F C_1).  (39)
Substituting back in we obtain
γ_i = (1/(N − 1)) [(N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1] / ((F_1 C − F C_1) F_i)  (40)
and thus
C′_i = (C_1/F_1) [(N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1] / (F_1 C − F C_1).  (41)
By construction, these C′_i satisfy properties (29)-(32) and hence, by the results in Case 1, T′ = F_1/C_1. Hence the minimal makespan in the original problem is T = F_1/C_1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
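As a numerical sanity check (our own sketch, with arbitrary test data), the following code evaluates the bound (21) and, in Case 1, constructs the symmetric weights α*_i of (27), verifying that every user then finishes uploading at exactly (N − 1)F/C.

def fluid_makespan(F, C):
    # Right-hand side of (21) for file sizes F[i] and upload capacities C[i].
    N, Ftot, Ctot = len(F), sum(F), sum(C)
    return max(max(f / c for f, c in zip(F, C)), (N - 1) * Ftot / Ctot)

def case1_upload_times(F, C):
    # Upload completion times under the two-hop strategy with the symmetric
    # weights alpha*_i of (27); valid when (N-1)F/C >= F_i/C_i for all i.
    N, Ftot, Ctot = len(F), sum(F), sum(C)
    alpha = [(N - 1) * C[i] / ((N - 2) * Ctot) - F[i] / ((N - 2) * Ftot)
             for i in range(N)]
    assert all(a >= 0 for a in alpha) and abs(sum(alpha) - 1) < 1e-9
    times = []
    for i in range(N):
        uploaded = (alpha[i] * F[i] * (N - 2) + F[i]          # left-hand side of (26)
                    + alpha[i] * (N - 2) * (Ftot - F[i]))
        times.append(uploaded / C[i])
    return times

F = [3.0, 1.0, 2.0, 0.5]     # one file per user; sizes and capacities are test data
C = [2.0, 1.5, 1.0, 1.0]
print(fluid_makespan(F, C))         # (N-1)F/C = 3*6.5/5.5
print(case1_upload_times(F, C))     # every entry equals the same value, as in (28)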
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions
Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M, and the address of the server. However, a peer might not know N, the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n 1 peers (including the server) have the file and n 0 peers do not, with n 0 +n 1 = N +1. (The base case is n 1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that n 1 ≤ n 0 . Let i be the server or a peer who has the file and let I i be an indicator random variable that is 0 or 1 as i does or does not upload it. Let Y = i I i , where the sum is taken over all n 1 peers who have the file. Thus n 1 − Y is the number of uploads that take place. Then
E I_i = (1 − 1/N)^{n_0} ≤ (1 − 1/(2n_0))^{n_0} ≤ 1/√e.  (42)
Now since E(Σ_i I_i) = Σ_i E I_i, we have EY ≤ n_1/√e. Thus, by the Markov inequality (for a nonnegative random variable Y and any k, not necessarily an integer, P(Y ≥ k) ≤ (1/k)EY), taking k = (2/3)n_1 we have
P(n_1 − Y ≡ number of uploads ≤ (1/3)n_1) = P(Y ≥ (2/3)n_1) ≤ (n_1/√e) / ((2/3)n_1) = 3/(2√e) < 1.  (43)
Thus the expected number of steps required for the number of peers who have the file to increase from n_1 to at least n_1 + (1/3)n_1 = (4/3)n_1 is bounded by a geometric random variable with mean µ = 1/(1 − 3/(2√e)). This implies that we will reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n 1 > n 0 . Let j be a peer who does not have the file and let J j be an indicator random variable that is 0 or 1 as peer j does or does not succeed in downloading it. Let Z = j J j , where the sum is taken over all n 0 peers who do not have the file. Suppose X is the number of the other n 0 − 1 peers that try to download from the same place as does peer j. Then
P(J_j = 0) = E[(n_1/N) · 1/(1 + X)] ≥ E[(n_1/N)(1 − X)] = (n_1/N)(1 − (n_0 − 1)/N) = (n_1/N)(1 − (N − n_1)/N) = n_1²/N² ≥ 1/4.  (44)
Hence EZ ≤ (3/4)n 0 and so, again using the Markov inequality,
P(n_0 − Z ≡ number of downloads ≤ (1/8)n_0) = P(Z ≥ (7/8)n_0) ≤ ((3/4)n_0) / ((7/8)n_0) = 6/7.  (45)
It follows that the number of peers who do not yet have the file decreases from n_0 to no more than (7/8)n_0 in an expected number of steps no more than µ′ = 1/(1 − 6/7) = 7. Thus the number of steps needed for the number of peers without the file to decrease from n_0 to 0 is O(log n_0) = O(log N). In fact, this is a weak upper bound. By more complicated arguments we can show that if n_0 = aN, where a ≤ 1/2, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N). For a > 1/2 the expected time remains Θ(log N).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, N = 2, 4, . . . , 2^25. We found that the achieved expected makespan appears to grow as a + b × log_2 N. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
y_ij = α + β x_i + ε_ij,  (46)
where y_ij is the makespan for x_i = log_2 2^i, obtained in run j, j = 1, . . . , 1000. Indeed, the model fits the data very well in both scenarios. We obtain the following results that enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
1.1392 + 1.1021 × log 2 N .(47)
For NoList, there is more variation in the data than for List, but, again, the linear regression gives a good fit, with Multiple R-squared of 0.9864 and significant p- and t-values. The makespan increases as 1.7561 + 1.5755 × log_2 N.  (48)
As expected, the additional information for List leads to a significantly lesser makespan when compared to NoList, in particular the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log 2 N ⌋ of Section 3: It is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases and very well when state information is available, suggesting that our bounds are useful in practice.
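For concreteness, a small Monte Carlo sketch of the randomized strategy for M = 1 is given below (our own code; the run counts and the range of N are far smaller than in the study described above).

import random

def rounds_one_file(N, scenario, rng):
    # One run of the randomized strategy for M = 1.
    # Nodes are 0 (the server) and 1..N (the peers); node 0 initially holds the file.
    has = [True] + [False] * N
    holders, rounds = [0], 0
    while len(holders) < N + 1:
        requests = {}                          # uploader -> list of requesting peers
        for peer in range(1, N + 1):
            if has[peer]:
                continue
            if scenario == "List":             # holders and their addresses are known
                target = rng.choice(holders)
            else:                              # NoList: any node other than itself
                target = rng.randrange(N + 1)
                while target == peer:
                    target = rng.randrange(N + 1)
            requests.setdefault(target, []).append(peer)
        for target, peers in requests.items():
            if has[target]:                    # one randomly chosen requester succeeds
                lucky = rng.choice(peers)
                has[lucky] = True
                holders.append(lucky)
        rounds += 1
    return rounds

rng = random.Random(0)
for N in (16, 256, 4096):
    for scenario in ("List", "NoList"):
        mean = sum(rounds_one_file(N, scenario, rng) for _ in range(100)) / 100
        print(N, scenario, round(mean, 2))     # grows roughly linearly in log2(N)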
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov Chain on the state space 0, 1, 2, . . . , N , where state i corresponds to i of the N peers having the file. We can calculate the transition probabilities p ij . In the NoList case, for example, following the Occupancy Distribution (e.g., [18]), we obtain
p_{i,i+m} = Σ_{j=i−m}^{i} (−1)^{j−i+m} [i! / ((i − j)!(i − m)!(j − i + m)!)] ((N − 1 − j)/(N − 1))^{N−i}.  (49)
Hence we can successively compute the expected hitting times k(i) of state N starting from state i via
k(i) = (1 + Σ_{j>i} k(j) p_ij) / (1 − p_ii).  (50)
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov Chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeatedly enhanced efficiency. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference to the simulated values is small without any apparent trend. It can also be checked by computing the standard deviation that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is for N = 128 for NoList where it is just outside by approximately 0.0016.
Thus, the computations prove our simulation results accurate. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
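The hitting-time recursion (50) can also be illustrated without the closed form (49): the sketch below (ours) estimates the NoList transition probabilities by Monte Carlo for small N and then applies (50); the sample sizes are arbitrary.

import random

def nolist_row(i, N, rng, samples=20000):
    # Monte Carlo estimate of the transition row p_{i,.} of the NoList chain:
    # i peers (plus the server) hold the file, and each of the N - i remaining
    # peers picks a source uniformly among its N possible nodes; every holder
    # that is picked by someone serves exactly one requester.
    row = [0.0] * (N + 1)
    holders = i + 1                        # server + i peers; label them 0..holders-1
    for _ in range(samples):
        picked = set()
        for _ in range(N - i):
            choice = rng.randrange(N)      # uniform over the N nodes other than itself
            if choice < holders:
                picked.add(choice)
        row[i + len(picked)] += 1.0 / samples
    return row

def expected_rounds_nolist(N, rng):
    # Recursion (50): k(i) = (1 + sum_{j>i} k(j) p_ij) / (1 - p_ii), with k(N) = 0.
    k = [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        p = nolist_row(i, N, rng)
        k[i] = (1 + sum(p[j] * k[j] for j in range(i + 1, N + 1))) / (1 - p[i])
    return k[0]

rng = random.Random(1)
for N in (8, 16, 32):
    print(N, round(expected_rounds_nolist(N, rng), 2))
# these estimates can be compared against simulated NoList means for the same N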
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
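A direct implementation of this M-part List strategy might look as follows (our own sketch, not the simulation code used for the results reported below); the parameters are illustrative only.

import random

def rounds_m_parts(N, M, rng):
    # Randomized List strategy with M file parts; node 0 is the server.
    parts = [set(range(M))] + [set() for _ in range(N)]    # parts held by each node
    rounds = 0
    while any(len(parts[i]) < M for i in range(1, N + 1)):
        requests = {}                                      # uploader -> requesters
        for peer in range(1, N + 1):
            needed = set(range(M)) - parts[peer]
            if not needed:
                continue
            useful = [v for v in range(N + 1)
                      if v != peer and parts[v] & needed]  # sources with a useful part
            if useful:
                requests.setdefault(rng.choice(useful), []).append(peer)
        transfers = []
        for uploader, requesters in requests.items():
            peer = rng.choice(requesters)                  # uploader picks one requester
            part = rng.choice(sorted(parts[uploader] - parts[peer]))
            transfers.append((peer, part))
        for peer, part in transfers:                       # apply at the end of the round
            parts[peer].add(part)
        rounds += 1
    return rounds

rng = random.Random(0)
for N in (16, 128):
    for M in (1, 4, 10):
        mean = sum(rounds_m_parts(N, M, rng) for _ in range(10)) / 10
        print(N, M, round(mean / M, 2))   # makespan in time units (each round lasts 1/M)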
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each N = 2, 4, . . . , 2^15. For each value of M = 1-5, 8, 10, 15, 20, 50 we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but to compare the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression 0.7856 + 1.1520 × log_2 N for M = 1 does not quite agree with 1.1392 + 1.1021 × log_2 N found in (47). It can be checked, by repeating the analysis there for N = 2, 4, . . . , 2^15, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to 2^25.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M )⌊log 2 N ⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller in Section 3, confirming our performance bounds are useful in practice. This is confirmed also by initial results of current work on the performance evaluation of the Bullet' system [20].
The program code for simulations as well as the computations and the diagnostic plots used in this section are available on request and will be made available via the Internet.
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on and a performance benchmark for P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound confirming their practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. It is current work to compare our bounds to the makespan obtained by Bullet' [20]. Initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used practically everywhere except for the Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP) for real-time applications. For further details see [35]. Still, with an overhead cost it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proof of Lemma 1 and Lemma 2 we have used fair sharing and continuity assumptions. It would be of interest to investigate whether one of them or both can be relaxed.
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort, or users who might want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when up- and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate C_i and simultaneously download at rate D_i.
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time t_ij. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if C_j is at least N+1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different so that greedy algorithms, induction on nodes and Dynamic Programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's (N+1)^(N−1) labelled trees [6] together with the obvious O(N) algorithm for the optimal scheduling given a tree yield an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
In this paper, we provide the scheduling background, proofs and discussion of the results in our extended abstracts @cite_16 and @cite_34 . It is essentially Chapter 2 of @cite_27 , but we have added Theorem and the part on theoretical bounds in Section . In @cite_29 the authors also consider problems concerned with the service capacity of P2P networks; however, they only give a heuristic argument for the makespan with equal upload capacities when @math is of the simple form @math . In @cite_26 a fluid model for BitTorrent-like networks is introduced and studied, also looking at the effect of incentive mechanisms to address free-riding. Link utilization and fairness are issues in @cite_13 . In @cite_31 , also motivated by the BitTorrent protocol and file swarming systems in general, the authors consider a probabilistic model of coupon replication systems. Multi-torrent systems are discussed in @cite_17 . There is other related work in @cite_10 .
|
{
"abstract": [
"In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet.",
"This paper presents an analytic framework to evaluate the performance of peer to peer (P2P) networks. Using the time to download or replicate an arbitrary file as the metric, we present a model which accurately captures the impact of various network and peer level characteristics on the performance of a P2P network. We propose a queueing model which evaluates the delays in the routers using a single class open queueing network and the peers as M G 1 K processor sharing queues. The framework takes into account the underlying physical network topology and arbitrary file sizes, the search time, load distribution at peers and number of concurrent downloads allowed by a peer. The model has been validated using extensive simulations with campus level, power law AS level and ISP level topologies. The paper also describes the impact of various parameters associated with the network and peers including external traffic rates, service variability, file popularity etc. on the download times. We also show that in scenarios with multi-part downloads from different peers, a rate proportional allocation strategy minimizes the download times.",
"",
"",
"",
"Motivated by the study of peer-to-peer file swarming systems a la BitTorrent, we introduce a probabilistic model of coupon replication systems. These systems consist of users, aiming to complete a collection of distinct coupons. Users are characterised by their current collection of coupons, and leave the system once they complete their coupon collection. The system evolution is then specified by describing how users of distinct types meet, and which coupons get replicated upon such encounters.For open systems, with exogenous user arrivals, we derive necessary and sufficient stability conditions in a layered scenario, where encounters are between users holding the same number of coupons. We also consider a system where encounters are between users chosen uniformly at random from the whole population. We show that performance, captured by sojourn time, is asymptotically optimal in both systems as the number of coupon types becomes large.We also consider closed systems with no exogenous user arrivals. In a special scenario where users have only one missing coupon, we evaluate the size of the population ultimately remaining in the system, as the initial number of users, N, goes to infinity. We show that this decreases geometrically with the number of coupons, K. In particular, when the ratio K log(N) is above a critical threshold, we prove that this number of left-overs is of order log(log(N)).These results suggest that performance of file swarming systems does not depend critically on either altruistic user behavior, or on load balancing strategies such as rarest first.",
"",
"In this paper, we present a simulation-based study of BitTorrent. Our results confirm that BitTorrent performs near-optimally in terms of uplink bandwidth utilization and download time, except under certain extreme conditions. On fairness, however, our work shows that low bandwidth peers systematically download more than they upload to the network when high bandwidth peers are present. We find that the rate-based tit-for-tat policy is not effective in preventing unfairness. We show how simple changes to the tracker and a stricter, block-based tit-for-tat policy, greatly improves fairness, while maintaining high utilization.",
"Existing studies on BitTorrent systems are single-torrent based, while more than 85 of all peers participate in multiple torrents according to our trace analysis. In addition, these studies are not sufficiently insightful and accurate even for single-torrent models, due to some unrealistic assumptions. Our analysis of representative Bit-Torrent traffic provides several new findings regarding the limitations of BitTorrent systems: (1) Due to the exponentially decreasing peer arrival rate in reality, service availability in such systems becomes poor quickly, after which it is difficult for the file to be located and downloaded. (2) Client performance in the BitTorrent-like systems is unstable, and fluctuates widely with the peer population. (3) Existing systems could provide unfair services to peers, where peers with high downloading speed tend to download more and upload less. In this paper, we study these limitations on torrent evolution in realistic environments. Motivated by the analysis and modeling results, we further build a graph based multi-torrent model to study inter-torrent collaboration. Our model quantitatively provides strong motivation for inter-torrent collaboration instead of directly stimulating seeds to stay longer. We also discuss a system design to show the feasibility of multi-torrent collaboration."
],
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_29",
"@cite_34",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2166245380",
"2137722920",
"",
"",
"",
"2010859647",
"",
"1975941674",
"2111346848"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is 2M − 1 + ⌊log₂(N+1)⌋ for even N, and 2M + ⌊log₂(N+1)⌋ − ⌊(M − 1 + 2^⌊log₂(N+1)⌋) / ((N+1)/2)⌋ for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N , which takes M + ⌊log 2 N ⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log 2 N ⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log 2 N ⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/M C S seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is the order of several thousands and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t s and t f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t f − 1/C i M and t f . This consumes some amount of upload capacity that had been devoted to jobs in K between t f − 1/C i M and t f . However, it releases an exactly equal amount of upload capacity between times t s and t f − 1/C i M which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than t_s. Thus, we repeatedly apply this argument until the number of jobs in J that start at time t_s becomes 0, and then move along to the jobs in J that are now amongst the first in J to start, at some time t′_s > t_s. Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the ⌊log₂ N⌋ scaling is due to the P2P approach. This compares favourably with the linear scaling in N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive model, the file can be fully disseminated in M + ⌊log₂ N⌋ rounds and no fewer; with each round taking time 1/M at unit capacity, the minimal makespan is therefore
T* = 1 + ⌊log₂ N⌋/M.  (1)
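Before turning to the proof, a quick numerical illustration of this scaling (our own sketch, not part of the original paper): the helper below evaluates the bound (1) for unit capacities and compares it with the makespan N of the traditional client/server approach in which the server does all N uploads itself.

# Illustrative helper (ours, not from the paper): the centralized bound (1) for unit
# capacities versus the client/server makespan of N, where the server does every upload.
from math import floor, log2

def p2p_bound(N: int, M: int) -> float:
    """Minimal makespan 1 + floor(log2 N)/M, i.e. M + floor(log2 N) rounds of length 1/M."""
    return 1 + floor(log2(N)) / M

if __name__ == "__main__":
    for N in (3, 1_000, 100_000):
        for M in (1, 10, 100):
            print(f"N={N:>6}  M={M:>3}  P2P bound={p2p_bound(N, M):7.3f}  client/server={N}")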
Proof. Suppose that N = 2^n − 1 + x, for x = 1, . . . , 2^n. So n = ⌊log₂ N⌋. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round; each column to a peer. The entries denote the file part that the peer downloads that round. The entries marked S indicate downloads from the server; unmarked entries indicate downloads from a peer who has the corresponding part.
round 1:  1S   ·    ·
round 2:  ·    2S   1
round 3:  2    1    2S
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of 2^(n−1) peers have file part 1, a set of 2^(n−1) − 1 peers have file part 2, and a set of x peers have no file part (those denoted by * · · · *). Let us call these three sets A_1, A_2 and A_0, respectively.
(Diagram: the downloads in rounds 1 to n of this schedule; after round n the peers split into the sets A_1, A_2 and A_0 holding part 1, part 2 and no part, respectively, with the x peers of A_0 shown as * · · · *.)
In round n + 1 we let peers in A_1 upload part 1 to 2^(n−1) − ⌊x/2⌋ peers in A_2 and to ⌊x/2⌋ peers in A_0 (if x = 1, to 2^(n−1) − 1 peers in A_2 and to 1 peer in A_0). Peers in A_2 upload part 2 to 2^(n−1) − ⌈x/2⌉ peers in A_1 and to another ⌈x/2⌉ − 1 peers in A_0. The server uploads part 2 to a member of A_0 (if x = 1, to a member of A_1). Thus, at the end of this round 2^n − x peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination. Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds j = 2, . . . , min{n, M − 1}, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
(Diagram: the first n rounds in the case M ≤ n, showing the part downloaded by each peer in each round; the x peers still holding no part are shown as * · · · *.)
When round n ends, 2^n − 1 peers have one file part and x peers have no file part. The number of peers having file part i is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2).

Table 1: Numbers of copies of the file parts at the ends of rounds n, n+1, . . . , n+M−1.
Part | n | n+1 | n+2 | n+3 | · · · | n+M−1
1 | 2^(n−1) | 2^n | N | N | · · · | N
2 | 2^(n−2) | 2^(n−1) | 2^n | N | · · · | N
3 | 2^(n−3) | 2^(n−2) | 2^(n−1) | 2^n | · · · | N
4 | 2^(n−4) | 2^(n−3) | 2^(n−2) | 2^(n−1) | · · · | N
... | ... | ... | ... | ... | · · · | ...
M−2 | 2^(n−M+2) | 2^(n−M+3) | 2^(n−M+4) | 2^(n−M+5) | · · · | N
M−1 | 2^(n−M+1) | 2^(n−M+2) | 2^(n−M+3) | 2^(n−M+4) | · · · | 2^n
M | 2^(n−M+1) − 1 | 2^(n−M+2) − 1 | 2^(n−M+3) − 1 | 2^(n−M+4) − 1 | · · · | 2^n − 1

Table 2: Partition of the peers into sets at the start of round n + 2.
Set | Peers in the set have | Number of peers in set
B_12 | parts 1 and 2 | 2^(n−1) − ⌊x/2⌋
B_1p | part 1 and a part other than 1 or 2 | 2^(n−1) − ⌈x/2⌉
B_1 | just part 1 | x
B_2 | just part 2 | ⌊x/2⌋
B_p | just a part other than 1 or 2 | ⌈x/2⌉ − 1

Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles: as many different peers as possible obtain file parts early on, so that they can start uploading themselves and the maximal possible upload capacity is used; moreover, there is a certain balance in the upload of the different file parts, so that no part gets circulated too late.
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each k = n + 2, . . . , n + M − 1, only x − 1 of the x peers (in set B_1) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x fewer uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using 2^n − x fewer uploads than we might in this round. Since 2^n − x ≤ N + M − 2x, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
(N + M − 2x) − (2^n − x) = M − 1,
so, in terms of the total number of uploads, such a scheduling could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
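The accounting above is elementary but easy to get wrong, so here is a throwaway symbolic check of this identity (ours, assuming SymPy is available), using the substitution N = 2^n − 1 + x from the proof of Theorem 1.

# Symbolic check of (N + M - 2x) - (2^n - x) = M - 1 with N = 2^n - 1 + x.
import sympy as sp

M, n, x = sp.symbols('M n x', positive=True)
N = 2**n - 1 + x
assert sp.simplify((N + M - 2*x) - (2**n - x) - (M - 1)) == 0
print("identity holds")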
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the 2^n uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and N = 13 = 2^3 + 6 − 1, so M + ⌊log₂ N⌋ = 7.
If we follow the same schedule as in Theorem 1, we reach after round 3,
1 2 1 3 1 2 1 · · · · · ·
that is, four peers hold part 1, two hold part 2, one holds part 3, and six peers hold no part yet.
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange so that once this is done and uploads are made for round 5, the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When x = 2^n our schedule can be realized using matchings between the 2^n peers holding the part that is to be completed next and the server together with the 2^n − 1 peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs as N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem, and applying Lemmas 1 and 2, the optimal solution to the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model with all upload capacities equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraint that no peer downloads more than a single file part at a time. Thus, we also have the following result.
Theorem 3 In the uplink-sharing model with all upload capacities equal to 1, additional download capacity constraints do not increase the minimal makespan (1), provided every download capacity is at least 1.
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we need the following Lemma. Essentially, it shows that time can be discretized suitably. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, . . . . Identify the server as peer 0. Let x_ijk(t) be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let p_ik(t) denote the proportion of file part k that peer i has downloaded by time t. Our problem then is to find the minimal T such that the optimal value of the following MILP is M N. Since this T is certainly greater than 1/C_S and less than N/C_S, we can search for its value by a simple bisection search, solving this LP for various T:
maximize  Σ_{i,k} p_ik(T)  (2)
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints
x_ijk(t) ∈ {0, 1} for all i, j, k, t  (3)
p_ik(t) ∈ [0, 1] for all i, k, t  (4)
Link constraints between variables
p_ik(t) = Mτ Σ_{t′<t} Σ_{j=0}^{N} x_ijk(t′) C_j for all i, k, t  (5)
Essential constraints
x_ijk(t) − ξ_jk(t) ≤ 0 for all i, j, k, t  (Source availability constraint)  (6)
Σ_{i,k} x_ijk(t) ≤ 1 for all j, t  (Connection constraint)  (7)
x_ijk(t) − ξ_ik(t+1) − x_ijk(t+1) ≤ 0 for all i, j, k, t  (Continuity constraint)  (8)
x_ijk(t) + ξ_ik(t) ≤ 1 for all i, j, k, t  (Stopping constraint)  (9)
Σ_j x_ijk(t) ≤ 1 for all i, k, t  (Exclusivity constraint)  (10)
Initial conditions
p_0k(0) = 1 for all k  (11)
p_ik(0) = 0 for all i, k  (12)
Constraints (6), (8) and (9) have been linearized. Background can be found in [34]. For this, we used the auxiliary variable ξ_ik(t) = 1{p_ik(t) = 1}. This definition can be expressed through the following linear constraints.
Linearization constraints
ξ_ik(t) ∈ {0, 1} for all i, k, t  (13)
p_ik(t) − ξ_ik(t) ≥ 0 and p_ik(t) − ξ_ik(t) < 1 for all i, k, t  (14)
It can be checked that, together with (6), (8) and (9), this indeed gives
x_ijk(t) = 1 and p_ik(t+1) < 1 ⟹ x_ijk(t+1) = 1 for all i, j, k, t  (15)
p_ik(t) = 1 ⟹ x_ijk(t) = 0 for all i, j, k, t  (16)
p_jk(t) < 1 ⟹ x_ijk(t) = 0 for all i, j, k, t  (17)
that is, continuity, stopping and source availability constraints respectively.
Implementation of the MILP
MILPs are well understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems, and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. However, since the numbers of variables and constraints in the LP grow rapidly with N and M, this approach is not practical for large N and M.
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ, then the job end and start times are not guaranteed to lie at integer multiples of τ. However, if we imagine that each job takes until the end of a τ-length interval to finish (rather than finishing before the end), then we overestimate the time that each job takes by at most τ. Since there are NM jobs in total, we overestimate the total time taken by at most NMτ. Thus, the approximation gives us an upper bound on the time taken which is at most NMτ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M.
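To illustrate how such a search can be set up in practice, the following sketch (ours, not the authors' implementation) uses the PuLP modelling library for the special case of equal unit capacities, where τ = 1/M and every upload occupies exactly one interval, so that the continuity, stopping and linearization constraints of Section 4.1 become vacuous and exclusivity is subsumed by requiring each part to be downloaded at most once. The integer horizon R plays the role of T/τ, and the bisection over T reduces to finding the smallest feasible R.

# Hypothetical sketch: the MILP of Section 4.1 specialised to equal unit capacities.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD, value

def can_finish(N, M, R):
    """True if all N peers can obtain all M parts within R intervals of length 1/M."""
    peers, nodes, parts, rounds = range(1, N + 1), range(0, N + 1), range(1, M + 1), range(R)
    prob = LpProblem("dissemination", LpMaximize)
    x = {(i, j, k, t): LpVariable(f"x_{i}_{j}_{k}_{t}", cat=LpBinary)
         for i in peers for j in nodes if j != i for k in parts for t in rounds}

    def has(j, k, t):            # does node j hold part k at the start of interval t?
        if j == 0:
            return 1             # the server holds every part from the outset
        return lpSum(x[j, jj, k, tt] for jj in nodes if jj != j for tt in range(t))

    # each peer downloads each part at most once
    for i in peers:
        for k in parts:
            prob += lpSum(x[i, j, k, t] for j in nodes if j != i for t in rounds) <= 1
    # source availability: a part can only be uploaded by a node that downloaded it earlier
    for (i, j, k, t) in x:
        if j != 0:
            prob += x[i, j, k, t] <= has(j, k, t)
    # connection constraint: every node carries out at most one upload per interval
    for j in nodes:
        for t in rounds:
            prob += lpSum(x[i, j, k, t] for i in peers if i != j for k in parts) <= 1
    # objective: total number of completed part-downloads
    prob += lpSum(x.values())
    prob.solve(PULP_CBC_CMD(msg=0))
    return value(prob.objective) >= N * M - 1e-6

print(can_finish(3, 2, 2), can_finish(3, 2, 3))   # expect: False True

For N = 3 and M = 2 the smallest feasible horizon found this way is 3, matching M + ⌊log₂ N⌋ from Theorem 1.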
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C_1 is small compared to C_S, this is the optimal strategy.
Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less.
Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer.
Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C_1 is large compared to C_S, this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: minimal makespan in each case (N = M = 2).
Case | Makespan
A | 2/C_S
B | 1/(2C_S) + 1/(2C_1) + max{1/(2C_S), 1/(2C_1)}
C | 1/(2C_S) + max{1/C_S, 1/(2C_1)}
D | 1/C_S + 1/(2C_1)

The optimal strategy arises from A, C or D as C_1/C_S lies in the intervals [0, 1/3], [1/3, 1] or [1, ∞) respectively. In [1, ∞), B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
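As a small companion to Table 3 (our illustration; the case labels follow the enumeration above), the snippet below evaluates the four makespans for a given pair (C_S, C_1) and reports the cheapest case; note that B and D coincide once C_1 is at least C_S.

# Evaluate the case A-D makespans of Table 3 (N = M = 2) for given capacities.
def case_makespans(CS: float, C1: float) -> dict:
    return {
        "A": 2 / CS,
        "B": 1 / (2 * CS) + 1 / (2 * C1) + max(1 / (2 * CS), 1 / (2 * C1)),
        "C": 1 / (2 * CS) + max(1 / CS, 1 / (2 * C1)),
        "D": 1 / CS + 1 / (2 * C1),
    }

if __name__ == "__main__":
    for ratio in (0.2, 0.5, 2.0):            # C1/CS in the three regimes of Figure 1
        t = case_makespans(CS=1.0, C1=ratio)
        best = min(t, key=t.get)             # for C1 >= CS, B and D tie
        print(f"C1/CS={ratio:4.1f}  " + "  ".join(f"{c}={v:.3f}" for c, v in t.items())
              + f"  -> optimal: {best}")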
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: C_1/(N−1) ≤ C_S/N. Then (1−α)N/C_S = α/C_1 ⟹ α = N C_1/(C_S + N C_1) ⟹ T = N/(C_S + N C_1).  (18)
Case 2: C_1/(N−1) ≥ C_S/N. Then (1−α)N/C_S = αN/((N−1)C_S) ⟹ α = (N−1)/N ⟹ T = 1/C_S.  (19)
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
T* = max{1/C_S, N/(C_S + N C_1)}.  (20)
Figure 2 shows the minimal makespan when the file is split into 1, 2 and infinitely many file parts for N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
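A two-line check of (20), added here for illustration:

# Fluid-limit makespan (20): one server of capacity CS, N peers of capacity C1 each.
def fluid_makespan(N: int, CS: float, C1: float) -> float:
    return max(1 / CS, N / (CS + N * C1))

print(fluid_makespan(2, 1.0, 1.0))      # 1.0: the server bound 1/CS binds (Case 2)
print(fluid_makespan(10, 1.0, 0.05))    # ~6.67: the aggregate bound N/(CS + N*C1) binds (Case 1)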
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let F = Σ_{i=1}^{N} F_i and C = Σ_{i=1}^{N} C_i.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
T* = max{F_1/C_1, F_2/C_2, . . . , F_N/C_N, (N−1)F/C}  (21)
and this can be achieved with a two-hop strategy, i.e., one in which users i's file is uploaded to user j, either directly from user i, or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right hand side of (21) are lower bounds on the makespan. Each user has to upload his file at least to one user, which takes time F i /C i . Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least T * , and it remains to be shown that a makespan of T * can be achieved. There are two cases to consider.
Case 1: (N − 1)F/C ≥ max i F i /C i for all i.
In this case, T* = (N−1)F/C. Let us consider the 2-hop strategy in which each user uploads a fraction α_ii of its file F_i directly to all N−1 peers, simultaneously and at equal rates. Moreover, he uploads a fraction α_ij to peer j, who in turn then uploads it to the remaining N−2 peers, again simultaneously and at equal rates. Note that Σ_{j=1}^{N} α_ij = 1. Explicitly constructing a suitable set α_ij, we thus obtain the problem
min T  (22)
subject to, for all i,
(1/C_i) [ α_ii F_i (N−1) + Σ_{k≠i} α_ik F_i + Σ_{k≠i} α_ki F_k (N−2) ] ≤ T.  (23)
We minimize T by choosing the α_ij in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j, we obtain
C [ α_ii F_i (N−2) + F_i + Σ_{k≠i} α_ki F_k (N−2) ] = C_i [ (N−2) Σ_j α_jj F_j + F + (N−2)(F − Σ_j α_jj F_j) ] = (N−1) C_i F.  (24)
Thus,
α_ii F_i (N−2) + F_i + Σ_{k≠i} α_ki F_k (N−2) = (N−1)(C_i/C) F.  (25)
Note that there is a lot of freedom in the choice of the α, so let us specify that we require α_ki to be constant in k for k ≠ i, that is, α_ki = α*_i for k ≠ i. This means that if i has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user i splits excess capacity equally between its peers. Thus,
α_ii F_i (N−2) + F_i + α*_i (N−2)(F − F_i) = (N−1)(C_i/C) F.  (26)
Still, we have twice as many variables as constraints. Let us also specify that α * i = α ii for all i. Similarly as above, this says that the proportion of its own file F i that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
α*_i = [(N−1)(C_i/C)F − F_i] / ((N−2)F) = (N−1)C_i/((N−2)C) − F_i/((N−2)F),  (27)
where Σ_i α*_i = 1 and α*_i ≥ 0, because in Case 1 F_i/C_i ≤ (N−1)F/C. With these α_ij, we obtain the time for i to complete its upload, and hence the time for everyone to complete their upload, as
T = (1/C_i) [ α*_i F_i (N−2) + F_i + Σ_{k≠i} α*_i F_k (N−2) ]
  = (N−1)F_i/C − F_i²/(C_i F) + F_i/C_i + (N−1)(F − F_i)/C − F_i(F − F_i)/(C_i F)
  = (N−1)F/C.  (28)
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, than this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
C′_1 = C_1,  (29)
C′_i ≤ C_i for i ≠ 1,  (30)
(N−1)F/C′ = F_1/C′_1 = F_1/C_1, and  (31)
F_i/C′_i ≤ F_1/C_1.  (32)
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
C′_i = (N−1)(C_1/F_1) γ_i F_i  (33)
with constants γ i ≥ 0 such that
Σ_i γ_i F_i = F.  (34)
Then (N−1)F/C′ = F_1/C_1, that is, (31) holds. Moreover, choosing
γ_i ≤ (1/(N−1)) (C_i/F_i)(F_1/C_1)  (35)
ensures C ′ i ≤ C i , i.e. property (30) and choosing
γ_i ≥ 1/(N−1)  (36)
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all γ_i equal to the lower bound (36) gives Σ_i γ_i F_i = F/(N−1), which is too small to satisfy (34). Putting all of them equal to the upper bound (35) gives Σ_i γ_i F_i = F_1 C/((N−1)C_1), which is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
γ_i = (1/(N−1)) [ δ (C_i/F_i)(F_1/C_1) + (1 − δ) ]  (37)
such that
δ (C/(N−1))(F_1/C_1) + (1 − δ) F/(N−1) = F,  (38)
that is,
δ = (N−2)F C_1 / (F_1 C − F C_1).  (39)
Substituting back in we obtain
γ_i = (1/(N−1)) · [ (N−2)F F_1 C_i + F_i F_1 C − (N−1)F F_i C_1 ] / [ (F_1 C − F C_1) F_i ]  (40)
and thus
C′_i = (C_1/F_1) · [ (N−2)F F_1 C_i + F_i F_1 C − (N−1)F F_i C_1 ] / (F_1 C − F C_1).  (41)
By construction, these C′_i satisfy properties (29)-(32) and hence, by the results in Case 1, T′ = F_1/C_1. Hence the minimal makespan in the original problem is T = F_1/C_1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
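To make the Case 1 construction concrete, the following numerical check (ours; it assumes N ≥ 3 and that the aggregate bound dominates) computes the weights α*_i of (27), assembles the symmetric two-hop allocation with α_ii = α_ki = α*_i, and verifies that every user's upload time (23) equals (N − 1)F/C, i.e. the value of (21) in this case.

# Numerical check of the Case 1 two-hop construction in the proof of Theorem 4 (N >= 3).
def two_hop_times(F, C):
    N = len(F)
    Ftot, Ctot = sum(F), sum(C)
    T_star = max(max(f / c for f, c in zip(F, C)), (N - 1) * Ftot / Ctot)       # (21)
    # weights (27); nonnegative exactly when the aggregate term dominates (Case 1)
    a = [(N - 1) * C[i] / ((N - 2) * Ctot) - F[i] / ((N - 2) * Ftot) for i in range(N)]
    assert all(w >= -1e-12 for w in a) and abs(sum(a) - 1.0) < 1e-9
    times = []
    for i in range(N):
        load = a[i] * F[i] * (N - 1)               # own file uploaded directly to all N-1 peers
        load += (1.0 - a[i]) * F[i]                # own file handed over once to each intermediary
        load += a[i] * (N - 2) * (Ftot - F[i])     # relaying the files of the other users
        times.append(load / C[i])                  # left-hand side of (23)
    return T_star, times

T_star, times = two_hop_times(F=[3.0, 1.0, 2.0, 2.0], C=[2.0, 1.5, 1.0, 1.5])
print(T_star, times)   # every upload finishes exactly at T_star = (N-1)F/C = 4.0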
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions
Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M, and the address of the server. However, a peer might not know N, the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
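For concreteness, a compact Monte Carlo version of this strategy for M = 1 (our own sketch; the authors' simulation set-up in the following subsections is the reference, and tie-breaking details there may differ) counts the rounds until all N peers hold the file under either information scenario.

# Monte Carlo estimate of the makespan (in rounds) of the naive randomized strategy, M = 1.
import random

def rounds_to_disseminate(N: int, scenario: str = "List", rng=random) -> int:
    have = [True] + [False] * N              # index 0 is the server
    rounds = 0
    while not all(have):
        rounds += 1
        requests = {}                        # target -> list of requesting peers
        for peer in range(1, N + 1):
            if have[peer]:
                continue
            if scenario == "List":           # the addresses of current holders are known
                target = rng.choice([u for u in range(N + 1) if have[u]])
            else:                            # NoList: any node other than oneself
                target = rng.choice([u for u in range(N + 1) if u != peer])
            requests.setdefault(target, []).append(peer)
        had = list(have)                     # uploads only by nodes holding the file at the round's start
        for target, peers in requests.items():
            if had[target]:                  # a request to a node without the file fails
                have[rng.choice(peers)] = True
    return rounds

runs = 200
est = sum(rounds_to_disseminate(1024, "List") for _ in range(runs)) / runs
print("List, N = 1024: mean rounds ~", est)  # compare with the fit (47), roughly 1.14 + 1.10*log2(N)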
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n 1 peers (including the server) have the file and n 0 peers do not, with n 0 +n 1 = N +1. (The base case is n 1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that n_1 ≤ n_0. Let i be the server or a peer who has the file and let I_i be an indicator random variable that is 0 or 1 as i does or does not upload it. Let Y = Σ_i I_i, where the sum is taken over all n_1 peers who have the file. Thus n_1 − Y is the number of uploads that take place. Then
E I_i = (1 − 1/N)^{n_0} ≤ (1 − 1/(2n_0))^{n_0} ≤ 1/√e.  (42)
Now since E(Σ_i I_i) = Σ_i E I_i, we have EY ≤ n_1/√e. Thus, by the Markov inequality (which states that for a nonnegative random variable Y and any k, not necessarily an integer, P(Y ≥ k) ≤ (1/k)EY), taking k = (2/3)n_1 gives
P(n_1 − Y ≡ number of uploads ≤ (1/3)n_1) = P(Y ≥ (2/3)n_1) ≤ (n_1/√e) / ((2/3)n_1) = 3/(2√e) < 1.  (43)
Thus the expected number of steps required for the number of peers who have the file to increase from n_1 to at least n_1 + (1/3)n_1 = (4/3)n_1 is bounded by the mean of a geometric random variable, µ = 1/(1 − 3/(2√e)). This implies that we will reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n_1 > n_0. Let j be a peer who does not have the file and let J_j be an indicator random variable that is 0 or 1 as peer j does or does not succeed in downloading it. Let Z = Σ_j J_j, where the sum is taken over all n_0 peers who do not have the file. Suppose X is the number of the other n_0 − 1 peers that try to download from the same place as does peer j. Then
P(J_j = 0) = E[(n_1/N) · 1/(1+X)] ≥ E[(n_1/N)(1 − X)] = (n_1/N)(1 − (n_0−1)/N) = (n_1/N)(1 − (N−n_1)/N) = n_1²/N² ≥ 1/4.  (44)
Hence EZ ≤ (3/4)n 0 and so, again using the Markov inequality,
P(n_0 − Z ≡ number of downloads ≤ (1/8)n_0) = P(Z ≥ (7/8)n_0) ≤ ((3/4)n_0) / ((7/8)n_0) = 6/7.  (45)
It follows that the number of peers who do not yet have the file decreases from n_0 to no more than (7/8)n_0 in an expected number of steps no more than µ′ = 1/(1 − 6/7) = 7. Thus the number of steps needed for the number of peers without the file to decrease from n_0 to 0 is O(log n_0) = O(log N). In fact, this is a weak upper bound. By more complicated arguments we can show that if n_0 = aN, where a ≤ 1/2, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N). For a > 1/2 the expected time remains Θ(log N).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, N = 2, 4, . . . , 2^25. We found that the achieved expected makespan appears to grow as a + b × log₂ N. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
y_ij = α + β x_i + ε_ij,  (46)
where y_ij is the makespan for x_i = log₂ 2^i, obtained in run j, j = 1, . . . , 1000. Indeed, the model fits the data very well in both scenarios. We obtain the following results, which enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
1.1392 + 1.1021 × log₂ N.  (47)
For NoList, there is more variation in the data than for List but, again, the linear regression gives a good fit, with a Multiple R-squared value of 0.9864 and significant p- and t-values. The makespan increases as
1.7561 + 1.5755 × log₂ N.  (48)
As expected, the additional information for List leads to a significantly smaller makespan when compared to NoList; in particular, the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log₂ N⌋ of Section 3: it is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases, and very well when state information is available, suggesting that our bounds are useful in practice.
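The fit itself is straightforward to reproduce; assuming a simulator such as the rounds_to_disseminate sketch given earlier, a least-squares fit of the model (46) can be obtained along the following lines (illustrative only; the coefficients will match (47) and (48) only if the same ranges of N and numbers of runs are used).

# Fit makespan ~ alpha + beta * log2(N) to simulated data, cf. the linear model (46).
import numpy as np

def fit_log_model(simulate, Ns, runs=100):
    xs, ys = [], []
    for N in Ns:
        for _ in range(runs):
            xs.append(np.log2(N))
            ys.append(simulate(N))
    beta, alpha = np.polyfit(xs, ys, 1)      # slope, then intercept
    return alpha, beta

# Example (hypothetical):
# alpha, beta = fit_log_model(lambda N: rounds_to_disseminate(N, "List"),
#                             Ns=[2**k for k in range(1, 13)])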
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov Chain on the state space $\{0, 1, 2, \ldots, N\}$, where state $i$ corresponds to $i$ of the $N$ peers having the file. We can calculate the transition probabilities $p_{ij}$. In the NoList case, for example, following the Occupancy Distribution (e.g., [18]), we obtain
$$p_{i,i+m} = \sum_{j=i-m}^{i} (-1)^{j-i+m} \frac{i!}{(i-j)!\,(i-m)!\,(j-i+m)!} \left(\frac{N-1-j}{N-1}\right)^{N-i}. \qquad (49)$$
Hence we can successively compute the expected hitting times $k(i)$ of state $N$ starting from state $i$ via
$$k(i) = \frac{1 + \sum_{j>i} k(j)\,p_{ij}}{1 - p_{ii}}. \qquad (50)$$
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov Chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeated efficiency improvements. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference from the simulated values is small, without any apparent trend. It can also be checked, by computing the standard deviation, that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is N = 128 for NoList, where it is just outside by approximately 0.0016.
The computations thus confirm the accuracy of our simulation results. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
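For completeness, here is a small Python sketch of the exact computation for NoList. It is ours, not the authors' program: the transition probabilities are derived directly from the NoList round dynamics by inclusion–exclusion rather than transcribed from (49), and Python's fractions module stands in for arbitrary-precision arithmetic in both the transition probabilities and the hitting-time recursion (50).

```python
from fractions import Fraction
from math import comb

def nolist_transition(N, i, m):
    """P(state i -> state i + m) under NoList: the N - i peers without the
    file each request one of their N possible targets (server plus N - 1
    other peers) uniformly at random, and m is the number of distinct
    holders that receive at least one request (inclusion-exclusion)."""
    holders, balls = i + 1, N - i          # i peers plus the server hold the file
    total = Fraction(0)
    for l in range(m + 1):
        allowed = N - (holders - m) - l    # targets outside a forbidden holder set
        total += (-1) ** l * comb(m, l) * Fraction(allowed, N) ** balls
    return comb(holders, m) * total

def nolist_mean_makespan(N):
    """Expected number of rounds to go from state 0 to state N, cf. (50)."""
    k = [Fraction(0)] * (N + 1)            # k[N] = 0
    for i in range(N - 1, -1, -1):
        acc = Fraction(1)
        for j in range(i + 1, N + 1):
            acc += k[j] * nolist_transition(N, i, j - i)
        k[i] = acc / (1 - nolist_transition(N, i, 0))
    return k[0]

for N in (2, 4, 8, 16, 32, 64):
    print(N, float(nolist_mean_makespan(N)))
```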
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
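The following Python sketch (ours, not the authors' simulator) implements one natural reading of this strategy: node 0 is the server, every incomplete peer requests a uniformly chosen node holding at least one part it still needs, each contacted node serves one requester chosen at random, and the part sent is chosen at random among those the requester still needs.

```python
import random

def list_makespan_parts(N, M, rng):
    """Rounds needed to spread M parts to N peers under the randomized List
    strategy; node 0 is the server and initially holds all M parts."""
    have = [set(range(M))] + [set() for _ in range(N)]
    rounds = 0
    while any(len(have[i]) < M for i in range(1, N + 1)):
        requests = {}                    # chosen uploader -> list of requesters
        for i in range(1, N + 1):
            if len(have[i]) == M:
                continue
            useful = [j for j in range(N + 1) if j != i and have[j] - have[i]]
            if useful:
                requests.setdefault(rng.choice(useful), []).append(i)
        deliveries = []                  # decide all uploads on the round's state
        for j, reqs in requests.items():
            i = rng.choice(reqs)
            deliveries.append((i, rng.choice(sorted(have[j] - have[i]))))
        for i, part in deliveries:
            have[i].add(part)
        rounds += 1
    return rounds

rng = random.Random(0)
for M in (1, 2, 5, 10):
    runs = [list_makespan_parts(128, M, rng) for _ in range(10)]
    print(M, sum(runs) / len(runs) / M)  # makespan in time units (1/M per round)
```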
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each $N = 2, 4, \ldots, 2^{15}$. For each value of $M = 1, \ldots, 5, 8, 10, 15, 20, 50$ we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but in comparing the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression $0.7856 + 1.1520 \times \log_2 N$ for M = 1 does not quite agree with $1.1392 + 1.1021 \times \log_2 N$ found in (47). It can be checked, by repeating the analysis there for $N = 2, 4, \ldots, 2^{15}$, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to $2^{25}$.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M )⌊log 2 N ⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller in Section 3, which suggests that our performance bounds are useful in practice. Initial results of current work on the performance evaluation of the Bullet' system [20] confirm this further.
The program code for the simulations, as well as the computations and the diagnostic plots used in this section, is available on request and will be made available via the Internet.
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on, and a performance benchmark for, P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound, confirming its practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. Comparing our bounds to the makespan obtained by Bullet' [20] is current work; initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used for practically all transfers, the main exceptions being the Internet Control Message Protocol (ICMP) and the User Datagram Protocol (UDP) used for real-time applications; for further details see [35]. Given such an overhead cost, it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proof of Lemma 1 and Lemma 2 we have used the fair sharing and continuity assumptions. It would be of interest to investigate whether one of them, or both, can be relaxed.
[Figure caption (cf. Table 6): the decentralized List scenario (solid) and the idealized centralized scenario (dashed).]
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort, or users who want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when upload and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate $C_i$ and simultaneously download at rate $D_i$.
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time $t_{ij}$. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if $C_j$ is at least N + 1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different so that greedy algorithms, induction on nodes and Dynamic Programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's $(N+1)^{N-1}$ labelled trees [6], together with the obvious O(N) algorithm for the optimal scheduling given a tree, constitute an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
In @cite_4 , the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd @math , which takes @math rounds. For even @math their algorithm is optimal up to an additive term of @math , taking @math rounds.
|
{
"abstract": [
"We consider the problem of broadcasting multiple messages from one processor to many processors in telephone-like communication systems. In such systems, processors communicate in rounds, where in every round, each processor can communicate with exactly one other processor by exchanging messages with it. Finding an optimal solution for this problem was open for over a decade. In this paper, we present an optimal algorithm for this problem when the number of processors is even. For an odd number of processors, we provide an algorithm which is within an additive term of 3 of the optimum. A by-product of our solution is an optimal algorithm for the problem of broadcasting multiple messages for any number of processors in the simultaneous send receive model. In this latter model, in every round, each processor can send a message to one processor and receive a message from another processor."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2054243578"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is $2M - 1 + \lfloor\log_2(N+1)\rfloor$ for even N, and
$$2M + \lfloor\log_2(N+1)\rfloor - \left\lfloor \frac{M - 1 + 2^{\lfloor\log_2(N+1)\rfloor}}{(N+1)/2} \right\rfloor$$
for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N , which takes M + ⌊log 2 N ⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log 2 N ⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log 2 N ⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/M C S seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is the order of several thousands and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t s and t f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t f − 1/C i M and t f . This consumes some amount of upload capacity that had been devoted to jobs in K between t f − 1/C i M and t f . However, it releases an exactly equal amount of upload capacity between times t s and t f − 1/C i M which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than $t_s$. Thus, we repeatedly apply this argument until the number of jobs in J that start at time $t_s$ becomes 0 and then move along to the jobs in J that are now amongst the first in J to start, at some time $t'_s > t_s$. Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the log 2 N -scaling is due to the P2P approach. This compares favourably to the linear scaling of N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive model, with all upload capacities equal to 1, the minimal makespan for disseminating the M file parts to all N peers is
$$T^* = 1 + \frac{\lfloor\log_2 N\rfloor}{M}. \qquad (1)$$
Proof. Suppose that $N = 2^n - 1 + x$, for $x = 1, \ldots, 2^n$. So $n = \lfloor\log_2 N\rfloor$. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round and each column to a peer. The entries denote the file part that the peer downloads in that round. The bold entries indicate downloads from the server; un-bold entries indicate downloads from a peer who has the corresponding part.
[Diagram omitted: the three-round schedule for N = 3, M = 2, showing which of the parts 1 and 2 each peer downloads in each round; bold entries denote downloads from the server.]
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of $2^{n-1}$ peers have file part 1, a set of $2^{n-1} - 1$ peers have file part 2, and a set of x peers have no file part (those denoted by * · · · *). Let us call these three sets $A_1$, $A_2$ and $A_0$, respectively.
[Diagram omitted: the state after round n, with the set $A_1$ of $2^{n-1}$ peers holding part 1, the set $A_2$ of $2^{n-1} - 1$ peers holding part 2, and the set $A_0$ of x peers (denoted * · · · *) holding no part.]
In round n + 1 we let peers in $A_1$ upload part 1 to $2^{n-1} - \lfloor x/2\rfloor$ peers in $A_2$ and to $\lfloor x/2\rfloor$ peers in $A_0$ (if x = 1, to $2^{n-1} - 1$ peers in $A_2$ and to 1 peer in $A_0$). Peers in $A_2$ upload part 2 to $2^{n-1} - \lceil x/2\rceil$ peers in $A_1$ and to another $\lceil x/2\rceil - 1$ peers in $A_0$. The server uploads part 2 to a member of $A_0$ (if x = 1, to a member of $A_1$). Thus, at the end of this round $2^n - x$ peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination. Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds $j = 2, \ldots, \min\{n, M-1\}$, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
[Diagram omitted: the first n rounds for the case M ≤ n; after round n each of the $2^n - 1$ peers that has a part holds exactly one of the parts 1, . . . , M, and the remaining x peers (denoted * · · · *) hold none.]
When round n ends, $2^n - 1$ peers have one file part and x peers have no file part. The number of peers having file part i at the end of round n is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2).

Table 1: Number of peers holding each file part at the ends of rounds n, n+1, ..., n+M−1.

Part | n | n+1 | n+2 | n+3 | ... | n+M−1
1 | 2^{n−1} | 2^n | N | N | ... | N
2 | 2^{n−2} | 2^{n−1} | 2^n | N | ... | N
3 | 2^{n−3} | 2^{n−2} | 2^{n−1} | 2^n | ... | N
4 | 2^{n−4} | 2^{n−3} | 2^{n−2} | 2^{n−1} | ... | N
... | ... | ... | ... | ... | ... | ...
M−2 | 2^{n−M+2} | 2^{n−M+3} | 2^{n−M+4} | 2^{n−M+5} | ... | N
M−1 | 2^{n−M+1} | 2^{n−M+2} | 2^{n−M+3} | 2^{n−M+4} | ... | 2^n
M | 2^{n−M+1} − 1 | 2^{n−M+2} − 1 | 2^{n−M+3} − 1 | 2^{n−M+4} − 1 | ... | 2^n − 1

Table 2: Partition of the peers at the end of round n+1.

Set | Peers in the set have | Number of peers in the set
B_12 | parts 1 and 2 | 2^{n−1} − ⌊x/2⌋
B_1p | part 1 and a part other than 1 or 2 | 2^{n−1} − ⌈x/2⌉
B_1 | just part 1 | x
B_2 | just part 2 | ⌊x/2⌋
B_p | just a part other than 1 or 2 | ⌈x/2⌉ − 1

Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that the peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles. As many different peers as possible obtain file parts early on so that they can start uploading themselves, and the maximal possible upload capacity is used. Moreover, there is a certain balance in the upload of different file parts so that no part gets circulated too late.
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each $k = n+2, \ldots, n+M-1$, only x − 1 of the x peers (in set $B_1$) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x fewer uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using $2^n - x$ fewer uploads than we might in this round. Since $2^n - x \le N + M - 2x$, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
$$(N + M - 2x) - (2^n - x) = M - 1,$$
so, in terms of the total number of uploads, such a schedule could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the $2^n$ uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and $N = 13 = 2^3 + 6 - 1$, so $M + \lfloor\log_2 N\rfloor = 7$.
If we follow the same schedule as in Theorem 1, we reach after round 3,
[Diagram omitted: the state after round 3, in which seven peers hold one part each (parts 1, 2, 1, 3, 1, 2, 1) and the remaining x = 6 peers hold none.]
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange that, once this is done and uploads are made for round 5, the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When $x = 2^n$ our schedule can be realized using matchings between the $2^n$ peers holding the part that is to be completed next and the server together with the $2^n - 1$ peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs as N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem, and with application of Lemmas 1 and 2, the optimal solution to the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model when all upload capacities are equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraints that no peer downloads more than a single file part at a time. Thus, we also have the following result.
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we use the observation that time can be discretized suitably. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, . . . . Identify the server as peer 0. Let $x_{ijk}(t)$ be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let $p_{ik}(t)$ denote the proportion of file part k that peer i has downloaded by time t. Our problem is then to find the minimal T such that the optimal value of the following MILP is MN. Since this T is certainly greater than $1/C_S$ and less than $N/C_S$, we can search for its value by a simple bisection search, solving this LP for various T:
$$\text{maximize} \sum_{i,k} p_{ik}(T) \qquad (2)$$
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints
$$x_{ijk}(t) \in \{0, 1\} \quad \text{for all } i, j, k, t \qquad (3)$$
$$p_{ik}(t) \in [0, 1] \quad \text{for all } i, k, t \qquad (4)$$
Link constraints between variables
$$p_{ik}(t) = M\tau \sum_{t'=0}^{t-\tau} \sum_{j=0}^{N} x_{ijk}(t')\,C_j \quad \text{for all } i, k, t \qquad (5)$$
Essential constraints
$$x_{ijk}(t) - \xi_{jk}(t) \le 0 \quad \text{for all } i, j, k, t \quad \text{(source availability constraint)} \qquad (6)$$
$$\sum_{i,k} x_{ijk}(t) \le 1 \quad \text{for all } j, t \quad \text{(connection constraint)} \qquad (7)$$
$$x_{ijk}(t) - \xi_{ik}(t+1) - x_{ijk}(t+1) \le 0 \quad \text{for all } i, j, k, t \quad \text{(continuity constraint)} \qquad (8)$$
$$x_{ijk}(t) + \xi_{ik}(t) \le 1 \quad \text{for all } i, j, k, t \quad \text{(stopping constraint)} \qquad (9)$$
$$\sum_{j} x_{ijk}(t) \le 1 \quad \text{for all } i, k, t \quad \text{(exclusivity constraint)} \qquad (10)$$
Initial conditions
$$p_{0k}(0) = 1 \quad \text{for all } k \qquad (11)$$
$$p_{ik}(0) = 0 \quad \text{for all } i \ne 0, k \qquad (12)$$
Constraints (8), (9) and (6) have been linearized; background can be found in [34]. For this, we used the auxiliary variable $\xi_{ik}(t) = \mathbf{1}\{p_{ik}(t) = 1\}$. This definition can be expressed through the following linear constraints.
Linearization constraints
$$\xi_{ik}(t) \in \{0, 1\} \quad \text{for all } i, k, t \qquad (13)$$
$$p_{ik}(t) - \xi_{ik}(t) \ge 0 \quad \text{and} \quad p_{ik}(t) - \xi_{ik}(t) < 1 \quad \text{for all } i, k, t \qquad (14)$$
It can be checked that, together with (8), (9) and (6), this indeed gives
$$x_{ijk}(t) = 1 \text{ and } p_{ik}(t+1) < 1 \implies x_{ijk}(t+1) = 1 \quad \text{for all } i, j, k, t \qquad (15)$$
$$p_{ik}(t) = 1 \implies x_{ijk}(t) = 0 \quad \text{for all } i, j, k, t \qquad (16)$$
$$p_{jk}(t) < 1 \implies x_{ijk}(t) = 0 \quad \text{for all } i, j, k, t \qquad (17)$$
that is, the continuity, stopping and source availability constraints, respectively.
Implementation of the MILP
MILPs are well-understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. Although, as the numbers of variables and constraints in the LP grows exponentially in N and M , this approach is not practical for large N and M .
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ , then the job end and start times are not guaranteed to lie at integer multiples of τ . However, if we imagine that each job does take until the end of an τ -length interval to finish (rather than finishing before the end), then we will overestimate the time that each job takes by at most τ . Since there are N M jobs in total, we overestimate the total time taken by at most N M τ . Thus, the approximation gives us an upper bound on the time taken and is at most N M τ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M .
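To make the procedure concrete, here is a Python sketch using the PuLP modelling library (our own choice of tool and of all variable names; the paper does not prescribe an implementation). It builds the MILP (2)–(14) for one candidate horizon T — with the link constraint (5) written incrementally and the strict inequality in (14) relaxed by a small epsilon — and then binary-searches over horizons that are multiples of τ.

```python
import pulp

def total_progress(T, tau, M, C, eps=1e-6):
    """Optimal value of the MILP (2)-(14) for horizon T; C[0] is C_S."""
    N = len(C) - 1
    steps = int(round(T / tau))            # intervals t = 0, ..., steps - 1
    U, K, R = range(N + 1), range(M), range(steps + 1)
    prob = pulp.LpProblem("uplink_sharing", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (U, U, K, range(steps)), cat="Binary")
    p = pulp.LpVariable.dicts("p", (U, K, R), lowBound=0, upBound=1)
    xi = pulp.LpVariable.dicts("xi", (U, K, R), cat="Binary")
    prob += pulp.lpSum(p[i][k][steps] for i in U if i != 0 for k in K)   # (2)
    for k in K:                                                          # (11), (12)
        prob += p[0][k][0] == 1
        for i in U:
            if i != 0:
                prob += p[i][k][0] == 0
    for i in U:
        for k in K:
            for t in R:                                                  # (13), (14)
                prob += p[i][k][t] - xi[i][k][t] >= 0
                prob += p[i][k][t] - xi[i][k][t] <= 1 - eps
            for t in range(steps):                                       # (5), incremental
                prob += p[i][k][t + 1] == p[i][k][t] + M * tau * pulp.lpSum(
                    x[i][j][k][t] * C[j] for j in U)
    for t in range(steps):
        for j in U:                                                      # (7)
            prob += pulp.lpSum(x[i][j][k][t] for i in U for k in K) <= 1
        for i in U:
            for k in K:
                prob += pulp.lpSum(x[i][j][k][t] for j in U) <= 1        # (10)
                for j in U:
                    prob += x[i][j][k][t] <= xi[j][k][t]                 # (6)
                    prob += x[i][j][k][t] + xi[i][k][t] <= 1             # (9)
                    if t + 1 < steps:                                    # (8)
                        prob += (x[i][j][k][t] - xi[i][k][t + 1]
                                 - x[i][j][k][t + 1]) <= 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

def minimal_makespan(M, C, tau):
    """Binary search over horizons T = steps * tau, cf. the bounds 1/C_S, N/C_S."""
    N = len(C) - 1
    lo, hi = max(1, int(1 / (C[0] * tau))), int(round(N / (C[0] * tau))) + 1
    while lo < hi:
        mid = (lo + hi) // 2
        if total_progress(mid * tau, tau, M, C) >= N * M - 1e-6:
            hi = mid
        else:
            lo = mid + 1
    return lo * tau

print(minimal_makespan(M=2, C=[1.0, 1.0, 1.0], tau=0.5))   # expect 1.5, cf. (1)
```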
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C 1 is small compared to C S , this is the optimal strategy. Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less. Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer. Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C 1 is large compared to C S , this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: Minimal makespan for each of the cases A–D when N = M = 2.

Case | Makespan
A | 2/C_S
B | 1/(2C_S) + 1/(2C_1) + max{1/(2C_S), 1/(2C_1)}
C | 1/(2C_S) + max{1/C_S, 1/(2C_1)}
D | 1/C_S + 1/(2C_1)

The optimal strategy arises from A, C or D as C_1/C_S lies in the intervals [0, 1/3], [1/3, 1] or [1, ∞) respectively. In [1, ∞), B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
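A small numerical check of Table 3 (a sketch of ours; the values of CS and C1 are merely illustrative) confirms which case is optimal as the ratio C_1/C_S crosses 1/3 and 1:

```python
def makespans(CS, C1):
    """The four makespans of Table 3 for N = M = 2."""
    return {
        "A": 2 / CS,
        "B": 1 / (2 * CS) + 1 / (2 * C1) + max(1 / (2 * CS), 1 / (2 * C1)),
        "C": 1 / (2 * CS) + max(1 / CS, 1 / (2 * C1)),
        "D": 1 / CS + 1 / (2 * C1),
    }

CS = 1.0
for ratio in (0.2, 1 / 3, 0.5, 1.0, 2.0):          # ratio = C1 / CS
    m = makespans(CS, ratio * CS)
    best = min(m, key=m.get)
    print(f"C1/CS = {ratio:.3f}: best case {best}, T* = {m[best]:.3f}")
```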
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: $C_1/(N-1) \le C_S/N$:
$$\frac{(1-\alpha)N}{C_S} = \frac{\alpha}{C_1} \implies \alpha = \frac{N C_1}{C_S + N C_1} \implies T = \frac{N}{C_S + N C_1}. \qquad (18)$$
Case 2: $C_1/(N-1) \ge C_S/N$:
$$\frac{(1-\alpha)N}{C_S} = \frac{\alpha N}{(N-1)C_S} \implies \alpha = \frac{N-1}{N} \implies T = \frac{1}{C_S}. \qquad (19)$$
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
$$T^* = \max\left\{\frac{1}{C_S}, \frac{N}{C_S + N C_1}\right\}. \qquad (20)$$
Figure 2 shows the minimal makespan when the file is split into 1, 2 and infinitely many file parts when N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
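As a quick illustration of (18)–(20), the following sketch (ours; the capacities used are arbitrary examples) computes the fluid-limit makespan and the corresponding split α:

```python
def fluid_makespan(CS, C1, N):
    """Fluid-limit makespan (20) and the fraction alpha from (18)/(19)."""
    if C1 / (N - 1) <= CS / N:                 # Case 1: peers are the bottleneck
        alpha = N * C1 / (CS + N * C1)
    else:                                      # Case 2: the server is the bottleneck
        alpha = (N - 1) / N
    return max(1 / CS, N / (CS + N * C1)), alpha

for CS, C1 in ((1.0, 0.05), (1.0, 1.0), (1.0, 10.0)):
    T, alpha = fluid_makespan(CS, C1, N=10)
    print(f"CS={CS}, C1={C1}: T* = {T:.3f}, alpha = {alpha:.3f}")
```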
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let $F = \sum_{i=1}^{N} F_i$ and $C = \sum_{i=1}^{N} C_i$.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
$$T^* = \max\left\{\frac{F_1}{C_1}, \frac{F_2}{C_2}, \ldots, \frac{F_N}{C_N}, \frac{(N-1)F}{C}\right\} \qquad (21)$$
and this can be achieved with a two-hop strategy, i.e., one in which user i's file is uploaded to user j either directly from user i, or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right-hand side of (21) is a lower bound on the makespan. Each user has to upload his file to at least one other user, which takes time $F_i/C_i$. Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least $T^*$, and it remains to be shown that a makespan of $T^*$ can be achieved. There are two cases to consider.
Case 1: $(N-1)F/C \ge F_i/C_i$ for all i.
In this case, $T^* = (N-1)F/C$. Let us consider the two-hop strategy in which each user i uploads a fraction $\alpha_{ii}$ of its file $F_i$ directly to all N − 1 peers, simultaneously and at equal rates. Moreover, he uploads a fraction $\alpha_{ij}$ to peer j, who in turn then uploads it to the remaining N − 2 peers, again simultaneously and at equal rates. Note that $\sum_{j=1}^{N} \alpha_{ij} = 1$. Explicitly constructing a suitable set of $\alpha_{ij}$, we thus obtain the problem
$$\text{minimize } T \qquad (22)$$
subject to, for all i,
$$\frac{1}{C_i}\left[\alpha_{ii} F_i (N-1) + \sum_{k \ne i} \alpha_{ik} F_i + \sum_{k \ne i} \alpha_{ki} F_k (N-2)\right] \le T. \qquad (23)$$
We minimize T by choosing the $\alpha_{ij}$ in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j we obtain
$$C\left[\alpha_{ii} F_i (N-2) + F_i + \sum_{k \ne i} \alpha_{ki} F_k (N-2)\right] = C_i\left[(N-2)\sum_j \alpha_{jj} F_j + F + (N-2)\Big(F - \sum_j \alpha_{jj} F_j\Big)\right] = (N-1)\,C_i\,F. \qquad (24)$$
Thus,
$$\alpha_{ii} F_i (N-2) + F_i + \sum_{k \ne i} \alpha_{ki} F_k (N-2) = (N-1)\frac{C_i}{C}F. \qquad (25)$$
Note that there is a lot of freedom in the choice of the $\alpha$, so let us specify that we require $\alpha_{ki}$ to be constant in k for $k \ne i$, that is, $\alpha_{ki} = \alpha^*_i$ for $k \ne i$. This means that if i has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user i splits its excess capacity equally between its peers. Thus,
$$\alpha_{ii} F_i (N-2) + F_i + \alpha^*_i (N-2)(F - F_i) = (N-1)\frac{C_i}{C}F. \qquad (26)$$
Still, we have twice as many variables as constraints. Let us also specify that $\alpha^*_i = \alpha_{ii}$ for all i. As above, this says that the proportion of its own file $F_i$ that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
$$\alpha^*_i = \frac{(N-1)(C_i/C)F - F_i}{(N-2)F} = \frac{(N-1)C_i}{(N-2)C} - \frac{F_i}{(N-2)F}, \qquad (27)$$
where $\sum_i \alpha^*_i = 1$ and $\alpha^*_i \ge 0$, because in Case 1 $F_i/C_i \le (N-1)F/C$. With these $\alpha_{ij}$, we obtain the time for i to complete its uploads, and hence the time for everyone to complete their uploads, as
$$T = \frac{1}{C_i}\left[\alpha^*_i F_i (N-2) + F_i + \sum_{k \ne i}\alpha^*_i F_k (N-2)\right] = \frac{(N-1)F_i}{C} - \frac{F_i^2}{C_i F} + \frac{F_i}{C_i} + \frac{(N-1)(F - F_i)}{C} - \frac{F_i(F - F_i)}{C_i F} = (N-1)F/C. \qquad (28)$$
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, then this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
$$C'_1 = C_1, \qquad (29)$$
$$C'_i \le C_i \quad \text{for } i \ne 1, \qquad (30)$$
$$(N-1)F/C' = F_1/C'_1 = F_1/C_1, \quad \text{and} \qquad (31)$$
$$F_i/C'_i \le F_1/C_1, \qquad (32)$$
where $C' = \sum_i C'_i$.
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
$$C'_i = (N-1)\frac{C_1}{F_1}\gamma_i F_i, \qquad (33)$$
with constants γ i ≥ 0 such that
$$\sum_i \gamma_i F_i = F. \qquad (34)$$
Then (N − 1)F/C ′ = F 1 /C 1 , that is (31) holds. Moreover, choosing
$$\gamma_i \le \frac{1}{N-1}\,\frac{C_i}{F_i}\,\frac{F_1}{C_1} \qquad (35)$$
ensures C ′ i ≤ C i , i.e. property (30) and choosing
$$\gamma_i \ge \frac{1}{N-1} \qquad (36)$$
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all $\gamma_i$ equal to the lower bound (36) gives $\sum_i \gamma_i F_i = F/(N-1)$, that is too small to satisfy (34). Putting all equal to the upper bound (35) gives $\sum_i \gamma_i F_i = F_1 C/((N-1)C_1)$, that is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
$$\gamma_i = \frac{1}{N-1}\left[\delta\,\frac{C_i}{F_i}\,\frac{F_1}{C_1} + (1-\delta)\right] \qquad (37)$$
such that
$$\delta\,\frac{C}{N-1}\,\frac{F_1}{C_1} + (1-\delta)\frac{F}{N-1} = F, \qquad (38)$$
that is,
$$\delta = \frac{(N-2)F C_1}{F_1 C - F C_1}. \qquad (39)$$
Substituting back in we obtain
$$\gamma_i = \frac{1}{N-1}\cdot\frac{(N-2)F F_1 C_i + F_i F_1 C - (N-1)F F_i C_1}{(F_1 C - F C_1)F_i} \qquad (40)$$
and thus
$$C'_i = \frac{C_1}{F_1}\cdot\frac{(N-2)F F_1 C_i + F_i F_1 C - (N-1)F F_i C_1}{F_1 C - F C_1}. \qquad (41)$$
By construction, these C ′ i satisfy properties (29)-(32) and hence, by the results in Case 1, T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem T = F 1 /C 1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
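A minimal sketch (ours; the file sizes and capacities below are arbitrary illustrative numbers) that evaluates (21) and, in Case 1, the symmetric two-hop split $\alpha^*_i$ of (27):

```python
def fluid_makespan_general(F, C):
    """Minimal fluid-limit makespan (21) for files F[i] and capacities C[i]."""
    N = len(F)
    return max(max(F[i] / C[i] for i in range(N)), (N - 1) * sum(F) / sum(C))

def two_hop_split(F, C):
    """alpha*_i from (27); only valid in Case 1 (aggregate bound dominates)
    and for N >= 3.  The fractions are nonnegative and sum to one."""
    N, Fs, Cs = len(F), sum(F), sum(C)
    alpha = [((N - 1) * C[i] / ((N - 2) * Cs)) - (F[i] / ((N - 2) * Fs))
             for i in range(N)]
    assert abs(sum(alpha) - 1) < 1e-9 and all(a >= -1e-12 for a in alpha)
    return alpha

F = [1.0, 2.0, 1.0, 0.5]          # hypothetical file sizes (MB)
C = [1.0, 1.5, 0.5, 1.0]          # hypothetical upload capacities (MBps)
print("T* =", fluid_makespan_general(F, C))
print("alpha* =", two_hop_split(F, C))
```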
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions
Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M, and the address of the server. However, a peer might not know N, the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
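The round dynamics of the two scenarios can be made precise with a short Python sketch (ours, not the authors' simulator); note how NoList requests can additionally fail by hitting a peer that does not yet hold the file:

```python
import random

def one_round(has_file, scenario, rng):
    """One round for M = 1; has_file[0] is the server, has_file[1..N] the peers."""
    N = len(has_file) - 1
    holders = [i for i in range(N + 1) if has_file[i]]
    requests = {}                                   # target -> requesting peers
    for i in range(1, N + 1):
        if has_file[i]:
            continue
        if scenario == "List":                      # holders are known
            target = rng.choice(holders)
        else:                                       # NoList: any of the N others
            target = rng.randrange(N + 1)
            while target == i:
                target = rng.randrange(N + 1)
        requests.setdefault(target, []).append(i)
    new = list(has_file)
    for target, reqs in requests.items():
        if has_file[target]:                        # NoList requests may miss
            new[rng.choice(reqs)] = True            # the target uploads to one peer
    return new

def makespan(N, scenario, rng):
    state, rounds = [True] + [False] * N, 0
    while not all(state):
        state, rounds = one_round(state, scenario, rng), rounds + 1
    return rounds

rng = random.Random(0)
for scenario in ("List", "NoList"):
    runs = [makespan(512, scenario, rng) for _ in range(20)]
    print(scenario, sum(runs) / len(runs))
```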
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n 1 peers (including the server) have the file and n 0 peers do not, with n 0 +n 1 = N +1. (The base case is n 1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that $n_1 \le n_0$. Let i be the server or a peer who has the file and let $I_i$ be an indicator random variable that is 0 or 1 as i does or does not upload it. Let $Y = \sum_i I_i$, where the sum is taken over all $n_1$ peers who have the file. Thus $n_1 - Y$ is the number of uploads that take place. Then
$$EI_i = \left(1 - \frac{1}{N}\right)^{n_0} \le \left(1 - \frac{1}{2n_0}\right)^{n_0} \le \frac{1}{\sqrt{e}}. \qquad (42)$$
Now since $E(\sum_i I_i) = \sum_i E I_i$, we have $EY \le n_1/\sqrt{e}$. By the Markov inequality — for a nonnegative random variable $Y$ and any $k$ (not necessarily an integer), $P(Y \ge k) \le EY/k$ — taking $k = (2/3)n_1$ gives
$$P\bigl(n_1 - Y \equiv \text{number of uploads} \le \tfrac{1}{3}n_1\bigr) = P\bigl(Y \ge \tfrac{2}{3}n_1\bigr) \le \frac{n_1/\sqrt{e}}{(2/3)n_1} = 3/(2\sqrt{e}) < 1. \qquad (43)$$
Thus the number of steps required for the number of peers who have the file to increase from $n_1$ to at least $n_1 + (1/3)n_1 = (4/3)n_1$ is bounded by a geometric random variable with mean $\mu = 1/(1 - 3/(2\sqrt{e}))$. This implies that we will reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n 1 > n 0 . Let j be a peer who does not have the file and let J j be an indicator random variable that is 0 or 1 as peer j does or does not succeed in downloading it. Let Z = j J j , where the sum is taken over all n 0 peers who do not have the file. Suppose X is the number of the other n 0 − 1 peers that try to download from the same place as does peer j. Then
P(J_j = 0) = E[(n_1/N) · 1/(1 + X)] ≥ E[(n_1/N)(1 − X)] = (n_1/N)(1 − (n_0 − 1)/N) = (n_1/N)(1 − (N − n_1)/N) = n_1²/N² ≥ 1/4 . (44)
Hence EZ ≤ (3/4)n 0 and so, again using the Markov inequality,
P(n_0 − Z ≡ number of downloads ≤ (1/8)n_0) = P(Z ≥ (7/8)n_0) ≤ ((3/4)n_0) / ((7/8)n_0) = 6/7 . (45)
It follows that the number of peers who do not yet have the file decreases from n 0 to no more than (7/8)n 0 in an expected number of steps no more than µ ′ = 1/(1 − 6 7 ) = 7. Thus the number of steps needed for the number of peers without the file to decrease from n 0 to 0 is O(log n 0 ) = O(log N ). In fact, this is a weak upper bound. By more complicated arguments we can show that if n 0 = aN , where a ≤ 1/2, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N ). For a > 1/2 the expected time remains Θ(log N ).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, N = 2, 4, . . . , 2^25. We found that the achieved expected makespan appears to grow as a + b × log_2 N. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
y_{ij} = α + β x_i + ε_{ij} , (46)
where y_{ij} is the makespan for x_i = log_2 2^i = i, obtained in run j, j = 1, . . . , 1000. Indeed, the model fits the data very well in both scenarios. We obtain the following results that enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
1.1392 + 1.1021 × log_2 N . (47)
For NoList, there is more variation in the data than for List, but, again, the linear regression gives a good fit, with a Multiple R-squared of 0.9864 and significant p- and t-values. The makespan increases as 1.7561 + 1.5755 × log_2 N.
As expected, the additional information for List leads to a significantly smaller makespan when compared to NoList; in particular, the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log_2 N⌋ of Section 3: it is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases and very well when state information is available, suggesting that our bounds are useful in practice.
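The regression just described can be reproduced along the following lines; this sketch assumes the makespan() simulator from the earlier snippet is in scope, and the use of numpy.polyfit for the least-squares fit of (46) is my own choice.

import numpy as np
# assumes makespan(N, scenario) from the earlier sketch is defined or importable

def fit_loglinear(scenario="List", runs=100, max_exp=12):
    """Fit mean makespan = alpha + beta * log2(N) over N = 2, 4, ..., 2**max_exp."""
    Ns = [2 ** e for e in range(1, max_exp + 1)]
    means = [np.mean([makespan(N, scenario) for _ in range(runs)]) for N in Ns]
    beta, alpha = np.polyfit(np.log2(Ns), means, 1)   # slope first, then intercept
    return alpha, beta

if __name__ == "__main__":
    a, b = fit_loglinear("List")
    print(f"List:   {a:.3f} + {b:.3f} * log2(N)   (centralized optimum: 1 + log2(N) rounds)")
    a, b = fit_loglinear("NoList")
    print(f"NoList: {a:.3f} + {b:.3f} * log2(N)")

With fewer runs and a smaller range of N than in the paper, the fitted coefficients will only roughly match the values reported above.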
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov chain on the state space {0, 1, 2, . . . , N}, where state i corresponds to i of the N peers having the file. We can calculate the transition probabilities p_{ij}. In the NoList case, for example, following the Occupancy Distribution (e.g., [18]), we obtain
p_{i,i+m} = Σ_{j=i−m}^{i} (−1)^{j−i+m} · i! / [(i − j)! (i − m)! (j − i + m)!] · ((N − 1 − j)/(N − 1))^{N−i} . (49)
Hence we can successively compute the expected hitting times k(i) of state N starting from state i via
k(i) = (1 + Σ_{j>i} k(j) p_{ij}) / (1 − p_{ii}) . (50)
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov Chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeatedly enhanced efficiency. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference to the simulated values is small without any apparent trend. It can also be checked by computing the standard deviation that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is for N = 128 for NoList where it is just outside by approximately 0.0016.
Thus, the computations prove our simulation results accurate. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
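As an illustration of this computation, the sketch below builds the transition matrix exactly for the simpler List case (via the classical occupancy distribution, rather than the NoList expression (49)) and then applies the hitting-time recursion (50); exact rational arithmetic with Python's fractions module is my own choice for sidestepping the precision issues mentioned above.

from fractions import Fraction
from math import comb

def occupancy_pmf(balls, boxes):
    """P(exactly m distinct boxes are hit) when 'balls' balls fall uniformly into 'boxes' boxes."""
    pmf = {}
    for m in range(1, min(balls, boxes) + 1):
        # surjections of the balls onto a fixed set of m boxes, by inclusion-exclusion
        surj = sum((-1) ** j * comb(m, j) * (m - j) ** balls for j in range(m + 1))
        pmf[m] = Fraction(comb(boxes, m) * surj, boxes ** balls)
    return pmf

def expected_makespan_list(N):
    """Exact expected number of rounds under List, via the hitting-time recursion (50)."""
    k = [Fraction(0)] * (N + 1)                 # k[i] = expected remaining rounds from state i
    for i in range(N - 1, -1, -1):
        holders, downloaders = i + 1, N - i     # server plus i peers hold the file
        pmf = occupancy_pmf(downloaders, holders)   # m = number of peers served this round
        # under List every round serves at least one peer, so p_ii = 0 in (50)
        k[i] = 1 + sum(p * k[i + m] for m, p in pmf.items())
    return k[0]

if __name__ == "__main__":
    for N in (2, 4, 8, 16, 32, 64, 128):
        print(N, float(expected_makespan_list(N)))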
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
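Before turning to the simulation results, here is a minimal sketch of one round of this M-part List strategy; representing each node's collection as a Python set, with node 0 as the server, is an implementation choice of mine.

import random

def one_round_parts(N, M, holdings):
    """One round of the randomized List strategy with M file parts.
    holdings[i] is the set of parts held by node i (node 0 is the server)."""
    requests = {}                                        # uploader -> list of requesting peers
    for i in range(1, N + 1):
        needed = set(range(M)) - holdings[i]
        if not needed:
            continue
        # nodes holding at least one part that peer i still needs
        useful = [j for j in range(N + 1) if j != i and holdings[j] & needed]
        if useful:
            requests.setdefault(random.choice(useful), []).append(i)
    new_holdings = {i: set(parts) for i, parts in holdings.items()}
    for j, reqs in requests.items():
        i = random.choice(reqs)                          # uploader picks one requester
        part = random.choice(sorted(holdings[j] - holdings[i]))   # a part i still needs
        new_holdings[i].add(part)
    return new_holdings

def makespan_parts(N, M):
    holdings = {0: set(range(M))}                        # the server starts with all parts
    holdings.update({i: set() for i in range(1, N + 1)})
    rounds = 0
    while any(len(holdings[i]) < M for i in range(1, N + 1)):
        holdings = one_round_parts(N, M, holdings)
        rounds += 1
    return rounds / M    # each round lasts 1/M time units (parts of size 1/M, unit capacities)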
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each N = 2, 4, . . . , 2^15. For each value of M = 1, . . . , 5, 8, 10, 15, 20, 50 we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but to compare the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression 0.7856 + 1.1520 × log_2 N for M = 1 does not quite agree with 1.1392 + 1.1021 × log_2 N found in (47). It can be checked, by repeating the analysis there for N = 2, 4, . . . , 2^15, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to 2^25.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M )⌊log 2 N ⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller in Section 3, confirming our performance bounds are useful in practice. This is confirmed also by initial results of current work on the performance evaluation of the Bullet' system [20].
The program code for simulations as well as the computations and the diagnostic plots used in this section are available on request and will be made available via the Internet 5 .
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on and a performance benchmark for P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound confirming their practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. It is current work to compare our bounds to the makespan obtained by Bullet' [20]. Initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used practically everywhere except for the Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP) for real-time applications. For further details see [35]. Still, with an overhead cost it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proofs of Lemma 1 and Lemma 2 we have used the fair sharing and continuity assumptions. It would be of interest to investigate whether one of them or both can be relaxed. Figure 3: the decentralized List scenario (solid) and the idealized centralized scenario (dashed).
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort. Or, to users who might want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when up-and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate C i and simultaneously download at rate D i .
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time t_{ij}. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if C_j is at least N + 1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different so that greedy algorithms, induction on nodes and Dynamic Programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's (N + 1)^{N−1} labelled trees [6], together with the obvious O(N) algorithm for the optimal scheduling given a tree, provide an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0606110
|
2949837610
|
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
|
The simultaneous send receive model @cite_3 supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be @math and we will return to this result in Section .
|
{
"abstract": [
"Broadcasting refers to the process of dissemination of a set of messages originating from one node to all other nodes in a communication network. We assume that, at any given time, a node can transmit a message along at most one incident link and simultaneously receive a message along at most one incident link. We first present an algorithm for determining the amount of time needed to broadcast k messages in an arbitrary tree. Second, we show that, for every n, there exists a graph with n nodes whose k-message broadcast time matches the trivial lower bound ?log n ? + k - 1 by designing a broadcast scheme for complete graphs. We call those graphs minimal broadcast graphs. Finally, we construct an n node minimal broadcast graph with fewer than ( ?log n ? + 1)2 ?log n ?-1 edges."
],
"cite_N": [
"@cite_3"
],
"mid": [
"2138489200"
]
}
|
Optimal Scheduling of Peer-to-Peer File Dissemination
|
Suppose that M messages of equal length are initially known only at a single source node in a network. The so-called broadcasting problem is about disseminating these M messages to a population of N other nodes in the least possible time, subject to capacity constraints along the links of the network. The assumption is that once a node has received one of the messages it can participate subsequently in sending that message to its neighbouring nodes.
Scheduling background and related work
The broadcasting problem has been considered for different network topologies. Comprehensive surveys can be found in [15] and [16]. On a complete graph, the problem was first solved in [8] and [10]. Their communication model was a unidirectional telephone model in which each node can either send or receive one message during each round, but cannot do both. In this model, the minimal number of rounds required is 2M − 1 + ⌊log_2(N + 1)⌋ for even N, and 2M + ⌊log_2(N + 1)⌋ − ⌊(M − 1 + 2^{⌊log_2(N+1)⌋}) / ((N + 1)/2)⌋ for odd N.
In [2], the authors considered the bidirectional telephone model in which nodes can both send one message and receive one message simultaneously, but they must be matched pairwise. That is, in each given round, a node can only receive a message from the same node to which it sends a message. They provide an optimal algorithm for odd N , which takes M + ⌊log 2 N ⌋ rounds. For even N their algorithm is optimal up to an additive term of 3, taking M + ⌊log 2 N ⌋ + M/N + 2 rounds.
The simultaneous send/receive model [21] supposes that during each round every user may receive one message and send one message. Unlike the telephone model, it is not required that a user can send a message only to the same user from which it receives a message. The optimal number of rounds turns out to be M + ⌊log 2 N ⌋ and we will return to this result in Section 3.
In this paper, we are working with our new uplink-sharing model designed for P2P file dissemination (cf. Section 2). It is closely related to the simultaneous send/receive model, but is set in continuous time. Moreover, we permit users to have different upload capacities which are the constraints on the data that can be sent per unit of time. This contrasts with previous work in which the aim was to model interactions of processors and so it was natural to assume that all nodes have equal capacities. Our work also differs from previous work in that we are motivated by the evaluation of necessarily decentralized P2P file dissemination algorithms, i.e., ones that can be implemented by the users themselves, rather than by a centralized controller. Our interest in the centralized case is as a basis for comparison and to give a lower bound. We show that in the case of equal upload capacities the optimal number of rounds is M + ⌊log 2 N ⌋ as for the simultaneous send/receive model. Moreover, we provide two complementary solutions for the case of general upload capacities and investigate the performance of a decentralized strategy.
Outlook
The rest of this paper is organized as follows. In Section 2 we introduce the uplink-sharing model and relate it to the simultaneous send/receive model. Our optimal algorithm for the simultaneous send/receive broadcasting problem is presented in Section 3. We show that it also solves the problem for the uplink-sharing model with equal capacities. In Section 4 we show that the general uplink-sharing model can be solved via a finite number of mixed integer linear programming (MILP) problems. This approach is suitable for a small number of file parts M . We provide additional insight through the solution of some special cases. We then consider the limiting case that the file can be divided into infinitely many parts and provide the centralized fluid solution. We extend these results to the even more general situation where different users might have different (disjoint) files of different sizes to disseminate (Section 5). This approach is suitable for typical and for large numbers of file parts M . Finally, we turn to decentralized algorithms. In Section 6 we evaluate the performance of a very simple and natural randomized strategy, theoretically, by simulation and by direct computation. We provide results in two different information scenarios with equal capacities showing that even this naive algorithm disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to the performance bounds of the previous sections so that they are useful in practice. We conclude and present ideas for further research in Section 7.
The Uplink-Sharing Model
We now introduce an abstract model for the file dissemination scenario described in the previous section, focusing on the important features of P2P file dissemination.
Underlying the file dissemination system is the Internet. Thus, each user can connect to every other user and the network topology is a complete graph. The server S has upload capacity C S and the N peers have upload capacities C 1 , . . . , C N , measured in megabytes per second (MBps). Once a user has received a file part it can participate subsequently in uploading it to its peers (source availability). We suppose that, in principle, any number of users can simultaneously connect to the server or another peer, the available upload capacity being shared equally amongst the open connections (fair sharing). Taking the file size to be 1 MB, this means that if n users try simultaneously to download a part of the file (of size 1/M ) from the server then it takes n/M C S seconds for these downloads to complete. Observe that the rate at which an upload takes place can both increase and decrease during the time of that upload (varying according to the number of other uploads with which it shares the upload capacity), but we assume that uploads are not interrupted until complete, that is the rate is always positive (continuity). In fact, Lemma 1 below shows that the makespan is not increased if we restrict the server and all peers to carry out only a single upload at a time. We permit a user to download more than one file part simultaneously, but these must be from different sources; only one file part may be transferred from one user to another at the same time. We ignore more complicated interactions and suppose that the upload capacities, C S , C 1 , . . . , C N , impose the only constraints on the rates at which file parts can be transferred between peers which is a reasonable assumption if the underlying network is not overloaded. Finally, we assume that rates of uploads and downloads do not constrain one another.
Note that we have assumed the download rates to be unconstrained and this might be considered unrealistic. However, we shall show a posteriori in Section 3 that if the upload capacities are equal then additional download capacity constraints do not increase the minimum possible makespan, as long as these download capacities are at least as big. Indeed, this is usually the case in practice.
Typically, N is the order of several thousands and the file size is up to a few gigabytes (GB), so that there are several thousand file parts of size 1/4 MB each.
Finding the minimal makespan looks potentially very hard as upload times are interdependent and might start at arbitrary points in time. However, the following two observations help simplify it dramatically. As we see in the next section, they also relate the uplink-sharing model to the simultaneous send/receive broadcasting model.
Lemma 1
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which the server and each of the peers only carry out a single upload at a time.
Proof. Identify the server as peer 0 and, for each i = 0, 1, . . . , N consider the schedule of peer i. We shall use the term job to mean the uploading of a particular file part to a particular peer. Consider the set of jobs, say J, whose processing involves some sharing of the upload capacity C i . Pick any job, say j, in J which is last in J to finish and call the time at which it finishes t f . Now fair sharing and continuity imply that job j is amongst the last to start amongst all the jobs finishing before or at time t f . To see this, note that if some job k were to start later than j, then (by fair sharing and continuity) k must receive less processing than job j by time t f and so cannot have finished by time t f . Let t s denote the starting time of job j.
We now modify the schedule between time t s and t f as follows. Let K be the set of jobs with which job j's processing has involved some sharing of the upload capacity. Let us re-schedule job j so that it is processed on its own between times t f − 1/C i M and t f . This consumes some amount of upload capacity that had been devoted to jobs in K between t f − 1/C i M and t f . However, it releases an exactly equal amount of upload capacity between times t s and t f − 1/C i M which had been used by job j. This can now be allocated (using fair sharing) to processing jobs in K.
The result is that j can be removed from the set J. All jobs finish no later than they did under the original schedule. Moreover, job j starts later than it did under the original schedule and the scheduling before time t s and after time t f is not affected. Thus, all jobs start no earlier than they did under the original schedule. This ensures that the source availability constraints are satisfied and that we can consider the upload schedules independently. We repeatedly apply this argument until set J is empty.
Using Lemma 1, a similar argument shows the following result.
Lemma 2
In the uplink-sharing model the minimal makespan is not increased by restricting attention to schedules in which uploads start only at times that other uploads finish or at time 0.
Proof. By the previous Lemma it suffices to consider schedules in which the server and each of the peers only carry out a single upload at a time. Consider the joint schedule of all peers i = 0, 1, . . . , N and let J be the set of jobs that start at a time other than 0 at which no other upload finishes. Pick a job, say j, that is amongst the first in J to start, say at time t s . Consider the greatest time t f such that t f < t s and t f is either 0 or the time that some other upload finishes and modify the schedule so that job j already starts at time t f .
The source availability constraints are still satisfied and all uploads finish no later than they did under the original schedule. Job j can be removed from the set J and the number of jobs in J that start at time t s is decreased by 1, although there might now be more (but at most N in total) jobs in J that start at the time that job j finished in the original schedule.
But this time is later than t_s. Thus, we repeatedly apply this argument until the number of jobs in J that start at time t_s becomes 0 and then move along to jobs in J that are now amongst the first in J to start at time t′_s > t_s. Note that once a job has been removed from J, it will never be included again. Thus we continue until the set J is empty.
Centralized Solution for Equal Capacities
In this section, we give the optimal centralized solution of the uplink-sharing model of the previous section with equal upload capacities. We first consider the simultaneous send/receive broadcasting model in which the server and all users have upload capacity of 1. The following theorem provides a formula for the minimal makespan and a centralized algorithm that achieves it is contained in the proof.
This agrees with a result of Bar-Noy, Kipnis and Schieber [2], who obtained it as a byproduct of their result on the bidirectional telephone model. However, they required pairwise matchings in order to apply the results from the telephone model. So, for the simultaneous send/receive model, too, they use perfect matching in each round for odd N , and perfect matching on N − 2 nodes for even N . As a result, their algorithm differs for odd and even N and it is substantially more complicated, to describe, implement and prove to be correct, than the one we present within the proof of Theorem 1. Theorem 1 has been obtained also by Kwon and Chwa [21], via an algorithm for broadcasting in hypercubes. By contrast, our explicitly constructive proof makes the structure of the algorithm very easy to see. Moreover, it makes the proof of Theorem 3, that is, the result for the uplink-sharing model, a trivial consequence (using Lemmata 1 and 2).
Essentially, the log_2 N scaling is due to the P2P approach. This compares favourably to the linear scaling in N that we would obtain for a fixed set of servers. The factor of 1/M is due to splitting the file into parts.
Theorem 1 In the simultaneous send/receive model with all upload capacities equal to 1, the minimal makespan is

T* = 1 + ⌊log_2 N⌋/M . (1)
Proof. Suppose that N = 2^n − 1 + x, for x = 1, . . . , 2^n. So n = ⌊log_2 N⌋. The fact that M + n is a lower bound on the number of rounds is straightforwardly seen as follows. There are M different file parts and the server can only upload one file part (or one linear combination of file parts) in each round. Thus, it takes at least M rounds until the server has made sufficiently many uploads of file parts (or linear combinations of file parts) that the whole file can be recovered. The last of these M uploads by the server contains information that is essential to recovering the file, but this information is now known to only the server and one peer. It must take at least n further rounds to disseminate this information to the other N − 1 peers. Now we show how the bound can be achieved. The result is trivial for M = 1. It is instructive to consider the case M = 2 explicitly. If n = 0 then N = 1 and the result is trivial. If n = 1 then N is 2 or 3. Suppose N = 3. In the following diagram each line corresponds to a round; each column to a peer. The entries denote the file part that the peer downloads that round. The bold entries indicate downloads from the server; un-bold entries indicate downloads from a peer who has the corresponding part.
Round 1:  1
Round 2:  2  1
Round 3:  2  1  2
Thus, dissemination of the two file parts to the 3 users can be completed in 3 rounds. The case N = 2 is even easier.
If n ≥ 2, then in rounds 2 to n each user uploads his part to a peer who has no file part and the server uploads part 2 to a peer who has no file part. We reach a point, shown below, at which a set of 2^{n−1} peers have file part 1, a set of 2^{n−1} − 1 peers have file part 2, and a set of x peers have no file part (those denoted by ∗ · · · ∗). Let us call these three sets A_1, A_2 and A_0, respectively.
[Diagram: the first n rounds for M = 2. After round n, 2^{n−1} peers hold part 1, 2^{n−1} − 1 peers hold part 2, and x peers (shown as ∗ · · · ∗) hold no part.]
In round n + 1 we let peers in A 1 upload part 1 to 2 n−1 − ⌊x/2⌋ peers in A 2 and to ⌊x/2⌋ peers in A 0 (If x = 1, to 2 n−1 − 1 peers in A 2 and to 1 peer in A 0 ). Peers in A 2 upload part 2 to 2 n−1 − ⌈x/2⌉ peers in A 1 and to another ⌈x/2⌉ − 1 peers in A 0 . The server uploads part 2 to a member of A 0 (If x = 1, to a member of A 1 ). Thus, at the end of this round 2 n − x peers have both file parts, x peers have only file part 1, and x − 1 peers have only file part 2. One more round (round n + 2) is clearly sufficient to complete the dissemination. Now consider M ≥ 3. The server uploads part 1 to one peer in round 1. In rounds j = 2, . . . , min{n, M − 1}, each peer who has a file part uploads his part to another peer who has no file part and the server uploads part j to a peer who has no file part. If M ≤ n, then in rounds M to n each peer uploads his part to a peer who has no file part and the server uploads part M to a peer who has no file part. As above, we illustrate this with a diagram. Here we show the first n rounds in the case M ≤ n.
[Diagram: the first n rounds in the case M ≤ n. After round n, the number of peers holding each part j = 1, . . . , M is given in the second column of Table 1, and x peers (shown as ∗ · · · ∗) hold no part.]
When round n ends, 2^n − 1 peers have one file part and x peers have no file part. The number of peers having file part i is given in the second column of Table 1. In this table any entry which evaluates to less than 1 is to be read as 0 (so, for example, the bottom two entries in column 2 and the bottom entry in column 3 are 0 for n = M − 2).

Table 1: Numbers of the file parts at the ends of rounds n, n + 1, . . . , n + M − 1.
Part      n               n + 1           n + 2           n + 3           · · ·   n + M − 1
1         2^{n−1}         2^n             N               N               · · ·   N
2         2^{n−2}         2^{n−1}         2^n             N               · · ·   N
3         2^{n−3}         2^{n−2}         2^{n−1}         2^n             · · ·   N
4         2^{n−4}         2^{n−3}         2^{n−2}         2^{n−1}         · · ·   N
. . .
M − 2     2^{n−M+2}       2^{n−M+3}       2^{n−M+4}       2^{n−M+5}       · · ·   N
M − 1     2^{n−M+1}       2^{n−M+2}       2^{n−M+3}       2^{n−M+4}       · · ·   2^n
M         2^{n−M+1} − 1   2^{n−M+2} − 1   2^{n−M+3} − 1   2^{n−M+4} − 1   · · ·   2^n − 1

Table 2: Partition of the peers at the end of round n + 1.
Set      Peers in the set have                     Number of peers in set
B_12     parts 1 and 2                             2^{n−1} − ⌊x/2⌋
B_1p     part 1 and a part other than 1 or 2       2^{n−1} − ⌈x/2⌉
B_1      just part 1                               x
B_2      just part 2                               ⌊x/2⌋
B_p      just a part other than 1 or 2             ⌈x/2⌉ − 1

Now in round n + 1, by downloading from every peer who has a file part, and downloading part min{n + 1, M} from the server, we can obtain the numbers shown in the third column. Moreover, we can easily arrange so that peers can be divided into the sets B_12, B_1p, B_1, B_2 and B_p as shown in Table 2. In round n + 2, x − 1 of the peers in B_1 upload part 1 to peers in B_2 and B_p. Peers in B_12 and B_2 each upload part 2 to the peers in B_1p and to ⌈x/2⌉ of the peers in B_1. The server and the peers in B_1p and B_p each upload a part other than 1 or 2 to the peers in B_12 and to the other ⌊x/2⌋ peers in B_1. The server uploads part min{n + 2, M} and so we obtain the numbers in the fourth column of Table 1. Now all peers have part 1 and so it can be disregarded subsequently. Moreover, we can make the downloads from the server, B_1p and B_p so that (disregarding part 1) the number of peers who ultimately have only part 3 is ⌊x/2⌋. This is possible because the size of B_p is no more than ⌊x/2⌋; so if j peers in B_p have part 3 then we can upload part 3 to exactly ⌊x/2⌋ − j peers in B_1. Thus, a similar partitioning into sets as in Table 2 will hold as we start step n + 3 (when parts 2 and 3 take over the roles of parts 1 and 2 respectively). Note that the optimal strategy above follows two principles. As many different peers as possible obtain file parts early on so that they can start uploading themselves, and the maximal possible upload capacity is used. Moreover, there is a certain balance in the upload of different file parts so that no part gets circulated too late.
It is interesting that not all the available upload capacity is used. Suppose M ≥ 2. Observe that in round k, for each k = n + 2, . . . , n + M − 1, only x − 1 of the x peers (in set B 1 ) who have only file part k − n − 1 make an upload. This happens M − 2 times. Also, in round n + M there are only 2x − 1 uploads, whereas N + 1 are possible. Overall, we use N + M − 2x less uploads than we might. It can be checked that this number is the same for M = 1.
Suppose we were to follow a schedule that uses only x uploads during round n + 1, when the last peer gets its first file part. We would be using 2 n − x less uploads than we might in this round. Since 2 n − x ≤ N + M − 2x, we see that the schedule used in the proof above wastes at least as many uploads. So the mathematically interesting question arises as to whether or not it is necessary to use more than x uploads in round n + 1. In fact,
(N + M − 2x) − (2^n − x) = M − 1,
so, in terms of the total number of uploads, such a scheduling could still afford not to use one upload during each of the last M − 1 rounds. The question is whether or not each file part can be made available sufficiently often.
The following example shows that if we are not to use more than x uploads in round n + 1 we will have to do something quite subtle. We cannot simply pick any x out of the 2 n uploads possible and still hope that an optimal schedule will be shiftable: by which we mean that the number of copies of part j at the end of round k will be the same as the number of copies of part j − 1 at the end of round k − 1. It is the fact that the optimal schedule used in Theorem 1 is shiftable that makes its optimality so easy to see.
Example 1 Suppose M = 4 and N = 13 = 2^3 + 6 − 1, so M + ⌊log_2 N⌋ = 7.
If we follow the same schedule as in Theorem 1, we reach after round 3,
[Diagram: after round 3, the seven peers hold one part each (parts 1, 2, 1, 3, 1, 2, 1) and the remaining six peers hold no part.]
Now if we only make x = 6 uploads during round 4, then there are eight ways to choose which six parts to upload and which two parts not to upload. One can check that in no case is it possible to arrange so that, once this is done and uploads are made for round 5, the resulting state has the same numbers of parts 2, 3 and 4, respectively, as the numbers of parts 1, 2 and 3 at the end of round 4. That is, there is no shiftable optimal schedule. In fact, if our six uploads had been four part 1s and two part 2s, then it would not even be possible to achieve (1).
In some cases, we can achieve (1), if we relax the demand that the schedule be shiftable. Indeed, we conjecture that this is always possible for at least one schedule that uses only x uploads during round n + 1. However, the fact that we cannot use essentially the same strategy in each round makes the general description of a non-shiftable optimal schedule very complicated. Our aim has been to find an optimal (shiftable) schedule that is easy to describe. We have shown that this is possible if we do use the spare capacity at round n + 1. For practical purposes this is desirable anyway, since even if it does not affect the makespan it is better if users obtain file parts earlier.
When x = 2^n our schedule can be realized using matchings between the 2^n peers holding the part that is to be completed next and the server together with the 2^n − 1 peers holding the remaining parts. But otherwise it is not always possible to schedule only with matchings. This is why our solution would not work for the more constrained telephone-like model considered in [2] (where, in fact, the answer differs as N is even or odd).
The solution of the simultaneous send/receive broadcasting model problem now gives the solution of our original uplink-sharing model when all capacities are the same.
Theorem 2 Consider the uplink-sharing model with all upload capacities equal to 1. The minimal makespan is given by (1), for all M , N , the same as in the simultaneous send/receive model with all upload capacities equal to 1.
Proof. Note that under the assumptions of the theorem and with application of Lemmas 1 and 2, the optimal solution to the uplink-sharing model is the same as that of the simultaneous send/receive broadcast model when all upload capacities equal to 1.
In the proof of Theorem 1 we explicitly gave an optimal schedule which also satisfies the constraint that no peer downloads more than a single file part at a time. Thus, we also have the following result.

Theorem 3 In the uplink-sharing model with all upload capacities equal to 1 and download capacities at least as large, the minimal makespan is still given by (1).
Centralized Solution for General Capacities
We now consider the optimal centralized solution in the general case of the uplink-sharing model in which the upload capacities may be different. Essentially, we have an unusual type of precedence-constrained job scheduling problem. In Section 4.1 we formulate it as a mixed integer linear program (MILP). The MILP can also be used to find approximate solutions of bounded size of sub-optimality. In practice, it is suitable for a small number of file parts M . We discuss its implementation in Section 4.2. Finally, we provide additional insight into the solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different (Sections 4.3 and 4.4).
MILP formulation
In order to give the MILP formulation, we discretize time into intervals of a suitably chosen length τ. We next show how the solution to the general problem can be found by solving a number of linear programs. Let time interval t be the interval [tτ, tτ + τ), t = 0, 1, . . .. Identify the server as peer 0. Let x_{ijk}(t) be 1 or 0 as peer i downloads file part k from peer j during interval t or not. Let p_{ik}(t) denote the proportion of file part k that peer i has downloaded by time t. Our problem then is to find the minimal T such that the optimal value of the following MILP is MN. Since this T is certainly greater than 1/C_S and less than N/C_S, we can search for its value by a simple bisection search, solving this LP for various T:
maximize Σ_{i,k} p_{ik}(T) (2)
subject to the constraints given below. The source availability constraint (6) guarantees that a user has completely downloaded a part before he can upload it to his peers. The connection constraint (7) requires that each user only carries out a single upload at a time. This is justified by Lemma 1 which also saves us another essential constraint and variable to control the actual download rates: The single user downloading from peer j at time t will do so at rate C j as expressed in the link constraint (5). Continuity and stopping constraints (8,9) require that a download that has started will not be interrupted until completion and then be stopped. The exclusivity constraint (10) ensures that each user downloads a given file part only from one peer, not from several ones. Stopping and exclusivity constraints are not based on assumptions, but obvious constraints to exclude redundant uploads.
Regional constraints
x_{ijk}(t) ∈ {0, 1} for all i, j, k, t (3)
p_{ik}(t) ∈ [0, 1] for all i, k, t (4)
Link constraints between variables
p_{ik}(t) = Mτ Σ_{t′=0}^{t−τ} Σ_{j=0}^{N} x_{ijk}(t′) C_j for all i, k, t (5)
Essential constraints
x_{ijk}(t) − ξ_{jk}(t) ≤ 0 for all i, j, k, t (Source availability constraint) (6)
Σ_{i,k} x_{ijk}(t) ≤ 1 for all j, t (Connection constraint) (7)
x_{ijk}(t) − ξ_{ik}(t + 1) − x_{ijk}(t + 1) ≤ 0 for all i, j, k, t (Continuity constraint) (8)
x_{ijk}(t) + ξ_{ik}(t) ≤ 1 for all i, j, k, t (Stopping constraint) (9)
Σ_j x_{ijk}(t) ≤ 1 for all i, k, t (Exclusivity constraint) (10)
Initial conditions
p_{0k}(0) = 1 for all k (11)
p_{ik}(0) = 0 for all i, k (12)
Constraints (6), (8) and (9) have been linearized. Background can be found in [34]. For this, we used the auxiliary variable ξ_{ik}(t) = 1{p_{ik}(t) = 1}. This definition can be expressed through the following linear constraints.
Linearization constraints
ξ_{ik}(t) ∈ {0, 1} for all i, k, t (13)
p_{ik}(t) − ξ_{ik}(t) ≥ 0 and p_{ik}(t) − ξ_{ik}(t) < 1 for all i, k, t (14)
It can be checked that, together with (6), (8) and (9), this indeed gives

x_{ijk}(t) = 1 and p_{ik}(t + 1) < 1 ⟹ x_{ijk}(t + 1) = 1 for all i, j, k, t (15)
p_{ik}(t) = 1 ⟹ x_{ijk}(t) = 0 for all i, j, k, t (16)
p_{jk}(t) < 1 ⟹ x_{ijk}(t) = 0 for all i, j, k, t (17)
that is, continuity, stopping and source availability constraints respectively.
Implementation of the MILP
MILPs are well-understood and there exist efficient computational methods and program codes. The simplex method introduced by Dantzig in 1947, in particular, has been found to yield an efficient algorithm in practice as well as providing insight into the theory. Since then, the method has been specialized to take advantage of the particular structure of certain classes of problems and various interior point methods have been introduced. For integer programming there are branch-and-bound, cutting plane (branch-and-cut) and column generation (branch-and-price) methods as well as dynamic programming algorithms. Moreover, there are various approximation algorithms and heuristics. These methods have been implemented in many commercial optimization libraries such as OSL or CPLEX. For further reading on these issues the reader is referred to [28], [4] and [38]. Thus, implementing and solving the MILPs gives the minimal makespan solution. Although, as the numbers of variables and constraints in the LP grows exponentially in N and M , this approach is not practical for large N and M .
Even so, we can use the LP formulation to obtain a bounded approximation to the solution. If we look at the problem with a greater τ , then the job end and start times are not guaranteed to lie at integer multiples of τ . However, if we imagine that each job does take until the end of an τ -length interval to finish (rather than finishing before the end), then we will overestimate the time that each job takes by at most τ . Since there are N M jobs in total, we overestimate the total time taken by at most N M τ . Thus, the approximation gives us an upper bound on the time taken and is at most N M τ greater than the true optimum. So we obtain both upper and lower bounds on the minimal makespan. Even for this approximation, the computing required is formidable for large N and M .
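To make the formulation concrete, the following sketch builds the feasibility check for a given horizon with PuLP; the choice of PuLP and the bundled CBC solver, the caller-supplied grid length tau (which must be chosen so that uploads can complete on the grid), the small eps replacing the strict inequality in (14), and the final feasibility test are my own assumptions rather than part of the paper.

import pulp

def parts_all_done(C_S, C, M, T_steps, tau, eps=1e-6):
    """Check whether all N peers can hold all M parts after T_steps intervals of length tau.
    C_S is the server's upload capacity, C a list of the N peers' upload capacities."""
    N = len(C)
    cap = [C_S] + list(C)                         # node 0 is the server
    nodes = list(range(N + 1))
    peers = list(range(1, N + 1))
    parts = list(range(1, M + 1))
    slots = list(range(T_steps))                  # upload intervals
    ticks = list(range(T_steps + 1))              # time points t = 0, ..., T_steps

    prob = pulp.LpProblem("uplink_sharing", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (peers, nodes, parts, slots), cat="Binary")
    p = pulp.LpVariable.dicts("p", (nodes, parts, ticks), lowBound=0, upBound=1)
    xi = pulp.LpVariable.dicts("xi", (nodes, parts, ticks), cat="Binary")

    prob += pulp.lpSum(p[i][k][T_steps] for i in peers for k in parts)   # objective (2)

    for k in parts:
        for t in ticks:
            prob += p[0][k][t] == 1               # the server holds every part, cf. (11)

    for i in peers:
        for k in parts:
            for t in ticks:
                # link constraint (5); for t = 0 this also gives (12)
                prob += p[i][k][t] == M * tau * pulp.lpSum(
                    x[i][j][k][s] * cap[j] for j in nodes for s in range(t))

    for i in nodes:
        for k in parts:
            for t in ticks:
                prob += p[i][k][t] - xi[i][k][t] >= 0            # (14)
                prob += p[i][k][t] - xi[i][k][t] <= 1 - eps      # strict "<" replaced by eps

    for t in slots:
        for j in nodes:
            prob += pulp.lpSum(x[i][j][k][t] for i in peers for k in parts) <= 1   # (7)
        for i in peers:
            for k in parts:
                prob += pulp.lpSum(x[i][j][k][t] for j in nodes) <= 1              # (10)
                for j in nodes:
                    prob += x[i][j][k][t] - xi[j][k][t] <= 0                       # (6)
                    prob += x[i][j][k][t] + xi[i][k][t] <= 1                       # (9)
                    if t + 1 < T_steps:
                        prob += x[i][j][k][t] - xi[i][k][t + 1] - x[i][j][k][t + 1] <= 0   # (8)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective) >= N * M - 1e-4

# example: unit capacities, one part, two peers; two rounds of length 1 suffice
# print(parts_all_done(1.0, [1.0, 1.0], M=1, T_steps=2, tau=1.0))   # expected: True

A bisection search over T_steps, as described above, then recovers (an approximation to) the minimal makespan T_steps × tau.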
Insight for special cases with small N and M
We now provide some insight into the minimal makespan solution with different capacities by considering special choices for N and M when C 1 = C 2 = · · · = C N , but C S might be different. This addresses the case of the server having a significantly higher upload capacity than the end users.
Suppose N = 2 and M = 1, that is, the file has not been split. Only the server has the file initially, thus either (a) both peers download from the server, in which case the makespan is T = 2/C S , or (b) one peer downloads from the server and then the second peer downloads from the first; in this case T = 1/C S + 1/C 1 . Thus, the minimal makespan is T * = 1/C S + min{1/C S , 1/C 1 }.
If N = M = 2 we can again adopt a brute force approach. There are 16 possible cases, each specifying the download source that each peer uses for each part. These can be reduced to four by symmetry.
Case A: Everything is downloaded from the server. This is effectively the same as case (a) above. When C 1 is small compared to C S , this is the optimal strategy. Case B: One peer downloads everything from the server. The second peer downloads from the first. This is as case (b) above, but since the file is split in two, T is less. Case C: One peer downloads from the server. The other peer downloads one part of the file from the server and the other part from the first peer. Case D: Each peer downloads exactly one part from the server and the other part from the other peer. When C 1 is large compared to C S , this is the optimal strategy.
In each case, we can find the optimal scheduling and hence the minimal makespan. This is shown in Table 3.
Table 3: Makespan in each of the four cases for N = M = 2.
Case    Makespan
A       2/C_S
B       1/(2C_S) + 1/(2C_1) + max{1/(2C_S), 1/(2C_1)}
C       1/(2C_S) + max{1/C_S, 1/(2C_1)}
D       1/C_S + 1/(2C_1)

The optimal strategy arises from A, C or D as C_1/C_S lies in the intervals [0, 1/3], [1/3, 1] or [1, ∞) respectively. In [1, ∞), B and D yield the same. See Figure 1. Note that under the optimal schedule for case C one peer has to wait while the other starts downloading. This illustrates that greedy-type distributed algorithms may not be optimal and that restricting uploaders to a single upload is sometimes necessary for an optimal scheduling (cf. Section 2).
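The case analysis of Table 3 can be checked numerically with a few lines of Python; the function below simply evaluates the four expressions and reports a best case, and is only meant to illustrate the crossover points at C_1/C_S = 1/3 and 1 (in [1, ∞) cases B and D tie).

def best_case_n2_m2(C_S, C_1):
    """Minimal makespan for N = M = 2 over the four cases of Table 3."""
    makespans = {
        "A": 2 / C_S,
        "B": 1 / (2 * C_S) + 1 / (2 * C_1) + max(1 / (2 * C_S), 1 / (2 * C_1)),
        "C": 1 / (2 * C_S) + max(1 / C_S, 1 / (2 * C_1)),
        "D": 1 / C_S + 1 / (2 * C_1),
    }
    case = min(makespans, key=makespans.get)
    return case, makespans[case]

if __name__ == "__main__":
    for ratio in (0.2, 0.5, 2.0):      # C_1/C_S below 1/3, between 1/3 and 1, above 1
        print(ratio, best_case_n2_m2(1.0, ratio))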
Insight for special cases with large M
We still assume C 1 = C 2 = · · · = C N , but C S might be different. In the limiting case that the file can be divided into infinitely many parts, the problem can be easily solved for any number N of users. Let each user download a fraction 1− α directly from the server at rate C S /N and a fraction α/(N − 1) from each of the other N − 1 peers, at rate min{C S /N, C 1 /(N − 1)} from each. The makespan is minimized by choosing α such that the times for these two downloads are equal, if possible. Equating them, we find the minimal makespan as follows.
Case 1: C_1/(N − 1) ≤ C_S/N:
(1 − α)N/C_S = α/C_1 ⟹ α = N C_1/(C_S + N C_1) ⟹ T = N/(C_S + N C_1) . (18)
Case 2: C_1/(N − 1) ≥ C_S/N:
(1 − α)N/C_S = α N/((N − 1)C_S) ⟹ α = (N − 1)/N ⟹ T = 1/C_S . (19)
In total, there are N MB to upload and the total available upload capacity is C S + N C 1 MBps. Thus, a lower bound on the makespan is N/(C S + N C 1 ) seconds. Moreover, the server has to upload his file to at least one user. Hence another lower bound on the makespan is 1/C S . The former bound dominates in case 1 and we have shown that it can be achieved. The latter bound dominates in case 2 and we have shown that it can be achieved. As a result, the minimal makespan is
T* = max{1/C_S , N/(C_S + N C_1)} . (20)

Figure 2 shows the minimal makespan when the file is split into 1, 2 and infinitely many file parts when N = 2. It illustrates how the makespan decreases with M. In the next section, we extend the results in this limiting case to a much more general scenario.
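As a quick numerical check of (18)-(20), the helper below returns the optimal split α and the resulting makespan; it is a direct transcription of the two cases, with the example values being my own.

def fluid_single_file(N, C_S, C_1):
    """Fluid-limit makespan (20) and the fraction alpha downloaded via peers, cf. (18)-(19)."""
    if C_1 / (N - 1) <= C_S / N:                 # case 1: peer uplinks are the bottleneck
        alpha = N * C_1 / (C_S + N * C_1)
        T = N / (C_S + N * C_1)
    else:                                        # case 2: the server is the bottleneck
        alpha = (N - 1) / N
        T = 1 / C_S
    assert abs(T - max(1 / C_S, N / (C_S + N * C_1))) < 1e-12   # agrees with (20)
    return alpha, T

print(fluid_single_file(4, 2.0, 1.0))    # server twice as fast as the peers: T = 2/3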
Centralized Fluid Limit Solution
In this section, we generalize the results of Section 4.4 to allow for general capacities C i . Moreover, instead of limiting the number of sources to one designated server with a file to disseminate, we now allow every user i to have a file that is to be disseminated to all other users. We provide the centralized solution in the limiting case that the file can be divided into infinitely many parts.
Let F i ≥ 0 denote the size of the file that user i disseminates to all other users. Seeing that in this situation there is no longer one particular server and everything is symmetric, we change notation for the rest of this section so that there are N ≥ 2 users 1, 2, . . . , N .
Moreover, let F = Σ_{i=1}^{N} F_i and C = Σ_{i=1}^{N} C_i.
We will prove the following result.
Theorem 4 In the fluid limit, the minimal makespan is
T* = max{F_1/C_1 , F_2/C_2 , . . . , F_N/C_N , (N − 1)F/C} (21)
and this can be achieved with a two-hop strategy, i.e., one in which user i's file is uploaded to user j either directly from user i or via at most one intermediate user.
Proof. The result is obvious for N = 2. Then the minimal makespan is max{F 1 /C 1 , F 2 /C 2 } and this is exactly the value of T * in (21).
So we consider N ≥ 3. It is easy to see that each of the N + 1 terms within the braces on the right hand side of (21) are lower bounds on the makespan. Each user has to upload his file at least to one user, which takes time F i /C i . Moreover, the total volume of files to be uploaded is (N − 1)F and the total available capacity is C. Thus, the makespan is at least T * , and it remains to be shown that a makespan of T * can be achieved. There are two cases to consider.
Case 1: (N − 1)F/C ≥ max i F i /C i for all i.
In this case, T* = (N − 1)F/C. Let us consider the 2-hop strategy in which each user uploads a fraction α_{ii} of its file F_i directly to all (N − 1) peers, simultaneously and at equal rates. Moreover, he uploads a fraction α_{ij} to peer j who in turn then uploads it to the remaining (N − 2) peers, again simultaneously and at equal rates. Note that Σ_{j=1}^{N} α_{ij} = 1. Explicitly constructing a suitable set α_{ij}, we thus obtain the problem min T (22) subject to, for all i,

(1/C_i) [α_{ii} F_i (N − 1) + Σ_{k≠i} α_{ik} F_i + Σ_{k≠i} α_{ki} F_k (N − 2)] ≤ T . (23)
We minimize T by choosing the α_{ij} in such a way as to equate the N left hand sides of the constraints, if possible. Rewriting the expression in square brackets, equating the constraints for i and j and then summing over all j we obtain

C [α_{ii} F_i (N − 2) + F_i + Σ_{k≠i} α_{ki} F_k (N − 2)] = C_i [(N − 2) Σ_j α_{jj} F_j + F + (N − 2)(F − Σ_j α_{jj} F_j)] = (N − 1) C_i F . (24)
Thus,
α_{ii} F_i (N − 2) + F_i + Σ_{k≠i} α_{ki} F_k (N − 2) = (N − 1) (C_i/C) F . (25)
Note that there is a lot of freedom in the choice of the α, so let us specify that we require α_{ki} to be constant in k for k ≠ i, that is, α_{ki} = α*_i for k ≠ i. This means that if peer i has the capacity to take over a certain part of the dissemination from some peer, then it can and will also take over the same proportion from any other peer. Put another way, user i splits excess capacity equally between its peers. Thus,
α_{ii} F_i (N − 2) + F_i + α*_i (N − 2)(F − F_i) = (N − 1) (C_i/C) F (26)
Still, we have twice as many variables as constraints. Let us also specify that α * i = α ii for all i. Similarly as above, this says that the proportion of its own file F i that i uploads to all its peers (rather than just to one of them) is the same as the proportion of the files that it takes over from its peers. Then
α*_i = [(N − 1)(C_i/C)F − F_i] / [(N − 2)F] = (N − 1)C_i / ((N − 2)C) − F_i / ((N − 2)F) , (27)
where Σ_i α*_i = 1 and α*_i ≥ 0, because in Case 1 F_i/C_i ≤ (N − 1)F/C. With these α_{ij}, we obtain the time for i to complete its upload, and hence the time for everyone to complete their upload, as
T = (1/C_i) [α*_i F_i (N − 2) + F_i + Σ_{k≠i} α*_i F_k (N − 2)] = (N − 1)F_i/C − F_i²/(C_i F) + F_i/C_i + (N − 1)(F − F_i)/C − F_i(F − F_i)/(C_i F) = (N − 1)F/C . (28)
Note that there is no problem with precedence constraints. All uploads happen simultaneously stretched out from time 0 to T . User i uploads to j a fraction α ij of F i . Thus, he does so at constant rate α ij F i /T i = α ij F i /T . User j passes on the same amount of data to each of the other users in the same time, hence at the same rate α ij F i /T j = α ij F i /T .
Thus, we have shown that if the aggregate lower bound dominates the others, it can be achieved. It remains to be shown that if an individual lower bound dominates, than this can be achieved also.
Case 2: F i /C i > (N − 1)F/C for some i.
By contradiction it is easily seen that this cannot be the case for all i. Let us order the users in decreasing order of F i /C i , so that F 1 /C 1 is the largest of the F i /C i . We wish to show that all files can be disseminated within a time of F 1 /C 1 . To do this we construct new capacities C ′ i with the following properties:
C′_1 = C_1 , (29)
C′_i ≤ C_i for i ≠ 1 , (30)
(N − 1)F/C′ = F_1/C′_1 = F_1/C_1 , and (31)
F_i/C′_i ≤ F_1/C_1 . (32)
This new problem satisfies the condition of Case 1 and so the minimal makespan is T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem is T = F 1 /C 1 also, because the unprimed capacities are greater or equal to the primed capacities by property (30).
To explicitly construct capacities satisfying (29)-(32), let us define
C′_i = (N − 1) (C_1/F_1) γ_i F_i (33)
with constants γ i ≥ 0 such that
Σ_i γ_i F_i = F . (34)
Then (N − 1)F/C ′ = F 1 /C 1 , that is (31) holds. Moreover, choosing
γ_i ≤ (1/(N − 1)) (C_i/F_i) (F_1/C_1) (35)
ensures C ′ i ≤ C i , i.e. property (30) and choosing
γ_i ≥ 1/(N − 1) (36)
ensures F i /C ′ i ≤ F 1 /C 1 , that is property (32). Furthermore, the previous two conditions together ensure that γ 1 = 1/(N − 1) and thus C ′ 1 = C 1 , that is property (29). It remains to construct a set of parameters γ i that satisfies (34), (35) and (36).
Putting all γ i equal to the lower bound (36) gives i γ i F i = F/(N − 1), that is too small to satisfy (34). Putting all equal to the upper bound (35) gives i γ i F i = F 1 C/(N − 1)C 1 , that is too large to satisfy (34). So we pick a suitably weighted average instead. Namely,
γ_i = (1/(N − 1)) [δ (C_i/F_i)(F_1/C_1) + (1 − δ)] (37)
such that

δ (C/(N − 1)) (F_1/C_1) + (1 − δ) F/(N − 1) = F , (38)

that is,

δ = (N − 2)F C_1 / (F_1 C − F C_1) . (39)
Substituting back in we obtain
γ_i = (1/(N − 1)) · [(N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1] / [(F_1 C − F C_1) F_i] (40)
and thus
C′_i = (C_1/F_1) · [(N − 2)F F_1 C_i + F_i F_1 C − (N − 1)F F_i C_1] / (F_1 C − F C_1) (41)
By construction, these C ′ i satisfy properties (29)-(32) and hence, by the results in Case 1, T ′ = F 1 /C 1 . Hence the minimal makespan in the original problem T = F 1 /C 1 also.
It is worth noting that there is a lot of freedom in the choice of the α ij . We have chosen a symmetric approach, but other choices are possible.
In practice, the file will not be infinitely divisible. However, we often have M >> log(N ) and this appears to be sufficient for (21) to be a good approximation. Thus, the fluid limit approach of this section is suitable for typical and for large values of M .
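A small sketch of the bound (21) and of the symmetric two-hop weights α*_i of (27) is given below; the function names, the tolerance checks and the example data are mine, and the α computation assumes N ≥ 3 and that Case 1 of the proof applies.

def fluid_makespan(F, C):
    """Fluid-limit minimal makespan (21) for files F[i] and upload capacities C[i]."""
    N = len(F)
    total_F, total_C = sum(F), sum(C)
    return max(max(f / c for f, c in zip(F, C)), (N - 1) * total_F / total_C)

def two_hop_alphas(F, C):
    """The symmetric choice alpha*_i of (27), valid in Case 1 (no individual bound dominates)."""
    N, total_F, total_C = len(F), sum(F), sum(C)
    alphas = [(N - 1) * c / ((N - 2) * total_C) - f / ((N - 2) * total_F)
              for f, c in zip(F, C)]
    # sanity checks from the proof: the weights are nonnegative and sum to one
    assert all(a >= -1e-12 for a in alphas) and abs(sum(alphas) - 1) < 1e-9
    return alphas

if __name__ == "__main__":
    F, C = [1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 1.0, 2.0]
    print(fluid_makespan(F, C), two_hop_alphas(F, C))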
Decentralized Solution for Equal Capacities
In order to give a lower bound on the minimal makespan, we have been assuming a centralized controller does the scheduling. We now consider a naive randomized strategy and investigate the loss in performance that is due to the lack of centralized control. We do this for equal capacities and in two different information scenarios, evaluating its performance by analytic bounds, simulation as well as direct computation. In Section 6.1 we consider the special case of one file part, in Section 6.2 we consider the general case of M file parts. We find that even this naive strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller (cf. Section 3). This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bounds so that they are useful in practice.
The special case of one file part
Assumptions Let us start with the case M = 1. We must first specify what information is available to users. It makes sense to assume that each peer knows the number of parts into which the file is divided, M , and the address of the server. However, a peer might not know N , the total number of peers, nor its peers' addresses, nor if they have the file, nor whether they are at present occupied uploading to someone else.
We consider two different information scenarios. In the first one, List, the number of peers holding the file and their addresses are known. In the second one, NoList, the number and addresses of all peers are known, but not which of them currently hold the file. Thus, in List, downloading users choose uniformly at random between the server and the peers already having the file. In NoList, downloading users choose uniformly amongst the server and all their peers. If a peer receives a query from a single peer, he uploads the file to that peer. If a peer receives queries from multiple peers, he chooses one of them uniformly at random. The others remain unsuccessful in that round. Thus, in List transmission can fail only if too many users try to download simultaneously from the same uploader. In NoList, transmission might also fail if a user tries to download from a peer who does not yet have the file.
Theoretical Bounds
The following theorem explains how the expected makespan that is achieved by the randomized strategy grows with N , in both the List and the NoList scenarios.
Theorem 5 In the uplink-sharing model, with equal upload capacities, the expected number of rounds required to disseminate a single file to all peers in either the List or NoList scenario is Θ(log N ).
Proof. In the List scenario our simple randomized algorithm runs in less time than in the NoList scenario. Since we already have the lower bound given by Theorem 1, it suffices to prove that the expected running time in the NoList scenario is O(log N). There is also a similar direct proof that the expected running time under the List scenario is O(log N).
Suppose we have reached a stage in the dissemination at which n_1 peers (including the server) have the file and n_0 peers do not, with n_0 + n_1 = N + 1. (The base case is n_1 = 1, when only the server has the file.) Each of the peers that does not have the file randomly chooses amongst the server and all his peers (NoList) and tries to download the file. If more than one peer tries to download from the same place then only one of the downloads is successful. The proof has two steps.
(i) Suppose that n_1 ≤ n_0. Let i be the server or a peer who has the file and let I_i be an indicator random variable that is 0 if i uploads the file this round and 1 if it does not. Let Y = Σ_i I_i, where the sum is taken over all n_1 peers who have the file. Thus n_1 − Y is the number of uploads that take place. Then
$$E I_i = \left(1 - \frac{1}{N}\right)^{n_0} \le \left(1 - \frac{1}{2 n_0}\right)^{n_0} \le \frac{1}{\sqrt{e}}. \qquad (42)$$
Now since E(Σ_i I_i) = Σ_i E I_i, we have EY ≤ n_1/√e. Thus, by the Markov inequality (for a nonnegative random variable Y and any k, not necessarily an integer, P(Y ≥ k) ≤ (1/k) EY), taking k = (2/3) n_1 we have
$$P\left(n_1 - Y \equiv \text{number of uploads} \le \tfrac{1}{3} n_1\right) = P\left(Y \ge \tfrac{2}{3} n_1\right) \le \frac{n_1/\sqrt{e}}{\tfrac{2}{3} n_1} = \frac{3}{2\sqrt{e}} < 1. \qquad (43)$$
Thus the number of steps required for the number of peers who have the file to increase from n_1 to at least n_1 + (1/3) n_1 = (4/3) n_1 is bounded by a geometric random variable with mean µ = 1/(1 − 3/(2√e)). This implies that we reach a state in which more peers have the file than do not in an expected time that is O(log N). From that point we continue with step (ii) of the proof.
(ii) Suppose n_1 > n_0. Let j be a peer who does not have the file and let J_j be an indicator random variable that is 0 if peer j succeeds in downloading it this round and 1 if it does not. Let Z = Σ_j J_j, where the sum is taken over all n_0 peers who do not have the file. Let X be the number of the other n_0 − 1 peers that try to download from the same place as peer j. Then
$$P(J_j = 0) = E\left[\frac{n_1}{N}\,\frac{1}{1+X}\right] \ge E\left[\frac{n_1}{N}\,(1-X)\right] = \frac{n_1}{N}\left(1 - \frac{n_0 - 1}{N}\right) = \frac{n_1}{N}\left(1 - \frac{N - n_1}{N}\right) = \frac{n_1^2}{N^2} \ge \frac{1}{4}. \qquad (44)$$
Hence EZ ≤ (3/4) n_0 and so, again using the Markov inequality,
$$P\left(n_0 - Z \equiv \text{number of downloads} \le \tfrac{1}{8} n_0\right) = P\left(Z \ge \tfrac{7}{8} n_0\right) \le \frac{\tfrac{3}{4} n_0}{\tfrac{7}{8} n_0} = \frac{6}{7}. \qquad (45)$$
It follows that the number of peers who do not yet have the file decreases from n_0 to no more than (7/8) n_0 in an expected number of steps of no more than µ' = 1/(1 − 6/7) = 7. Thus the number of steps needed for the number of peers without the file to decrease from n_0 to 0 is O(log n_0) = O(log N). In fact, this is a weak upper bound. By more complicated arguments we can show that if n_0 = aN, where a ≤ 1/2, then the expected remaining time for our algorithm to complete under NoList is Θ(log log N). For a > 1/2 the expected time remains Θ(log N).
Simulation
For the problem with one server and N users we have carried out 1000 independent simulation runs for a large range of parameters, N = 2, 4, ..., 2^25. We found that the achieved expected makespan appears to grow as a + b × log_2 N. Motivated by this and the theoretical bound from Theorem 5 we fitted the linear model
$$y_{ij} = \alpha + \beta x_i + \epsilon_{ij}, \qquad (46)$$
where y_{ij} is the makespan for x_i = log_2 2^i, obtained in run j, j = 1, ..., 1000. Indeed, the model fits the data very well in both scenarios. We obtain the following results, which enable us to compare the expected makespan of the naive randomized strategy to that of a centralized controller. For List, the regression analysis gives a good fit, with a Multiple R-squared value of 0.9975 and significant p- and t-values. The makespan increases as
1.1392 + 1.1021 × log_2 N.  (47)
For NoList, there is more variation in the data than for List, but, again, the linear regression gives a good fit, with a Multiple R-squared of 0.9864 and significant p- and t-values. The makespan increases as 1.7561 + 1.5755 × log_2 N.
As expected, the additional information in the List scenario leads to a significantly smaller makespan than in NoList; in particular, the log-term coefficient is significantly smaller. In the List scenario, the randomized strategy achieves a makespan that is very close to the centralized optimum of 1 + ⌊log_2 N⌋ of Section 3: it is only suboptimal by about 10%. Hence even this simple randomized strategy performs well in both cases and very well when state information is available, suggesting that our bounds are useful in practice.
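For completeness, a fit of this form can be reproduced in a few lines of Python; the sketch below uses numpy's least-squares routine on placeholder makespan values (for instance, output of the round-simulation sketch above) rather than the data behind (46)-(47).

import numpy as np

# Illustrative least-squares fit of makespan ~ alpha + beta * log2(N),
# mirroring model (46). "observed" stands in for simulated makespans;
# the numbers here are placeholders, not the paper's data.
Ns = np.array([2 ** i for i in range(1, 11)])
observed = np.array([2.0, 3.1, 4.2, 5.4, 6.5, 7.7, 8.8, 9.9, 11.1, 12.2])

X = np.column_stack([np.ones_like(Ns, dtype=float), np.log2(Ns)])
(alpha, beta), _, _, _ = np.linalg.lstsq(X, observed, rcond=None)

fitted = X @ np.array([alpha, beta])
ss_res = np.sum((observed - fitted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
print(f"makespan ~ {alpha:.4f} + {beta:.4f} * log2(N),  R^2 = {1 - ss_res / ss_tot:.4f}")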
Computations
Alternatively, it is possible to compute the mean makespan analytically by considering a Markov chain on the state space 0, 1, 2, ..., N, where state i corresponds to i of the N peers having the file. We can calculate the transition probabilities p_ij. In the NoList case, for example, following the occupancy distribution (e.g., [18]), we obtain
$$p_{i,i+m} = \sum_{j=i-m}^{i} (-1)^{\,j-i+m}\, \frac{i!}{(i-j)!\,(i-m)!\,(j-i+m)!} \left(\frac{N-1-j}{N-1}\right)^{N-i}. \qquad (49)$$
Hence we can successively compute the expected hitting times k(i) of state N starting from state i via
$$k(i) = \frac{1 + \sum_{j>i} k(j)\, p_{ij}}{1 - p_{ii}}. \qquad (50)$$
The resulting formula is rather complicated, but can be evaluated exactly using arbitrary precision arithmetic on a computer. Computation times are long, so to keep them shorter we only work out the transition probabilities of the associated Markov chain exactly. Hitting times are then computed in double arithmetic, that is, to 16 significant digits. Even so, computations are only feasible up to N = 512 with our equipment, despite repeated efficiency enhancements. This suggests that simulation is the more computationally efficient approach to our problem. The computed mean values for List and NoList are shown in Tables 4 and 5 respectively. The difference from the simulated values is small, without any apparent trend. It can also be checked, by computing the standard deviation, that the computed mean makespan is contained in the approximate 95% confidence interval of the simulated mean makespan. The only exception is N = 128 for NoList, where it is just outside by approximately 0.0016.
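As an illustration, the following Python sketch evaluates the transition probabilities (49) exactly with rational arithmetic and then applies recursion (50) in floating point, mirroring the computation strategy described above. It is not the authors' code, and the reading of state i as the number of nodes (server included) holding the file, with the chain started in state 1, is an assumption made for this sketch.

from fractions import Fraction
from math import factorial

def p_transition(i, m, N):
    """p_{i,i+m} from equation (49), computed exactly."""
    if m > min(i, N - i):    # cannot hit more holders than exist or than requesters
        return Fraction(0)
    total = Fraction(0)
    for j in range(i - m, i + 1):
        coeff = Fraction(factorial(i),
                         factorial(i - j) * factorial(i - m) * factorial(j - i + m))
        total += (-1) ** (j - i + m) * coeff * Fraction(N - 1 - j, N - 1) ** (N - i)
    return total

def mean_makespan_nolist(N):
    """Expected hitting time of state N from state 1 via recursion (50)."""
    k = [0.0] * (N + 1)                          # k(N) = 0
    for i in range(N - 1, 0, -1):
        p_ii = float(p_transition(i, 0, N))
        acc = 1.0
        for m in range(1, min(i, N - i) + 1):    # reachable states j = i + m
            acc += k[i + m] * float(p_transition(i, m, N))
        k[i] = acc / (1.0 - p_ii)
    return k[1]

for N in (4, 8, 16, 32, 64):
    print(N, round(mean_makespan_nolist(N), 3))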
Thus, the computations confirm that our simulation results are accurate. Since simulation results are also obtained more efficiently, we shall stick to simulation when investigating the general case of M file parts in the next section.
The general case of M file parts
Assumptions
We now consider splitting the file into several file parts. With the same assumptions as in the previous section, we repeat the analysis for List for various values of M . Thus, in each round, a downloading user connects to a peer chosen uniformly at random from those peers that have at least one file part that the user does not yet have. An uploading peer randomly chooses one out of the peers requesting a download from him. He uploads to that peer a file part that is randomly chosen from amongst those that he has and the peer still needs.
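A round of this M-part List strategy can be sketched in Python as follows (again a minimal illustration under the stated assumptions, not the code used for the reported results).

import random

def one_round_parts(parts_held, M):
    """One round of the M-part List strategy.

    parts_held: dict node_id -> set of part indices held
                (node 0 is the server and holds all M parts from the start).
    """
    # Each incomplete peer requests from a uniformly chosen node that has
    # at least one part it still needs.
    requests = {}
    for peer, have in parts_held.items():
        if len(have) == M:
            continue
        candidates = [u for u, uhave in parts_held.items()
                      if u != peer and uhave - have]
        if candidates:
            requests.setdefault(random.choice(candidates), []).append(peer)

    # Each uploader serves one randomly chosen requester with one randomly
    # chosen part that the requester still needs.
    for uploader, requesters in requests.items():
        peer = random.choice(requesters)
        useful = list(parts_held[uploader] - parts_held[peer])
        parts_held[peer].add(random.choice(useful))

def makespan_parts(N, M, runs=20):
    total = 0
    for _ in range(runs):
        state = {0: set(range(M))}
        state.update({p: set() for p in range(1, N + 1)})
        rounds = 0
        while any(len(h) < M for h in state.values()):
            one_round_parts(state, M)
            rounds += 1
        total += rounds
    return total / runs

print(makespan_parts(N=128, M=4))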
Simulation
Again, we consider a large range of parameters. We carried out 100 independent runs for each N = 2, 4, ..., 2^15. For each value of M = 1-5, 8, 10, 15, 20, 50 we fitted the linear model (46). Table 6 summarizes the simulation results. The Multiple R-squared values indicate a good fit, although the fact that these decrease with M suggests there may be a finer dependence on M or N. In fact, we obtain a better fit using Generalized Additive Models (cf. [14]). However, our interest here is not in fitting the best possible model, but in comparing the growth rate with N to the one obtained in the centralized case in Section 3. Moreover, from the diagnostic plots we note that the actual performance for large N is better than given by the regression line, increasingly so for increasing M. In each case, we obtain significant p- and t-values. The regression 0.7856 + 1.1520 × log_2 N for M = 1 does not quite agree with 1.1392 + 1.1021 × log_2 N found in (47). It can be checked, by repeating the analysis there for N = 2, 4, ..., 2^15, that this is due to the different range of N. Thus, our earlier result of 1.1021 might be regarded as more reliable, being based on N ranging up to 2^25.
We conclude that, as in the centralized scenario, the makespan can also be reduced significantly in a decentralized scenario even when a simple randomized strategy is used to disseminate the file parts. However, as we note by comparing the second and fourth columns of Table 6, as M increases the achieved makespan compares less well relative to the centralized minimum of 1 + (1/M)⌊log_2 N⌋. In particular, note the slower decrease of the log-term coefficient. This is depicted in Figure 3.
Still, we have seen that even this naive randomized strategy disseminates the file in an expected time whose growth rate with N is similar to the growth rate of the minimal time that we have found for a centralized controller in Section 3, confirming our performance bounds are useful in practice. This is confirmed also by initial results of current work on the performance evaluation of the Bullet' system [20].
The program code for the simulations as well as the computations and the diagnostic plots used in this section are available on request and will be made available via the Internet.
Discussion
In this paper, we have given three complementary solutions for the minimal time to fully disseminate a file of M parts from a server to N end users in a centralized scenario, thereby providing a lower bound on and a performance benchmark for P2P file dissemination systems. Our results illustrate how the P2P approach, together with splitting the file into M parts, can achieve a significant reduction in makespan. Moreover, the server has a reduced workload when compared to the traditional client/server approach in which it does all the uploads itself. We also investigate the part of the loss in efficiency that is due to the lack of centralized control in practice. This suggests that the performance of necessarily decentralized P2P file dissemination systems should still be close to our performance bound confirming their practical use. It would now be very interesting to compare dissemination times of the various efficient real overlay networks directly to our performance bound. A mathematical analysis of the protocols is rarely tractable, but simulation or measurements such as in [17] and [30] for the BitTorrent protocol can be carried out in an environment suitable for this comparison. Cf. also testbed results for Slurpie [33] and simulation results for Avalanche [12]. It is current work to compare our bounds to the makespan obtained by Bullet' [20]. Initial results confirm their practical use further.
In practice, splitting the file and passing on extra information has an overhead cost. Moreover, with the Transmission Control Protocol (TCP), longer connections are more efficient than shorter ones. TCP is used practically everywhere except for the Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP) for real-time applications. For further details see [35]. Still, with an overhead cost it will not be optimal to increase M beyond a certain value. This could be investigated in more detail.
In the proof of Lemma 1 and Lemma 2 we have used fair sharing and continuity assumptions. It would be of interest to investigate whether one of them or both can be relaxed.
It would be interesting to generalize our results to account for a dynamic setting with peers arriving and perhaps leaving when they have completed the download of the file. In Internet applications users often connect for only relatively short times. Work in this direction, using a fluid model to study the steady-state performance, is pursued in [31] and there is other relevant work in [37].
Also of interest would be to extend our model to consider users who prefer to free-ride and do not wish to contribute uploading effort, or users who might want to leave the system once they have downloaded the whole file, a behaviour sometimes referred to as easy-riding. The BitTorrent protocol, for example, implements a choking algorithm to limit free-riding.
In another scenario it might be appropriate to assume that users push messages rather than pull them. See [11] for an investigation of the design space for distributed information systems. The push-pull distinction is also part of their classification. In a push system, the centralized case would remain the same. However, we expect the decentralized case to be different. There are a number of other interesting questions which could be investigated in this context. For example, what happens if only a subset of the users is actually interested in the file, but the uploaders do not know which.
From a mathematical point of view it would also be interesting to consider additional download constraints explicitly as part of the model, in particular when up- and download capacities are all different and not positively correlated. We might suppose that user i can upload at a rate C_i and simultaneously download at rate D_i.
More generally, one might want to assume different capacities for all links between pairs. Or, phrased in terms of transmission times, let us assume that for a file to be sent from user i to user j it takes time t_ij. Then we obtain a transportation network, where instead of link costs we now have link delays. This problem can be phrased as a one-to-all shortest path problem if C_j is at least N+1. This suggests that there might be some relation which could be exploited. On the other hand, the problem is sufficiently different so that greedy algorithms, induction on nodes and dynamic programming do not appear to work. Background on these can be found in [4] and [3]. For M = 1, Prüfer's (N+1)^(N-1) labelled trees [6] together with the obvious O(N) algorithm for the optimal scheduling given a tree provide an exhaustive search. A Branch and Bound algorithm can be formulated.
| 11,555 |
cs0603081
|
1579126504
|
Shock physics experiments are often complicated and expensive. As a result, researchers are unable to conduct as many experiments as they would like - leading to sparse data sets. In this paper, Support Vector Machines for regression are applied to velocimetry data sets for shock damaged and melted tin metal. Some success at interpolating between data sets is achieved. Implications for future work are discussed.
|
As noted above, SVM for regression (as opposed to SVM classification) is rarely applied in physics. There are, however, several successful examples of the support vector regression application. In @cite_3 the authors introduced the regression type of the SVM technique to the civil engineering community and showed that SVM can be successfully applied to the problem of stream flow data estimation based on records of rainfall and other climatic data. By using three types of kernels, Polynomial, RBF, and Neural Network, and choosing the best values for the SVM free parameters via trial and error, the authors point out that the SVM with the RBF kernel performs the best. Finally, this research is the first attempt to apply support vector regression in the data analysis of VISAR measurements obtained from experiments on shock melted and damaged metal.
|
{
"abstract": [
"The rapid advance in information processing systems in recent decades had directed engineering research towards the development of intelligent systems that can evolve models of natural phenomena automatically—“by themselves,” so to speak. In this respect, a wide range of machine learning techniques like decision trees, artificial neural networks (ANNs), Bayesian methods, fuzzy-rule based systems, and evolutionary algorithms have been successfully applied to model different civil engineering systems. In this study, the possibility of using yet another machine learning paradigm that is firmly based on the theory of statistical learning, namely that of the support vector machine (SVM), is investigated. An interesting property of this approach is that it is an approximate implementation of a structural risk minimization (SRM) induction principle that aims at minimizing a bound on the generalization error of a model, rather than minimizing only the mean square error over the data set. In this paper, the basic ..."
],
"cite_N": [
"@cite_3"
],
"mid": [
"2037931255"
]
}
|
Application of Support Vector Regression to Interpolation of Sparse Shock Physics Data Sets
|
Experimental physics, along with many other fields in applied and basic research, uses experiments, physical tests, and observations to gain insight into various phenomena and to validate hypotheses and models. Shock physics is a field that explores the response of materials to the extremes of pressure, deformation, and temperature which are present when shock waves interact with those materials [17]. High explosive (HE) or propellant guns are often used to generate these strong shock waves. Many different diagnostic approaches have been used to probe these phenomena [8].
Because of the energetic nature of the shock wave drive, often a large amount of experimental equipment is destroyed during the test. As in many other applied sciences, the cost and complexity of repeating a significant number of experiments - or of conducting a systematic study of some physical property as a function of another - are simply too great to achieve the degree of completeness and detail that a researcher might desire. Often a researcher is left with a sparse data set - one that numbers too few experiments or samples a systematic variation with too few points.
The present work applies Support Vector Machine techniques to the analysis of surface velocimetry data taken from HE shocked tin samples using a laser velocity interferometer called a VISAR [2,3,6]. These experiments have been described elsewhere in detail [7]. For the purposes of this paper, it is sufficient that the VISAR velocimetry data presented here describe the response of the free surface of the metal coupon to the shock loading and release from the HE generated shock wave. The time dependence of the magnitude of the velocity can be analyzed to provide information on the yield strength of the material, and the thickness of the leading damage layer that may separate from the bulk material during the shock/release of the sample.
In section 2 we describe the problem and include more details on the VISAR system (section 2.2). In section 3 the Support Vector Machine technique is presented, and its applicability to our problem is discussed. In section 4 we evaluate the results achieved. Finally, the paper ends with a discussion of related work and conclusions.
A cylindrically shaped metal coupon is positioned on top of a 0.5 inch thick high explosive (HE) disk. Both the metal and HE coupons are 2 inches in diameter. A point detonator is glued to the center of the HE disc in order to perform a single-point ignition symmetrically. Note that all of the components of the experimental setup have a common axis of symmetry. During an experiment a VISAR probe, located on the axis above the metal sample, transmits a laser beam, and the velocity of the top surface of the metal is inferred from the Doppler-shifted light reflected from the coupon (see section 2.2 for more details). The time series of the velocity measured throughout an experiment constitutes the VISAR velocimetry.
In the same experiment a proton beam is shot perpendicular to the experiment's axis. By focussing the beam, a Proton Radiography (PRAD) image of the current experiment state is obtained. This is somewhat similar to X-ray imagery, although Proton Radiography can produce up to 20 or 30 images in a single experiment with an image exposure time of < 50ns. This paper is devoted to VISAR velocimetry data analysis, while discussion about Proton Radiography imagery analysis may be found elsewhere [7].
There are two parameters that vary between different experiments: the metal type of the sample and the thickness of the coupon. By changing the thickness of the metal coupon and the type of metal in the initial setup of an experiment, experimentalists attempt to see the changes in physical processes across the set of experiments. For simplicity, only the experiments on tin samples are described in this paper.
VISAR data
A Velocity Interferometer System for Any Reflector (VISAR) is a system designed to measure the Doppler shift of a laser beam reflected from the moving surface under consideration so as to capture changes of the velocity of the surface. The VISAR system is able to detect very small velocity changes (a few meters per second). Moreover, it is able to measure even the velocity changes of a diffusely reflecting surface.
Figure 2: Schematic view of a VISAR system.
A VISAR system consists of lasers, optical elements, detectors, and other components as shown in figure 2. The light is delivered from the laser via optical fiber to the probe and is focussed in such a way that some of the light reflects from the moving surface back to the probe. The reflected laser light is transmitted to the interferometer. Note that since the reflected light is Doppler shifted, one can extract the velocity of the moving surface from the wavelength change of the light. The interferometer is able to identify the increase or decrease of the wavelength of a beam. The captured Doppler-shifted light, the frequency of which is different from that of the initial beam, is transmitted into the interferometer depicted in figure 3, where the beam is split into two. Using optics, two beams travel different optical distances. By adjusting the length of the paths of the beams, the beams are made to interfere with each other before reaching the photodetectors. Finally, the information is extracted from a VISAR system by measuring the intensity signals from the photodetectors. For more details on the VISAR system consult [2,3,6].
This method, widely used in experiments similar to the one described in section 2.1, is reasonably reliable. For instance, the measurements obtained using a VISAR system are in agreement with the results obtained by Makaruk et al. [10] after the positions of different fragments visible on a PRAD image were measured and their corresponding velocities were computed. Since this method of information extraction is independent of VISAR, it additionally validates the VISAR results.
Filling gaps of VISAR data
The problem considered in this paper, given a limited number of experiments that are difficult and costly to perform, is to estimate the measurement values for the missing experiments, or for experiments whose data recordings were not successful. This problem is also strongly related to the one of identifying "outlier" experiments, i.e., those experiments that for some reason went wrong. The data estimation methods can show which experimental data do not fit with the other "good" experiments.
The task of increasing the informational output of VISAR data is important, due to the limited number of experiments, their difficult implementation, and high cost. Physicists, who attempt to explain all the phenomena of these experiments, can gain better physical insight from the combination of the VISAR data and the estimations than from the experimental data above. Another important application of the velocity estimations is for comparison with various kinds of hydrocode models generated by large programs that simulate shock or other hard/impossible to perform experiments. The PRAD data, and the other types of information collected during these experiments, can also be supported and even improved by extending the velocity estimations.
Our approach
3.1 Equivalent problem
Recall that each VISAR data point is a triple (t, w, v), where t is the time when the measurement was recorded, w is the thickness of the sample, and v is the measured velocity. One can see that these data points lie on a 2-dimensional surface in the 3-dimensional space. Hence the problem identified in section 2.3 can be transformed into the task of reconstructing this 2-dimensional surface from the given VISAR data.
In other words, the problem is to find a regression of velocity on the time and thickness of a sample. Formally, given three random variables, velocity, time, and thickness, V, T, W : (Ω, A, P) → (Γ, S), that map a probability space (Ω, A, P) into a measure space (Γ, S), the problem is to estimate coefficients λ ∈ Λ such that the error e = V − ρ(T, W; λ) is small. Here ρ is a regression function, ρ : Γ² × Λ → Γ, where Λ ⊆ Γ is some set of coefficients. In the case of the problem considered in this article, Γ = R. Note that the variables T and W are the two factors of the regression, and V is the observation.
Velocity surface reconstruction using Support Vector Regression
The Support Vector Machine (SVM) uses supervised learning to estimate a functional input/output relationship from a training data set. Formally, given a training data set of k points {(x_i, y_i) | x_i ∈ X, y_i ∈ Y, i = 1, ..., k}, generated independently and at random by some unknown function f, the Support Vector Machine method finds an approximation of the function, assuming f is of the form
f(x) = w · φ(x) + b,  (1)
where φ is a nonlinear mapping φ : X → H, b ∈ Y, w ∈ H. Here X ⊆ R^n is an input space, Y ⊆ R is an output space, and H is a high-dimensional feature space. The coefficients w and b are found by minimizing the regularized risk [11]
$$R = \sum_{i=1}^{k} \mathrm{Loss}(f(x_i), y_i) + \lambda \|w\|^2,$$
that is, an empirical risk, defined via a loss function, complemented with a regularization term. In this paper we use the ε-insensitive loss function [16] defined as
$$\mathrm{Loss}(f(x), y) = \begin{cases} |f(x) - y| - \varepsilon & \text{if } |f(x) - y| \ge \varepsilon, \\ 0 & \text{otherwise.} \end{cases}$$
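As a concrete illustration (not taken from the paper), the ε-insensitive loss can be written directly in Python:

def eps_insensitive_loss(prediction, target, eps):
    """epsilon-insensitive loss: zero inside the eps-tube, linear outside."""
    residual = abs(prediction - target)
    return residual - eps if residual >= eps else 0.0

# Points within eps of the prediction contribute nothing to the risk:
print(eps_insensitive_loss(1.05, 1.0, eps=0.1))   # 0.0
print(eps_insensitive_loss(1.30, 1.0, eps=0.1))   # approximately 0.2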
Note also that the Support Vector Machine is a method involving kernels. Recall that the kernel of an arbitrary function g : X → Y is an equivalence relation on X defined as
ker(g) = {(x_1, x_2) | x_1, x_2 ∈ X, g(x_1) = g(x_2)} ⊆ X × X.
Originally, the SVM technique was applied to classification problems, in which the algorithm finds the maximum-margin hyperplane in the transformed feature space H that separates the data into two classes. The result of an SVM used for regression estimation (Support Vector Regression, SVR) is a model that depends only on a subset of training data, because the loss function used during the modeling omits the training data points inside the ε-tube (points that are close to the model prediction).
We selected the SVM approach for this problem because of the attractive features pointed out by Shawe-Taylor and Cristianini [12]. One of these features is the good generalization performance which an SVM achieves by using a unique principle of structural risk minimization [15]. In addition, SVM training is equivalent to solving a linearly constrained quadratic programming problem that has a unique and globally optimal solution, hence there is no need to worry about local minima. A solution found by SVM depends only on a subset of training data points, called support vectors, making the representation of the solution sparse.
Finally, since the SVM method involves kernels, it allows us to deal with arbitrarily large feature spaces without having to compute explicitly the mapping φ from the data space to the feature space, hence avoiding the need to compute the product w · φ(x) in (1). In other words, a linear algorithm that uses only dot products can be transformed by replacing the dot products with a kernel function. The resulting algorithm becomes non-linear, although it is still linear in the range of the mapping φ. We do not need to compute φ explicitly, because of the application of kernels. This transformation of the algorithm from linear to non-linear form is known as the kernel trick [1].
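For instance (an illustrative sketch, not code from the paper), the kernel trick amounts to replacing each dot product φ(x) · φ(y) by a kernel evaluation such as the Gaussian RBF used later in this work:

import numpy as np

def rbf_kernel(x, y, gamma):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

# Kernel (Gram) matrix for a toy data set of (time, thickness) pairs;
# the feature map phi is never computed explicitly.
X = np.array([[0.0, 0.25], [1.0, 0.3125], [2.0, 0.375]])
K = np.array([[rbf_kernel(a, b, gamma=0.1) for b in X] for a in X])
print(K)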
On the other hand, since the available data are the VISAR measurements that capture some characteristics of the unknown function, and each data point is represented by several features, the data are suitable for the application of supervised learning methods, such as SVR. The velocity of each data point is the target value for SVR, whereas the thickness and time are the feature values.
Figure 4: Available VISAR data set.
In figure 4 the VISAR data set is depicted. It is important to note that the data is significantly stretched along the time dimension. This happens because the whole dataset is comprised of a number of time series corresponding to a set of measured experiments. During each experiment, the VISAR readings were recorded every 2 ns for as long as 6000 time steps. However, for some of the experiments the VISAR system finished recording useful information earlier than for others. The data were cut to the shortest sequence (1656 time steps), since it has been identified experimentally that SVM performs better on aligned data. Across the thickness dimension, the data covers thicknesses from 0.25 inch up to 0.5 inch in 0.0625 inch increments. In total, 5 time sequences of 1656 points comprise the data used by the SVM method. Figure 5 presents the complete data set projected onto the Time × Velocity plane. The original data are represented by dotted lines, and the version smoothed with a sliding triangular window is depicted with solid lines. Note that the number of time steps, where each step is equal to 2 ns, is shown on the abscissa.
In order to identify the best application of the SVM method to the VISAR data, we use standard k-fold cross-validation. The data is divided into k parts, out of which k − 1 parts are used for training the learning machine, and the last part is used for its validation. The process is repeated k times using each part of the partitioning precisely once for validation.
Evaluation/Results
There are several factors affecting the quality of the resulting regression. The error of VISAR data and the errors occurring during the data preprocessing affect the accuracy of the reconstructed surface the most. It is generally agreed that a VISAR system measures the velocity values with an absolute accuracy of 3-5%. This is an approximate error calculated from differences between repeated experiments. Although the number of repeated experiments was too small to allow a more robust statistical analysis, this level of uncertainty is in the range of values generally agreed on by VISAR experimenters [2,3,6]. This error together with the noise transfers into the regression result. In addition, since the ignition time (the start of the experiment) was different for different experiments, the data has to be time-aligned so as to make each time series start exactly from the moment of the detonation. This introduces another potential error into the regression.
The accuracy of the reconstructed surface is also affected by specific features of the VISAR data. The length of each of the time series produced by the VISAR system during different experiments always differs. We have observed that the SVM performs better on data combined from time series of the same length than from those of different lengths. Hence, the lengths of the data were aligned. In addition, the three elements of each data point (velocity, time, and thickness) have orders of magnitude 10^3, 10^-6, and 1, respectively. This is why it is important to scale the data to improve the performance of the SVM.
Unfortunately, the application of SVR directly to the set of smoothed and aligned data yields overfitted results, because the data step in the time direction is much smaller than the step in the other directions, and hence for any chosen data range there are more data points along the time axis than along the thickness axis. The overfitting problem is solved by scaling the data in such a way that the distance between two neighbor points along any axis is equal to 1.
Using nonlinear kernels achieves better performance when the dynamics of an experiment are non-linear. It is known that Gaussian Radial Basis Function (RBF) kernels perform well under general smoothness assumptions [13], hence a Gaussian RBF k(x, y) = exp(−γ‖x − y‖²) has been chosen as the kernel for the reconstruction. Additionally, it has been experimentally determined that SVM techniques with simpler kernels, such as polynomial kernels, take longer to train and return unsatisfactory results. The performance of the SVR with an RBF kernel is directly affected by three parameters: the radius γ of the RBF, the upper bound C on the Lagrangian multipliers (also called a regularization constant or a capacity factor), and the size ε of the ε-tube (also called an error-insensitive zone or an ε-margin). Note that ε determines the accuracy of the regression, namely the amount by which a point from the training set is allowed to diverge from the regression. k-fold cross-validation is performed in order to determine the optimal parameter values under which SVR produces the best approximation of the surface. An l_2 error is computed for each parameter instantiation after finishing the cross-validation. It can be seen in figure 6 that the error increases as the radius γ goes up. The error also increases when ε becomes bigger. One can also see that the change of C affects the error the most when γ is the smallest, and the influence of C on the error decreases as γ goes up, becoming insignificant when γ exceeds 0.3. At the same time, given a small γ, parameter C affects the error more as ε decreases. The error analysis suggests that when the tuple (γ, C, ε) is around (0.1, 0.75-1.0, 0.001), the error is the smallest. This error analysis produces a range of suboptimal values for the parameters. Expert knowledge is then used to identify the final model that returns the most accurate velocity surface, shown in figure 7. Once the surface is found, it is possible to obtain a velocity value for any (time, thickness) pair. Assuming the surface is accurate enough, failed VISAR data that deviate considerably from the surface can be identified. The surface provides significantly more information about velocity changes across the thickness dimension than VISAR readings alone. It can also provide velocity time series for an experiment in which only PRAD data were measured successfully, improving the quality of the analysis for this experiment and, consequently, increasing the understanding of the whole physical system.
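As an illustration only (not the authors' actual code or parameter grid), an equivalent cross-validated search over (γ, C, ε) for an RBF-kernel SVR can be sketched with scikit-learn; the data below are random placeholders laid out on a rescaled (time, thickness) grid merely to make the snippet self-contained.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# X: (time, thickness) pairs rescaled so neighbouring grid points are 1 apart,
# y: velocities. Random placeholders stand in for the real VISAR data.
rng = np.random.default_rng(0)
X = np.column_stack([np.repeat(np.arange(200.0), 5), np.tile(np.arange(5.0), 200)])
y = rng.normal(size=len(X))

# Hypothetical grid bracketing the parameter ranges discussed above.
param_grid = {
    "gamma": [0.05, 0.1, 0.3],
    "C": [0.5, 0.75, 1.0],
    "epsilon": [0.001, 0.01, 0.1],
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, search.best_score_)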
It should be noted that in this paper we used an implementation of the SVM technique called SVM-light. For more information about its implementation details see [9].
Conclusions and Future Work
In this paper we described the problem of VISAR data analysis in which we attempted to estimate the data between the points measured by VISAR. The Support Vector Regression method was used to reconstruct a 2-dimensional velocity surface in the Time × Thickness × Velocity data space, resulting in a successful estimation. The SVR free parameters were obtained from a grid search as well as from expert knowledge.
The velocity surface provides considerably more information about the velocity behavior as a function of time and thickness than the experimentally produced VISAR measurements alone. This may significantly extend the scientific value of VISAR data into other areas of shock physics experiment analysis, such as PRAD imagery analysis and hydrocode simulations.
On the other hand, support vector regression does not require a vast amount of data for producing good velocity estimations. This is very helpful because of the high cost and complexity of experiments, and limited amount of available data.
In addition, the estimated velocity surface can help to identify experiment outliers: those experiments that for some reason went wrong. The data obtained from these experiments will be significantly different from those suggested by the velocity surface.
There are several directions in which this work might advance. One of these is to investigate the possibility of using a custom kernel instead of a standard Gaussian. Intuitively, an elliptical kernel that accounts for high density of the data in one direction and sparsity in all other directions may improve the results of support vector regression.
During regression performed by the support vector machine method, we need to identify optimal values for the SVM free parameters. In this paper a grid search and expert knowledge are used (see section 4), essentially leading to suboptimal parameter values. Deriving an online learning algorithm for SVM parameter fitting specific to the VISAR data might be another direction of further research.
In addition, note that an SVM system used for regression outputs a point estimate. However, most of the time we wish to capture uncertainty in the prediction, hence estimating the conditional distribution of the target values given the feature values is more attractive. There are a number of different extensions to the SVM technique and hybrids of SVM with Bayesian methods, such as relevance vector machines and Bayesian SVMs, that use probabilistic approaches. Exploring these methods could give significantly more information about the underlying data.
| 3,474 |
cs0601131
|
2949631308
|
In this paper, computational aspects of the panel aggregation problem are addressed. Motivated primarily by applications of risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments. The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets. In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.
|
Linear averaging @cite_12, @cite_1 is arguably the most popular aggregation principle, given its simplicity, various axiomatic justifications, and documented empirical success. To illustrate this natural approach, consider the panel exhibited in Table 2. Here, three judges provide forecasts for three events, a conjunction and its conjuncts. The "Aggregate" forecast is the simple unweighted average of the three judges' forecasts. Though appealing, linear averaging is not without pitfalls, as can be illustrated with a few examples.
|
{
"abstract": [
"Abstract Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts. Furthermore, simple combination methods often work reasonably well relative to more complex combinations. This paper provides a review and annotated bibliography of that literature, including contributions from the forecasting, psychology, statistics, and management science literatures. The objectives are to provide a guide to the literature for students and researchers and to help researchers locate contributions in specific areas, both theoretical and applied. Suggestions for future research directions include (1) examination of simple combining approaches to determine reasons for their robustness, (2) development of alternative uses of multiple forecasts in order to make better use of the information they contain, (3) use of combined forecasts as benchmarks for forecast evaluation, and (4) study of subjective combination procedures. Finally, combining forecasts should become part of the mainstream of forecasting practice. In order to achieve this, practitioners should be encouraged to combine forecasts, and software to produce combined forecasts easily should be made available.",
"This paper concerns the combination of experts' probability distributions in risk analysis, discussing a variety of combination methods and attempting to highlight the important conceptual and practical issues to be considered in designing a combination process in practice. The role of experts is important because their judgments can provide valuable information, particularly in view of the limited availability of “hard data” regarding many important uncertainties in risk analysis. Because uncertainties are represented in terms of probability distributions in probabilistic risk analysis (PRA), we consider expert information in terms of probability distributions. The motivation for the use of multiple experts is simply the desire to obtain as much information as possible. Combining experts' probability distributions summarizes the accumulated information for risk analysts and decision-makers. Procedures for combining probability distributions are often compartmentalized as mathematical aggregation methods or behavioral approaches, and we discuss both categories. However, an overall aggregation process could involve both mathematical and behavioral aspects, and no single process is best in all circumstances. An understanding of the pros and cons of different methods and the key issues to consider is valuable in the design of a combination process for a specific PRA. The output, a “combined probability distribution,” can ideally be viewed as representing a summary of the current state of expert opinion regarding the uncertainty of interest."
],
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"1993387039",
"2009083767"
]
}
| 0 |
||
cs0601131
|
2949631308
|
In this paper, computational aspects of the panel aggregation problem are addressed. Motivated primarily by applications of risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments. The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets. In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.
|
@cite_20 consider a Bayesian approach to reconciling probability forecasts, whereby "noisy" observations @math are assumed to arise from a coherent set @math . CAP can be viewed as a special case of their model, since as discussed above, the solution to ) admits a Bayesian interpretation as the maximum-likelihood coherent forecasts given additive white noise corrupted observations @math . However, note that @cite_20 sought to eliminate incoherence from a single judge, whereas CAP was introduced to address the panel aggregation problem. Moreover, Osherson and Vardi were motivated by non-statistical interpretations of CAP and, as here, addressed the computational issue of implementing CAP.
|
{
"abstract": [
"Abstract : This paper investigates the question of how to reconcile incoherent probability assessments, i.e., assessments that are inconsistent with the laws of probability. A general model for the analysis of probability assessments is introduced, and two approaches to the reconciliation problem are developed. In the internal approach, one estimates the subject's true probabilities on the basis of his assessments. In the external approach, an external observer updates his own coherent probabilities in the light of the assessments made by the subject. The two approaches are illustrated and discussed. Least-squares procedures for reconciliation are developed within the internal approach. (Author)"
],
"cite_N": [
"@cite_20"
],
"mid": [
"1509045286"
]
}
| 0 |
||
cs0601131
|
2949631308
|
In this paper, computational aspects of the panel aggregation problem are addressed. Motivated primarily by applications of risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments. The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets. In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.
|
A panel-aggregation problem is addressed in the "online" learning model, which is frequently studied in learning theory @cite_16 @cite_17 . In that setting, a panel of experts predicts the true outcome of a set of events. A central agent constructs its own forecast by fusing the experts' predictions, and upon learning the truth, suffers a loss, sometimes specified by a quadratic penalty function. In repeated trials, the agent updates its fusion rule (e.g., the "weights" in a weighted average), taking into account the performance of each expert. Under minimal assumptions on the evolution of these trials, bounds are derived that compare the trial-averaged performance of the central agent with that of the best (weighted combination of) expert(s). In contrast to the current framework, the online model typically assumes that each expert provides a forecast for the same event or partition of events. Thus, fusion strategies such as weighted averaging are appropriate in the online model, for the same reasons discussed above. Also, observe that the present model concerns a single "trial", not many.
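To make the online-experts setting concrete, a standard exponentially weighted average forecaster, one common instantiation of the weight-update schemes studied in this line of work, can be sketched as follows (the learning rate, quadratic loss, and toy forecasts are illustrative assumptions, not details taken from @cite_16 or @cite_17).

import math

def exponential_weights(expert_forecasts, outcomes, eta=0.5):
    """Aggregate per-round expert forecasts by an exponentially weighted average.

    expert_forecasts: list of rounds, each a list of probabilities (one per expert)
    outcomes        : list of realized outcomes in {0, 1}, one per round
    Weights are updated multiplicatively from each expert's quadratic loss.
    """
    n = len(expert_forecasts[0])
    weights = [1.0] * n
    aggregates = []
    for forecasts, y in zip(expert_forecasts, outcomes):
        total = sum(weights)
        aggregates.append(sum(w * f for w, f in zip(weights, forecasts)) / total)
        weights = [w * math.exp(-eta * (f - y) ** 2)
                   for w, f in zip(weights, forecasts)]
    return aggregates

print(exponential_weights([[0.9, 0.4, 0.6], [0.8, 0.3, 0.7]], [1, 1]))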
|
{
"abstract": [
"We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts . Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictins. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition learning algorithms with performance bounds that improve on the best results currently know in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.",
"In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line."
],
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"1979675141",
"1988790447"
]
}
| 0 |
||
cs0601131
|
2949631308
|
In this paper, computational aspects of the panel aggregation problem are addressed. Motivated primarily by applications of risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments. The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets. In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.
|
Finally, proponents of Dempster-Shafer theory @cite_7 (and associated fusion rules) object to probability as an idiom for belief, in part because of its inability to distinguish uncertainty from ignorance. The merits of Dempster-Shafer aside, one could argue for abstention as an expression of ignorance. As the preceding examples illustrate, even abstaining experts may disagree (i.e., experts' forecasts may be mutually incoherent), and therefore the panel aggregation problem remains. Thus, CAP is a natural aggregation principle in the setting where judges express uncertainty with probability and ignorance through abstention, and thereby extends the utility of probabilistic forecasts by affording experts more expressive beliefs with abstention.
|
{
"abstract": [
"Both in science and in practical affairs we reason by combining facts only inconclusively supported by evidence. Building on an abstract understanding of this process of combination, this book constructs a new theory of epistemic probability. The theory draws on the work of A. P. Dempster but diverges from Depster's viewpoint by identifying his \"lower probabilities\" as epistemic probabilities and taking his rule for combining \"upper and lower probabilities\" as fundamental. The book opens with a critique of the well-known Bayesian theory of epistemic probability. It then proceeds to develop an alternative to the additive set functions and the rule of conditioning of the Bayesian theory: set functions that need only be what Choquet called \"monotone of order of infinity.\" and Dempster's rule for combining such set functions. This rule, together with the idea of \"weights of evidence,\" leads to both an extensive new theory and a better understanding of the Bayesian theory. The book concludes with a brief treatment of statistical inference and a discussion of the limitations of epistemic probability. Appendices contain mathematical proofs, which are relatively elementary and seldom depend on mathematics more advanced that the binomial theorem."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2797148637"
]
}
| 0 |
||
physics0512106
|
1659336119
|
Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them however is generally expensive. We propose here a measure of similarities between vertices based on random walks which has several important advantages: it captures well the community structure in a network, it can be computed efficiently, and it can be used in an agglomerative algorithm to compute efficiently the community structure of a network. We propose such an algorithm, called Walktrap, which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and edges in the input graph). Extensive comparison tests show that our algorithm surpasses previously proposed ones concerning the quality of the obtained community structures and that it stands among the best ones concerning the running time.
|
Many algorithms to find community structures in graphs exist. Most of them result from very recent works, but this topic is related to the classical problem of graph partitioning, which consists in splitting a graph into a given number of groups while minimizing the cost of the edge cut @cite_18 @cite_43 @cite_45 . However, these algorithms are not well suited to our case because they need the number of communities and their size as parameters. The recent interest in the domain started with a new approach proposed by Girvan and Newman @cite_4 @cite_39 : the edges with the largest betweenness (the number of shortest paths passing through an edge) are removed one by one in order to split the graph hierarchically into communities. This algorithm runs in time @math . Similar algorithms were proposed by Radicchi @cite_13 and by Fortunato @cite_34 . The first one uses a local quantity (the number of loops of a given length containing an edge) to choose the edges to remove and runs in time @math . The second one uses a more complex notion of information centrality that gives better results but poor performance in @math .
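For illustration (not part of the surveyed works themselves), the divisive edge-betweenness approach of Girvan and Newman is available in NetworkX, and a minimal usage sketch on a toy graph looks as follows.

import itertools
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy graph: two dense groups joined by a single edge.
G = nx.barbell_graph(5, 0)

# girvan_newman repeatedly removes the edge of highest betweenness and
# yields the successive partitions of the hierarchical decomposition.
hierarchy = girvan_newman(G)
first_split = next(hierarchy)
print([sorted(c) for c in first_split])

# A few more levels of the hierarchy:
for communities in itertools.islice(hierarchy, 2):
    print([sorted(c) for c in communities])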
|
{
"abstract": [
"",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"",
"The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is, shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorith...",
"We consider the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs on all edges cut. This problem arises in several physical situations — for example, in assigning the components of electronic circuits to circuit boards to minimize the number of connections between boards. This paper presents a heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions, and fast enough to be practical in solving large problems.",
"Community structures are an important feature of many social, biological and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries ( M. Girvan and M. E. J. Newman, Community structure in social and biological networks Proc. Natl. Acad. Sci. USA 99, 7821-7826 (2002)). We develop an algorithm of hierarchical clustering that consists in finding and removing iteratively the edge with the highest information centrality. We test the algorithm on computer generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in a time O(n 4 ), is very effective especially when the communities are very mixed and hardly detectable by the other methods.",
"The investigation of community structures in networks is an important issue in many domains and disciplines. This problem is relevant for social tasks (objective analysis of relationships on the web), biological inquiries (functional studies in metabolic and protein networks), or technological problems (optimization of large infrastructures). Several types of algorithms exist for revealing the community structure in networks, but a general and quantitative definition of community is not implemented in the algorithms, leading to an intrinsic difficulty in the interpretation of the results without any additional nontopological information. In this article we deal with this problem by showing how quantitative definitions of community are implemented in practice in the existing algorithms. In this way the algorithms for the identification of the community structure become fully self-contained. Furthermore, we propose a local algorithm to detect communities which outperforms the existing algorithms with respect to computational cost, keeping the same level of reliability. The algorithm is tested on artificial and real-world graphs. In particular, we show how the algorithm applies to a network of scientific collaborations, which, for its size, cannot be attacked with the usual methods. This type of local algorithm could open the way to applications to large-scale technological and biological systems."
],
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_39",
"@cite_43",
"@cite_45",
"@cite_34",
"@cite_13"
],
"mid": [
"1587744656",
"1971421925",
"",
"2114030927",
"2161455936",
"2026601878",
"2038920443"
]
}
| 0 |
||
cs0508017
|
2950059513
|
Three approaches to content-and-structure XML retrieval are analysed in this paper: first by using Zettair, a full-text information retrieval system; second by using eXist, a native XML database, and third by using a hybrid XML retrieval system that uses eXist to produce the final answers from likely relevant articles retrieved by Zettair. INEX 2003 content-and-structure topics can be classified in two categories: the first retrieving full articles as final answers, and the second retrieving more specific elements within articles as final answers. We show that for both topic categories our initial hybrid system improves the retrieval effectiveness of a native XML database. For ranking the final answer elements, we propose and evaluate a novel retrieval model that utilises the structural relationships between the answer elements of a native XML database and retrieves Coherent Retrieval Elements. The final results of our experiments show that when the XML retrieval task focusses on highly relevant elements our hybrid XML retrieval system with the Coherent Retrieval Elements module is 1.8 times more effective than Zettair and 3 times more effective than eXist, and yields an effective content-and-structure XML retrieval.
|
The CSIRO group participating in INEX 2002 proposed a similar XML retrieval approach where PADRE, the core of CSIRO's Panoptic Enterprise Search Engine (http://www.panopticsearch.com), is used to rank full articles and elements within articles @cite_4 . Unlike many full-text information retrieval systems, PADRE combines full-text and metadata indexing and retrieval and is also capable of indexing and retrieving more specific elements within articles. A post processing module is then used to extract and re-rank the full articles and elements within articles returned by PADRE. However, unlike our CRE retrieval module, the above approach ignores the structural elements within articles that contain the indexed element. Less specific and more general elements are therefore not likely to appear in the final answer list.
|
{
"abstract": [
"This paper reports on the CSIRO group's participation in INEX. We indexed documents and document fragments using PADRE, the core of CSIRO's Panoptic Enterprise Search Engine. A query translator converts the INEX topics into queries containing selection and projection constraints for the results. Answers are extracted from ranked documents and document fragments based on the projection constraints in the query."
],
"cite_N": [
"@cite_4"
],
"mid": [
"106063377"
]
}
|
Enhancing Content-And-Structure Information Retrieval using a Native XML Database
|
This paper explores an effective hybrid XML retrieval approach that combines full-text information retrieval features with XML-specific retrieval features found in a native XML database. We focus on improving XML retrieval for contentand-structure (CAS) retrieval topics, which represent topics that enforce constraints on the existing document structure and explicitly specify the type of the unit of retrieval (such as section or paragraph). A retrieval challenge for a CAS topic is providing relevant answers to a user information need. In our previous work [6] we investigated the impact of different XML retrieval approaches on content-only (CO) retrieval topics, and also proposed a hybrid system as an effective retrieval solution. Both CAS and CO topics are part of INEX 1 , the INitiative for the Evaluation of XML Retrieval.
The INEX 2003 CAS retrieval topics can be classified in two categories: the first category of topics where full articles rather than more specific elements are required to be retrieved as final answers, and the second category of topics where more specific elements rather than full articles are required to be retrieved as final answers. (1: http://www.is.informatik.uniduisburg.de/projects/inex/index.html.en)
For topics in the first category, we investigate whether a fulltext information retrieval system is capable of retrieving full article elements as highly relevant answers. We use Zettair 2 (formerly known as Lucy) as our choice for a full-text information retrieval system. Zettair is a compact and fast full-text search engine designed and written by the Search Engine Group at RMIT University. Although Zettair implements an efficient inverted index structure [11], the unit of retrieval is a full article, and currently it is neither capable of indexing and retrieving more specific elements within articles nor capable of specifying constraints on elements within articles.
For topics in the second category, we investigate whether an XML-specific retrieval system is capable of retrieving more specific elements as highly relevant answers. We use eXist 3 , an open source native XML database, as our choice for an XML-specific retrieval system. eXist implements many XML retrieval features found in most native XML databases, such as full and partial keyword text searches and proximity functions. Two of eXist's advanced features are efficient index-based query processing and XPath extensions for fulltext search [3]. However, most native XML databases follow Boolean retrieval approaches and are not capable of ranking the final answer elements according to their estimated likelihood of relevance to the information need in a CAS retrieval topic.
Our initial experiments using a native XML database approach show a poor retrieval performance for CAS retrieval topics. We also observe a similar retrieval behaviour for CO retrieval topics [5,6]. In an effort to enhance its XML retrieval effectiveness, we implement a retrieval system that follows a hybrid XML retrieval approach. The native XML database in our hybrid system effectively utilises the information about articles likely to be considered relevant to an XML retrieval topic. In order to address the issue of ranking the final answer elements, we develop and evaluate a retrieval module that for a CAS topic utilises the structural relationships found in the answer list of a native XML database and retrieves a ranked list of Coherent Retrieval Elements (CREs). Section 3.4 provides the definition of CREs and highlights their importance in the XML retrieval process. Our module can equally be applied to both cases when the logical query constraints in a CAS topic are treated as either strict or vague, since it is capable of identifying highly relevant answer elements at different levels of retrieval granularity.
The hybrid system and the CRE retrieval module we use in this paper extend the system and the module we previously proposed and evaluated for the INEX 2003 CO retrieval topics [6].
ANALYSIS OF INEX 2003 TOPICS
INEX provides a means, in the form of a test collection and corresponding scoring methods, to evaluate the effectiveness of different XML retrieval systems. INEX uses an XML document collection that comprises 12107 IEEE Computer Society articles published in the period 1997-2002 with approximately 500MB of data. Each year (starting in 2002) a new set of XML retrieval topics are introduced in INEX which are then usually assessed by participating groups that originally proposed the topics.
The XML retrieval task performed by the groups participating in INEX is defined as ad-hoc retrieval of XML documents. In information retrieval literature this type of retrieval involves searching a static set of documents using a new set of topics, which represents an activity commonly used in digital library systems.
Within the ad-hoc retrieval task, INEX defines two additional retrieval tasks: a content-only (CO) task involving CO topics, and a content-and-structure (CAS) task involving CAS topics. A CAS topic enforces restrictions with respect to the underlying document structure by explicitly specifying the type of the unit of retrieval, whereas a CO topic has no such restriction on the elements retrieved. In INEX 2003, the CAS retrieval task furthermore comprises a SCAS sub-task and a VCAS sub-task. A SCAS sub-task considers the structural constraints in a query to be strictly matched, while a VCAS sub-task allows the structural constraints in a query to be treated as vague conditions.
In this paper we focus on improving XML retrieval for CAS topics, in particular using the SCAS retrieval sub-task. Thus for a section element to be considered marginally, fairly or highly relevant, it is very likely that it will at least contain a combination of some important words or phrases, such as mobile, security, electronic payment system or e-payment. Furthermore, for the INEX XML document collection the sec, ss1 and ss2 elements are considered equivalent and interchangeable for a CAS topic. In that sense, an XML retrieval system should follow an effective extraction strategy capable of producing coherent answers with the appropriate level of retrieval granularity (such as retrieving sec rather than ss2 elements).
INEX CAS Topic Example
<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE inex_topic SYSTEM "topic.dtd"> <inex_topic topic_id="86" query_type="CAS" ct_no="107">
INEX CAS Topic Categories
INEX 2003 introduces 30 CAS topics in total, with topic numbers between 61 and 90. Out of the CAS topic titles, we distinguish two categories of topics.
• The first category of topics seek to retrieve full articles rather than more specific elements within articles as final answers. There are 12 such topics, their numbers being 61, 62, 63, 65, 70, 73, 75, 79, 81, 82, 87, 88. We refer to such topics as Article topics.
• The second category of topics seek to retrieve more specific elements within articles rather than full articles as final answers. There are 18 topics that belong to this category. We refer to such topics as Specific topics.
XML RETRIEVAL APPROACHES
Most full-text information retrieval systems ignore the information about the document structure and consider whole documents as units of retrieval. Such retrieval systems take queries that often represent a bag of words, where phrases or logical query operators could also be included. The final list of answer elements usually comprises ranked list of whole documents sorted in a descending order according to their estimated likelihood of relevance to the information need in the query. Accordingly, it is expected that for CAS retrieval topics in the first category a full-text information retrieval system would be able to successfully retrieve highly relevant articles.
Most native XML databases support XML-specific retrieval technologies, such as found in XPath and XQuery. The information about the structure of the XML documents is usually incorporated in the document index, allowing users to query both by document content and by document structure. This allows an easy identification of elements that belong to the XML documents, either by the path they appear in the document or by certain keywords they contain. Accordingly, it is expected that a native XML database would be suitable for CAS retrieval topics that belong in the second category.
In an effort to support a content-and-structure XML retrieval that combines both CAS topic categories, we develop a hybrid XML retrieval system that uses a native XML database to produce final answers from those documents that are estimated as likely to be relevant by a full-text information retrieval system.
The following sections describe the XML retrieval approaches implemented in the respective systems, together with some open issues that arise when a particular retrieval approach is applied.
Full-Text Information Retrieval Approach
The efficient inverted index structure is first used with Zettair to index the INEX XML document collection. The term postings file is stored in a compressed form on disk, so the size of the Zettair index takes roughly 26% of the total collection size. The time taken to index the entire INEX collection on a system with a Pentium4 2.66GHz processor and a 512MB RAM memory running Mandrake Linux 9.1 is around 70 seconds.
A topic translation module is used to automatically translate an INEX CAS topic into a Zettair query. For INEX CAS topics, terms that appear in the <Title> part of the topic are used to formulate the query. Up to 100 <article> elements are then returned in the descending order according to their estimated likelihood of relevance to the CAS topic. One retrieval issue when using Zettair, which is in particular related to the XML retrieval process, is that it is not currently capable of indexing and retrieving more specific elements within articles.
When the information retrieval task involves retrieval of whole documents with varying lengths, the pivoted cosine document length normalisation scheme is shown to be an effective retrieval scheme [8]. For the INEX XML document collection, we calculated the optimal slope parameter in the pivoted cosine ranking formula by using a different set of retrieval topics (those from the previous year, INEX 2002).
When using terms from <Title> part of INEX topics while formulating Zettair queries, we found that a slope parameter with a value of 0.25 yields highest system effectiveness (although when longer queries are used, such as queries that contain terms from the <Keywords> part of INEX topics, a different value of 0.55 would be better [6]). Consequently, for INEX 2003 CAS topics we use the value of 0.25 for the slope parameter in the pivoted cosine ranking formula in Zettair.
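The slope parameter above enters the standard pivoted document length normalisation. The following minimal Python sketch shows how such a normaliser is typically applied; the helper names and the simplified tf-idf weighting are illustrative assumptions, and the exact formula Zettair implements may differ.

```python
def pivoted_norm(doc_len, avg_doc_len, slope=0.25):
    # Pivoted document length normalisation (Singhal et al.): documents near
    # the average length ("pivot") keep a normaliser close to the average,
    # and longer documents are penalised less steeply than under plain
    # cosine normalisation. slope=0.25 is the value tuned above on INEX 2002.
    return (1.0 - slope) * avg_doc_len + slope * doc_len

def pivoted_cosine_score(query_terms, doc_tf, idf, doc_len, avg_doc_len, slope=0.25):
    # Simplified tf-idf dot product divided by the pivoted normaliser;
    # the exact term weighting used by Zettair may differ.
    raw = sum(doc_tf.get(t, 0) * idf.get(t, 0.0) for t in query_terms)
    return raw / pivoted_norm(doc_len, avg_doc_len, slope)
```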
Native XML Database Approach
With eXist, the INEX XML document collection is first indexed by using its efficient indexing scheme. This index stores the information about the parsed elements within articles together with the information about the attributes and all word occurrences; its size is roughly twice as big as the total collection size. The time taken to index the entire INEX collection on a system with a Pentium 4 2.6GHz processor and a 512MB RAM memory running Mandrake Linux 9.1 is around 2050 seconds.
A topic translation module is used to automatically translate an INEX CAS topic into two eXist queries: AND and OR. For INEX CAS topics, the terms and structural constraints that appear in the <Title> part of the CAS topic are used to formulate eXist queries. The &= and |= query operators are used with eXist while formulating the above queries, respectively. The AND and OR eXist queries are depicted in solid boxes in Figure 1 where the elements to be retrieved are specified explicitly.
For an INEX CAS topic, our choice for the final list of answer elements comprises matching elements from the AND answer list followed by the matching elements from the OR answer list that do not belong to the AND answer list.
If an AND answer list is empty, the final answer list is the same as the OR answer list. In both cases it contains (up to) 100 matching articles or elements within articles. The equivalent matching elements are also considered during the retrieval process.
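As a rough illustration of this topic translation and list merging, the sketch below builds strict AND/OR query strings using eXist's &= (all keywords) and |= (any keyword) full-text operators and then concatenates the two answer lists. The function names, the target path, and the exact query syntax are illustrative assumptions, not the actual translation module.

```python
def cas_to_exist_queries(target_path, title_terms):
    # Hypothetical translation of a CAS <Title> into two eXist queries:
    # &= requires all keywords, |= requires any keyword; the concrete
    # syntax here is only indicative of the legacy eXist operators.
    phrase = " ".join(title_terms)
    and_query = f'{target_path}[. &= "{phrase}"]'
    or_query = f'{target_path}[. |= "{phrase}"]'
    return and_query, or_query

def merge_answer_lists(and_list, or_list, limit=100):
    # Final answer list: AND matches first, then OR matches not already seen.
    merged = list(and_list) + [e for e in or_list if e not in and_list]
    return merged[:limit]

# Example for a Specific topic targeting sec elements (terms assumed):
and_q, or_q = cas_to_exist_queries(
    "//article//sec", ["mobile", "electronic", "payment", "security"])
```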
We observed two retrieval issues while using eXist, which are in particular related to the XML retrieval process.
1. For an INEX CAS topic that retrieves full articles rather than more specific elements within articles, the list of answer elements comprises full articles that satisfy the logical query constraints. These articles are sorted by their internal identifiers that correspond to the order in which each article is stored in the database.
However, there is no information about the estimated likelihood of relevance of a particular matching article to the information need expressed in the CAS topic.
2. For an INEX CAS topic that retrieves more specific elements within articles rather than full articles, the list of answer elements comprises most specific elements that satisfy both the content and the granularity constraints in the query. eXist orders the matching elements in the answer list by the article where they belong, according to the XQuery specification 4 . However, there is no information whether a particular matching element in the above list is likely to be more relevant than other matching elements that belong to the same article. Accordingly, ranking of matching elements within articles is also not supported.
The following sections describe our approaches that address both of these issues.
Hybrid XML Retrieval Approach
Our hybrid system incorporates the best retrieval features from Zettair and eXist. Figure 1 shows the hybrid XML retrieval approach as implemented in the hybrid system. We use the CAS topic 86 throughout the example. Zettair is first used to obtain (up to) 100 articles likely to be considered relevant to the information need expressed in the CAS topic, as translated into a Zettair query. For each article in the answer list produced by Zettair, both AND and OR queries are then applied by eXist, which produce matching elements in two corresponding answer lists. The answer list for an INEX CAS topic and a particular article thus comprises the article's matching elements from the AND answer list followed by the article's matching elements from the OR answer list that do not belong to the AND answer list.
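A minimal sketch of this orchestration is given below, assuming placeholder zettair and exist client objects with search/query methods (these interfaces are stand-ins for the two systems, not real APIs); it preserves Zettair's article ranking and, per article, lists AND matches before the remaining OR matches.

```python
def hybrid_retrieval(topic, zettair, exist, max_articles=100, max_answers=100):
    # 1) Rank whole articles with the full-text engine (bag-of-words query
    #    built from the <Title> terms of the CAS topic).
    query = " ".join(topic.title_terms)
    articles = zettair.search(query, limit=max_articles)
    answers = []
    # 2) For each ranked article, collect eXist matches: the AND list first,
    #    then OR matches that are not already in the AND list.
    for article in articles:
        and_hits = exist.query(topic.and_query, within=article)
        or_hits = exist.query(topic.or_query, within=article)
        answers.extend(and_hits)
        answers.extend(h for h in or_hits if h not in and_hits)
        if len(answers) >= max_answers:
            break
    return answers[:max_answers]
```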
The final answer list for an INEX CAS topic comprises (up to) 100 matching elements and equivalent element tags that belong to highly ranked articles as estimated by Zettair. The final answer list is shown as Hybrid list in Figure 1. Figure 1 also shows queries and other parts of our hybrid system depicted in dashed boxes, where we also explore whether using CO-type queries could improve the CAS retrieval task. This can equally be applied to the hybrid approach as well as to the native XML database approach, since they both use eXist to produce the final list of matching elements. The next section explores this retrieval process in detail.
The hybrid XML retrieval approach addresses the first retrieval issue observed in a native XML database approach. However, because of its modular nature we observe a loss in efficiency. For a particular CAS topic, up to 100 articles firstly need to be retrieved by Zettair. This article list is then queried by eXist, one article at a time. In order to retrieve (up to) 100 matching elements, eXist may need to query each article in the list before it reaches this number. Obviously, having an equally effective system that produces its final list of answer elements much faster would be a more efficient solution. The second retrieval issue observed in a native XML database approach still remains open, since for a particular article our hybrid XML retrieval system also uses eXist to produce its final list of answer elements.
The following section describes one possible approach that addresses this issue.
Rank the Native XML Database Output
This section describes our novel retrieval module that utilises the structural relationships between elements in the eXist's answer list and identifies, ranks and retrieves Coherent Retrieval Elements. Our definition of a Coherent Retrieval Element is as follows.
"For a particular article in the final answer list, a cases, the containing elements of a Coherent Retrieval Element should constitute either its different children or each different child's descendants" [6].
Consider the eXist answer list shown in Table 2. The list is a result of using the CAS topic 86 and the OR eXist query depicted as dashed box in Figure 1. Each matching element in the list therefore contains any combination of query keywords. Although this example shows a retrieval case when an OR list is used with eXist, our CRE algorithm equally applies in the case when an AND list is used. Table 2 also shows that the matching elements in the answer list are presented in article order. Figure 2 shows a tree representation of the above eXist answer list. The eXist matching elements are shown in triangle boxes, while the CREs are shown in square boxes. The figure also shows elements that represent neither matching elements nor CREs.
We identify one specific case, however. If an answer list contains only one matching element, the above CRE algorithm produces the same result: the matching element. This is due to the lack of supporting information that would justify the choice for the ancestors of the matching element to be regarded as CREs.
So far we have managed to identify the Coherent Retrieval Elements from eXist's answer list of matching elements to the CAS topic. However, in order to enhance the effectiveness of our retrieval module we still need to rank these elements according to their estimated likelihood of relevance to the information need expressed in the topic. The following heuristics are used to rank the CREs:
1. The number of times a CRE appears in the absolute path of each matching element in the answer list (the more often it appears, the better);
2. The length of the absolute path of a CRE (the shorter it is, the better);
3. The ordering of the XPath sequence in the absolute path of a CRE (nearer to the beginning is better); and
4. Since we are dealing with the CAS retrieval task, only CREs that satisfy the granularity constraints in a CAS topic will be considered as answers.
In accordance with the above heuristics, if two Coherent Retrieval Elements appear the same number of times in the answer list, the shorter one will be ranked higher. Moreover, if they have the same length, the ordering sequence where they appear in the article will determine their final ranks. In our example, article[1]/bdy[1]/sec[1] will be ranked higher than article[1]/bdy[1]/sec[3].
The order of importance for the XML-specific heuristics outlined above is based on the following observation. As it is currently implemented, less specific (or more general) CREs are likely to be ranked higher than more specific (or less general) CREs. However, depending on the retrieval task, the retrieval module could easily be switched the other way around. When dealing with the INEX test collection, the latter functionality proved to be less effective than the one currently implemented. Table 3 shows the final ranked list of Coherent Retrieval Elements for the particular article (the OR list is shown in the CRE module in Figure 1). The bdy[1] element does not satisfy the last heuristic above, thus it is not included in the final list of CREs. This means that our CRE module could easily be applied without any modifications to the VCAS retrieval task, where the query constraints are treated as vague conditions. Moreover, the sec[4] element will be included in eXist's list of matching elements when the strict OR query is used (the OR list is shown in the Hybrid list in Figure 1), whereas this element does not appear in the final list of CREs, which, on the basis of the above heuristics, suggests that it is not likely to be a highly relevant element. In that regard, we identify the Coherent Retrieval Elements as preferable units of retrieval for the INEX CAS retrieval topics.
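The sketch below is one possible reading of the CRE procedure: candidate CREs are ancestors shared by at least two matching-element paths, filtered by the topic's granularity constraint, and ranked by match count, path length, and position in article order. The "at least two matches" criterion, the tie-breaking by first occurrence, and all names are assumptions made for illustration, not the authors' implementation.

```python
from collections import Counter

def coherent_retrieval_elements(matching_paths, granularity=("sec", "ss1", "ss2")):
    # Special case noted in the text: a single matching element is kept as-is.
    if len(matching_paths) < 2:
        return list(matching_paths)
    counts = Counter()      # heuristic 1: how often an ancestor covers a match
    first_seen = {}         # heuristic 3: approximate article (document) order
    for pos, path in enumerate(matching_paths):
        steps = path.strip("/").split("/")
        for depth in range(1, len(steps)):          # proper ancestors only
            ancestor = "/" + "/".join(steps[:depth])
            counts[ancestor] += 1
            first_seen.setdefault(ancestor, pos)
    candidates = [p for p, c in counts.items() if c >= 2]
    # Heuristic 4: keep only CREs whose tag satisfies the granularity constraint.
    candidates = [p for p in candidates
                  if p.rsplit("/", 1)[-1].split("[")[0] in granularity]
    # Heuristics 1-3: more matches first, then shorter path, then earlier position.
    candidates.sort(key=lambda p: (-counts[p], len(p.split("/")), first_seen[p]))
    return candidates

paths = ["/article[1]/bdy[1]/sec[1]/p[1]",
         "/article[1]/bdy[1]/sec[1]/p[3]",
         "/article[1]/bdy[1]/sec[3]/ss1[1]/p[2]",
         "/article[1]/bdy[1]/sec[3]/ss1[1]/p[4]"]
print(coherent_retrieval_elements(paths))
# ['/article[1]/bdy[1]/sec[1]', '/article[1]/bdy[1]/sec[3]',
#  '/article[1]/bdy[1]/sec[3]/ss1[1]']
```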
We show the positive impact on the XML retrieval effectiveness for the systems that implement our CRE module in the next section.
EXPERIMENTS AND RESULTS
This section shows the experimental results for the above XML retrieval approaches when different quantisation functions and different categories of CAS retrieval topics apply.
Our aim is to determine the most effective XML retrieval approach among the following:
• a full-text information retrieval approach, using Zettair only;
• a native XML database approach, using eXist only;
• a hybrid approach to XML retrieval, using our initial hybrid system;
• a native XML database approach with the CRE module applied on the answer list; and
• a hybrid XML retrieval approach with the CRE module applied on the answer list.
For each of the retrieval approaches above, the final answer list for an INEX CAS topic comprises (up to) 100 articles or elements within articles. An average precision value over 100 recall points is firstly calculated. These values are then averaged over all CAS topics, which produces the final average precision value for a particular retrieval run. A retrieval run for each XML retrieval approach therefore comprises answer lists for all CAS topics.
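For concreteness, the sketch below computes an interpolated average precision over 100 recall points for a single topic under binary relevance; the run score would then be the mean of this value over all CAS topics. This is a simplified stand-in: the actual inex_eval metric additionally applies quantisation functions to graded relevance assessments.

```python
def average_precision_100(ranked_answers, relevant, recall_points=100):
    # Precision and recall after each retrieved answer (binary relevance).
    hits, precisions, recalls = 0, [], []
    for i, answer in enumerate(ranked_answers, start=1):
        if answer in relevant:
            hits += 1
        precisions.append(hits / i)
        recalls.append(hits / len(relevant) if relevant else 0.0)
    # Interpolated precision averaged over 100 standard recall points.
    total = 0.0
    for k in range(1, recall_points + 1):
        r = k / recall_points
        total += max((p for p, rc in zip(precisions, recalls) if rc >= r),
                     default=0.0)
    return total / recall_points
```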
Comparison of XML Retrieval Approaches
The strict quantisation function in the inex_eval evaluation metric [1] is used to evaluate whether an XML retrieval approach is capable of retrieving highly relevant elements. Table 4 shows that in this case our hybrid system that uses the CRE retrieval module is the most effective. On the other hand, the plain eXist database is the least effective XML retrieval system. The two previously observed retrieval issues in the native XML database approach are very likely to influence the latter behaviour. We also observe an 11% relative improvement for the retrieval effectiveness when our CRE module is applied with eXist. Moreover, our initial hybrid system (without the CRE module) improves the retrieval effectiveness of the plain eXist database by 2.8 times. The latter behaviour is strongly influenced by the presence of a full-text information retrieval system in our hybrid system. Similar improvement for the retrieval effectiveness is exhibited when both eXist and the hybrid system use the CRE module.
The generalised quantisation function is used to evaluate the XML retrieval approaches when retrieving elements with different degrees of relevance [1]. Table 4 shows that in this case the plain hybrid system (without the CRE module applied) performs best. We furthermore observe that the effectiveness of retrieval systems that use the CRE module is lower than the effectiveness of the same systems without the CRE module applied. It is very likely that some marginally relevant elements are omitted from the list of the resulting CREs, which, as shown before, is not the case when the XML retrieval task focusses on highly relevant elements.
Although Zettair alone cannot be applied to the CAS retrieval topics in the second category, Table 4 shows that overall it still performs better than eXist, regardless of which quantisation function applies. This is rather surprising, and reflects our previous expectation that for the CAS topics in the first category Zettair is indeed capable of retrieving highly relevant articles, whereas the first retrieval issue observed in eXist has a negative impact on its overall effectiveness. On the other hand, both the plain hybrid system and the hybrid system with the CRE module applied are more effective than the plain eXist database.
The graph in Figure 3 outlines a detailed summary of the evaluation results for the XML retrieval approaches when the standard inex_eval evaluation metric using strict quantisation function applies. It shows runs that produce the best results for each XML retrieval approach, which (except plain Zettair) represent the approaches that apply the CRE retrieval module. As previously observed, the hybrid-CRE run performs best, followed by the Zettair run, and the eXist-CRE run is worst.
Analysis based on CAS Topic Categories
Our last experiment is based upon the INEX 2003 CAS topic categories described in Section 2.2. The retrieval effectiveness of the XML retrieval approaches is evaluated across three CAS topic categories: All topics, Article topics and Specific topics. The strict quantisation function in inex_eval evaluation metric is used to calculate the average precision values for each run. Table 5 shows final results for each XML retrieval approach evaluated across the three topic categories. For Article topics the Zettair run performs best, outperforming the hybrid-CRE run and the eXist-CRE run. This is very surprising, and shows that there are cases where a highly relevant article does not necessarily represent a matching article satisfying logical query constraints. For Specific topics where Zettair run does not apply, the hybrid-CRE run is roughly 2.7 times more effective than eXist-CRE run. As shown previously, when both CAS topic categories are considered (the case of All topics), the hybrid-CRE run performs best.
CONCLUSION AND FUTURE WORK
This paper investigates the impact when three systems with different XML retrieval approaches are used in the XML content-and-structure (CAS) retrieval task: Zettair, a fulltext information retrieval system; eXist, a native XML database, and a hybrid XML retrieval system that combines the best retrieval features from Zettair and eXist.
Two categories of CAS retrieval topics can be identified in INEX 2003: the first category of topics where full article elements are retrieved, and the second category of topics where more specific elements within articles are retrieved. We have shown that a full-text information retrieval system yields effective retrieval for CAS topics in the first category. For CAS topics in the second category we have used a native XML database and have observed two issues particularly related to the XML retrieval process that have a negative impact on its retrieval effectiveness.
In order to address the first issue as well as support a CAS XML retrieval that combines both topic categories, we have developed and evaluated a hybrid XML retrieval system that uses eXist to produce final answers from the likely relevant articles retrieved by Zettair. For addressing the second issue we have developed a retrieval module that ranks and retrieves Coherent Retrieval Elements (CREs) from the answer list of a native XML database. We have shown that our CRE module is capable of retrieving answer elements with appropriate levels of retrieval granularity, which means it could equally be applied with the VCAS retrieval task as it applies with the SCAS retrieval task. Moreover, the CRE retrieval module can easily be used by other native XML databases, since most of them output their answer lists in article order.
We have shown through the final results of our experiments that our hybrid XML retrieval system with the CRE retrieval module improves the effectiveness of both retrieval systems and yields an effective content-and-structure XML retrieval. However, this improvement is not as apparent as it is for content-only (CO) retrieval topics where no indication for the granularity of the answer elements is provided [6]. The latter reflects the previous observation that the XML retrieval task should focus more on providing answer elements relevant to an information need instead of focusing on retrieving the elements that only satisfy the logical query constraints.
We plan to undertake the following extensions of this work in the future.
• Our CRE module is currently not capable of comparing the ranking values of CREs coming out of answer lists that belong to different articles. We therefore aim at investigating whether or not additionally using Zettair as a means to rank the CREs coming out of different answer lists would be an effective solution.
• For further improvement of the effectiveness of our hybrid XML retrieval system, we also aim at investigating the optimal combination of Coherent Retrieval and matching elements in the final answer list, which could equally be applied to CAS as well as to CO retrieval topics.
| 4,536 |
||
cs0508017
|
2950059513
|
Three approaches to content-and-structure XML retrieval are analysed in this paper: first by using Zettair, a full-text information retrieval system; second by using eXist, a native XML database, and third by using a hybrid XML retrieval system that uses eXist to produce the final answers from likely relevant articles retrieved by Zettair. INEX 2003 content-and-structure topics can be classified in two categories: the first retrieving full articles as final answers, and the second retrieving more specific elements within articles as final answers. We show that for both topic categories our initial hybrid system improves the retrieval effectiveness of a native XML database. For ranking the final answer elements, we propose and evaluate a novel retrieval model that utilises the structural relationships between the answer elements of a native XML database and retrieves Coherent Retrieval Elements. The final results of our experiments show that when the XML retrieval task focusses on highly relevant elements our hybrid XML retrieval system with the Coherent Retrieval Elements module is 1.8 times more effective than Zettair and 3 times more effective than eXist, and yields an effective content-and-structure XML retrieval.
|
For the purpose of ranking the resulting answers of XML retrieval topics, @cite_2 extend the probabilistic ranking model by incorporating the notion of structural roles'', which can be determined manually from the document schema. However, the term frequencies are measured only for the structural elements belonging to a particular role, without taking into account the entire context where all these elements belong in the document hierarchy. XRank @cite_3 and XSearch @cite_5 furthermore aim at producing effective ranked results for XML queries. XRank generally focuses on hyperlinked XML documents, while XSearch retrieves answers comprising semantically related nodes. However, since the structure of IEEE XML documents in the INEX document collection does not typically meet the above requirements, neither of them (without some modifications) could be used in a straightforward fashion with the CAS retrieval task.
|
{
"abstract": [
"XSEarch, a semantic search engine for XML, is presented. XSEarch has a simple query language, suitable for a naive user. It returns semantically related document fragments that satisfy the user's query. Query answers are ranked using extended information-retrieval techniques and are generated in an order similar to the ranking. Advanced indexing techniques were developed to facilitate efficient implementation of XSEarch. The performance of the different techniques as well as the recall and the precision were measured experimentally. These experiments indicate that XSEarch is efficient, scalable and ranks quality results highly.",
"We consider the problem of efficiently producing ranked results for keyword search queries over hyperlinked XML documents. Evaluating keyword search queries over hierarchical XML documents, as opposed to (conceptually) flat HTML documents, introduces many new challenges. First, XML keyword search queries do not always return entire documents, but can return deeply nested XML elements that contain the desired keywords. Second, the nested structure of XML implies that the notion of ranking is no longer at the granularity of a document, but at the granularity of an XML element. Finally, the notion of keyword proximity is more complex in the hierarchical XML data model. In this paper, we present the XRANK system that is designed to handle these novel features of XML keyword search. Our experimental results show that XRANK offers both space and performance benefits when compared with existing approaches. An interesting feature of XRANK is that it naturally generalizes a hyperlink based HTML search engine such as Google. XRANK can thus be used to query a mix of HTML and XML documents.",
"This paper proposes a new approach to querying collections of structured textual information such as SGML XML documents. Knowledge about the structure of documents is an additional resource that should be exploited during retrieval since the semantics of the different textual objects can be used to specify an information need much more precisely. However the traditional probabilistic retrieval model lacks the ability to handle structural information. We define a new retrieval function based on the probabilistic model which overcomes this drawback. The presented query language allows the assignment of structural roles to individual terms. The efficient evaluation of queries in this framework requires appropriate index structures. We design text and structure indexes and show how their information is combined during evaluation. The implementation supports additional functionalities such as a table of contents for browsing. First evaluation results show the feasibility of the approach on collections of unstructured documents."
],
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_2"
],
"mid": [
"2113112851",
"1973828215",
"2099691420"
]
}
|
Enhancing Content-And-Structure Information Retrieval using a Native XML Database
|
This paper explores an effective hybrid XML retrieval approach that combines full-text information retrieval features with XML-specific retrieval features found in a native XML database. We focus on improving XML retrieval for contentand-structure (CAS) retrieval topics, which represent topics that enforce constraints on the existing document structure and explicitly specify the type of the unit of retrieval (such as section or paragraph). A retrieval challenge for a CAS topic is providing relevant answers to a user information need. In our previous work [6] we investigated the impact of different XML retrieval approaches on content-only (CO) retrieval topics, and also proposed a hybrid system as an effective retrieval solution. Both CAS and CO topics are part of INEX 1 , the INitiative for the Evaluation of XML Retrieval.
The INEX 2003 CAS retrieval topics can be classified in two categories: the first category of topics where full articles rather than more specific elements are required to be retrieved as final answers, and the second category of topics where more specific elements rather than full articles are required to be retrieved as final answers. (1: http://www.is.informatik.uniduisburg.de/projects/inex/index.html.en)
For topics in the first category, we investigate whether a fulltext information retrieval system is capable of retrieving full article elements as highly relevant answers. We use Zettair 2 (formerly known as Lucy) as our choice for a full-text information retrieval system. Zettair is a compact and fast full-text search engine designed and written by the Search Engine Group at RMIT University. Although Zettair implements an efficient inverted index structure [11], the unit of retrieval is a full article, and currently it is neither capable of indexing and retrieving more specific elements within articles nor capable of specifying constraints on elements within articles.
For topics in the second category, we investigate whether an XML-specific retrieval system is capable of retrieving more specific elements as highly relevant answers. We use eXist 3 , an open source native XML database, as our choice for an XML-specific retrieval system. eXist implements many XML retrieval features found in most native XML databases, such as full and partial keyword text searches and proximity functions. Two of eXist's advanced features are efficient index-based query processing and XPath extensions for fulltext search [3]. However, most native XML databases follow Boolean retrieval approaches and are not capable of ranking the final answer elements according to their estimated likelihood of relevance to the information need in a CAS retrieval topic.
Our initial experiments using a native XML database approach show a poor retrieval performance for CAS retrieval topics. We also observe a similar retrieval behaviour for CO retrieval topics [5,6]. In an effort to enhance its XML retrieval effectiveness, we implement a retrieval system that follows a hybrid XML retrieval approach. The native XML database in our hybrid system effectively utilises the information about articles likely to be considered relevant to an XML retrieval topic. In order to address the issue of ranking the final answer elements, we develop and evaluate a retrieval module that for a CAS topic utilises the structural relationships found in the answer list of a native XML database and retrieves a ranked list of Coherent Retrieval Elements (CREs). Section 3.4 provides the definition of CREs and highlights their importance in the XML retrieval process. Our module can equally be applied to both cases when the logical query constraints in a CAS topic are treated as either strict or vague, since it is capable of identifying highly relevant answer elements at different levels of retrieval granularity.
The hybrid system and the CRE retrieval module we use in this paper extend the system and the module we previously proposed and evaluated for the INEX 2003 CO retrieval topics [6].
ANALYSIS OF INEX 2003 TOPICS
INEX provides a means, in the form of a test collection and corresponding scoring methods, to evaluate the effectiveness of different XML retrieval systems. INEX uses an XML document collection that comprises 12107 IEEE Computer Society articles published in the period 1997-2002 with approximately 500MB of data. Each year (starting in 2002) a new set of XML retrieval topics are introduced in INEX which are then usually assessed by participating groups that originally proposed the topics.
The XML retrieval task performed by the groups participating in INEX is defined as ad-hoc retrieval of XML documents. In information retrieval literature this type of retrieval involves searching a static set of documents using a new set of topics, which represents an activity commonly used in digital library systems.
Within the ad-hoc retrieval task, INEX defines two additional retrieval tasks: a content-only (CO) task involving CO topics, and a content-and-structure (CAS) task involving CAS topics. A CAS topic enforces restrictions with respect to the underlying document structure by explicitly specifying the type of the unit of retrieval, whereas a CO topic has no such restriction on the elements retrieved. In INEX 2003, the CAS retrieval task furthermore comprises a SCAS sub-task and a VCAS sub-task. A SCAS sub-task considers the structural constraints in a query to be strictly matched, while a VCAS sub-task allows the structural constraints in a query to be treated as vague conditions.
In this paper we focus on improving XML retrieval for CAS topics, in particular using the SCAS retrieval sub-task. Thus for a section element to be considered marginally, fairly or highly relevant, it is very likely that it will at least contain a combination of some important words or phrases, such as mobile, security, electronic payment system or e-payment. Furthermore, for the INEX XML document collection the sec, ss1 and ss2 elements are considered equivalent and interchangeable for a CAS topic. In that sense, an XML retrieval system should follow an effective extraction strategy capable of producing coherent answers with the appropriate level of retrieval granularity (such as retrieving sec rather than ss2 elements).
INEX CAS Topic Example
<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE inex_topic SYSTEM "topic.dtd"> <inex_topic topic_id="86" query_type="CAS" ct_no="107">
INEX CAS Topic Categories
INEX 2003 introduces 30 CAS topics in total, with topic numbers between 61 and 90. Out of the CAS topic titles, we distinguish two categories of topics.
• The first category of topics seek to retrieve full articles rather than more specific elements within articles as final answers. There are 12 such topics, their numbers being 61, 62, 63, 65, 70, 73, 75, 79, 81, 82, 87, 88. We refer to such topics as Article topics.
• The second category of topics seek to retrieve more specific elements within articles rather than full articles as final answers. There are 18 topics that belong to this category. We refer to such topics as Specific topics.
XML RETRIEVAL APPROACHES
Most full-text information retrieval systems ignore the information about the document structure and consider whole documents as units of retrieval. Such retrieval systems take queries that often represent a bag of words, where phrases or logical query operators could also be included. The final list of answer elements usually comprises ranked list of whole documents sorted in a descending order according to their estimated likelihood of relevance to the information need in the query. Accordingly, it is expected that for CAS retrieval topics in the first category a full-text information retrieval system would be able to successfully retrieve highly relevant articles.
Most native XML databases support XML-specific retrieval technologies, such as found in XPath and XQuery. The information about the structure of the XML documents is usually incorporated in the document index, allowing users to query both by document content and by document structure. This allows an easy identification of elements that belong to the XML documents, either by the path they appear in the document or by certain keywords they contain. Accordingly, it is expected that a native XML database would be suitable for CAS retrieval topics that belong in the second category.
In an effort to support a content-and-structure XML retrieval that combines both CAS topic categories, we develop a hybrid XML retrieval system that uses a native XML database to produce final answers from those documents that are estimated as likely to be relevant by a full-text information retrieval system.
The following sections describe the XML retrieval approaches implemented in the respective systems, together with some open issues that arise when a particular retrieval approach is applied.
Full-Text Information Retrieval Approach
The efficient inverted index structure is first used with Zettair to index the INEX XML document collection. The term postings file is stored in a compressed form on disk, so the size of the Zettair index takes roughly 26% of the total collection size. The time taken to index the entire INEX collection on a system with a Pentium4 2.66GHz processor and a 512MB RAM memory running Mandrake Linux 9.1 is around 70 seconds.
A topic translation module is used to automatically translate an INEX CAS topic into a Zettair query. For INEX CAS topics, terms that appear in the <Title> part of the topic are used to formulate the query. Up to 100 <article> elements are then returned in the descending order according to their estimated likelihood of relevance to the CAS topic. One retrieval issue when using Zettair, which is in particular related to the XML retrieval process, is that it is not currently capable of indexing and retrieving more specific elements within articles.
When the information retrieval task involves retrieval of whole documents with varying lengths, the pivoted cosine document length normalisation scheme is shown to be an effective retrieval scheme [8]. For the INEX XML document collection, we calculated the optimal slope parameter in the pivoted cosine ranking formula by using a different set of retrieval topics (those from the previous year, INEX 2002).
When using terms from <Title> part of INEX topics while formulating Zettair queries, we found that a slope parameter with a value of 0.25 yields highest system effectiveness (although when longer queries are used, such as queries that contain terms from the <Keywords> part of INEX topics, a different value of 0.55 would be better [6]). Consequently, for INEX 2003 CAS topics we use the value of 0.25 for the slope parameter in the pivoted cosine ranking formula in Zettair.
Native XML Database Approach
With eXist, the INEX XML document collection is first indexed by using its efficient indexing scheme. This index stores the information about the parsed elements within articles together with the information about the attributes and all word occurrences; its size is roughly twice as big as the total collection size. The time taken to index the entire INEX collection on a system with a Pentium 4 2.6GHz processor and a 512MB RAM memory running Mandrake Linux 9.1 is around 2050 seconds.
A topic translation module is used to automatically translate an INEX CAS topic into two eXist queries: AND and OR. For INEX CAS topics, the terms and structural constraints that appear in the <Title> part of the CAS topic are used to formulate eXist queries. The &= and |= query operators are used with eXist while formulating the above queries, respectively. The AND and OR eXist queries are depicted in solid boxes in Figure 1 where the elements to be retrieved are specified explicitly.
For an INEX CAS topic, our choice for the final list of answer elements comprises matching elements from the AND answer list followed by the matching elements from the OR answer list that do not belong to the AND answer list.
If an AND answer list is empty, the final answer list is the same as the OR answer list. In both cases it contains (up to) 100 matching articles or elements within articles. The equivalent matching elements are also considered during the retrieval process.
We observed two retrieval issues while using eXist, which are in particular related to the XML retrieval process.
1. For an INEX CAS topic that retrieves full articles rather than more specific elements within articles, the list of answer elements comprises full articles that satisfy the logical query constraints. These articles are sorted by their internal identifiers that correspond to the order in which each article is stored in the database.
However, there is no information about the estimated likelihood of relevance of a particular matching article to the information need expressed in the CAS topic.
2. For an INEX CAS topic that retrieves more specific elements within articles rather than full articles, the list of answer elements comprises most specific elements that satisfy both the content and the granularity constraints in the query. eXist orders the matching elements in the answer list by the article where they belong, according to the XQuery specification 4 . However, there is no information whether a particular matching element in the above list is likely to be more relevant than other matching elements that belong to the same article. Accordingly, ranking of matching elements within articles is also not supported.
The following sections describe our approaches that address both of these issues.
Hybrid XML Retrieval Approach
Our hybrid system incorporates the best retrieval features from Zettair and eXist. Figure 1 shows the hybrid XML retrieval approach as implemented in the hybrid system. We use the CAS topic 86 throughout the example. Zettair is first used to obtain (up to) 100 articles likely to be considered relevant to the information need expressed in the CAS topic, as translated into a Zettair query. For each article in the answer list produced by Zettair, both AND and OR queries are then applied by eXist, which produce matching elements in two corresponding answer lists. The answer list for an INEX CAS topic and a particular article thus comprises the article's matching elements from the AND answer list followed by the article's matching elements from the OR answer list that do not belong to the AND answer list.
The final answer list for an INEX CAS topic comprises (up to) 100 matching elements and equivalent element tags that belong to highly ranked articles as estimated by Zettair. The final answer list is shown as Hybrid list in Figure 1. Figure 1 also shows queries and other parts of our hybrid system depicted in dashed boxes, where we also explore whether using CO-type queries could improve the CAS retrieval task. This can equally be applied to the hybrid approach as well as to the native XML database approach, since they both use eXist to produce the final list of matching elements. The next section explores this retrieval process in detail.
The hybrid XML retrieval approach addresses the first retrieval issue observed in a native XML database approach. However, because of its modular nature we observe a loss in efficiency. For a particular CAS topic, up to 100 articles firstly need to be retrieved by Zettair. This article list is then queried by eXist, one article at a time. In order to retrieve (up to) 100 matching elements, eXist may need to query each article in the list before it reaches this number. Obviously, having an equally effective system that produces its final list of answer elements much faster would be a more efficient solution. The second retrieval issue observed in a native XML database approach still remains open, since for a particular article our hybrid XML retrieval system also uses eXist to produce its final list of answer elements.
The following section describes one possible approach that addresses this issue.
Rank the Native XML Database Output
This section describes our novel retrieval module that utilises the structural relationships between elements in the eXist's answer list and identifies, ranks and retrieves Coherent Retrieval Elements. Our definition of a Coherent Retrieval Element is as follows.
"For a particular article in the final answer list, a cases, the containing elements of a Coherent Retrieval Element should constitute either its different children or each different child's descendants" [6].
Consider the eXist answer list shown in Table 2. The list is a result of using the CAS topic 86 and the OR eXist query depicted as dashed box in Figure 1. Each matching element in the list therefore contains any combination of query keywords. Although this example shows a retrieval case when an OR list is used with eXist, our CRE algorithm equally applies in the case when an AND list is used. Table 2 also shows that the matching elements in the answer list are presented in article order. Figure 2 shows a tree representation of the above eXist answer list. The eXist matching elements are shown in triangle boxes, while the CREs are shown in square boxes. The figure also shows elements that represent neither matching elements nor CREs.
We identify one specific case, however. If an answer list contains only one matching element, the above CRE algorithm produces the same result: the matching element. This is due to the lack of supporting information that would justify the choice for the ancestors of the matching element to be regarded as CREs.
So far we have managed to identify the Coherent Retrieval Elements from eXist's answer list of matching elements to the CAS topic. However, in order to enhance the effectiveness of our retrieval module we still need to rank these elements according to their estimated likelihood of relevance to the information need expressed in the topic. The following heuristics are used to rank the CREs:
1. The number of times a CRE appears in the absolute path of each matching element in the answer list (the more often it appears, the better);
2. The length of the absolute path of a CRE (the shorter it is, the better);
3. The ordering of the XPath sequence in the absolute path of a CRE (nearer to the beginning is better); and
4. Since we are dealing with the CAS retrieval task, only CREs that satisfy the granularity constraints in a CAS topic will be considered as answers.
In accordance with the above heuristics, if two Coherent Retrieval Elements appear the same number of times in the answer list, the shorter one will be ranked higher. Moreover, if they have the same length, the ordering sequence where they appear in the article will determine their final ranks. In our example, article[1]/bdy[1]/sec[1] will be ranked higher than article[1]/bdy[1]/sec[3].
The order of importance for the XML-specific heuristics outlined above is based on the following observation. As it is currently implemented, less specific (or more general) CREs are likely to be ranked higher than more specific (or less general) CREs. However, depending on the retrieval task, the retrieval module could easily be switched the other way around. When dealing with the INEX test collection, the latter functionality proved to be less effective than the one currently implemented. Table 3 shows the final ranked list of Coherent Retrieval Elements for the particular article (the OR list is shown in the CRE module in Figure 1). The bdy[1] element does not satisfy the last heuristic above, thus it is not included in the final list of CREs. This means that our CRE module could easily be applied without any modifications to the VCAS retrieval task, where the query constraints are treated as vague conditions. Moreover, the sec[4] element will be included in eXist's list of matching elements when the strict OR query is used (the OR list is shown in the Hybrid list in Figure 1), whereas this element does not appear in the final list of CREs, which, on the basis of the above heuristics, suggests that it is not likely to be a highly relevant element. In that regard, we identify the Coherent Retrieval Elements as preferable units of retrieval for the INEX CAS retrieval topics.
We show the positive impact on the XML retrieval effectiveness for the systems that implement our CRE module in the next section.
EXPERIMENTS AND RESULTS
This section shows the experimental results for the above XML retrieval approaches when different quantisation functions and different categories of CAS retrieval topics apply.
Our aim is to determine the most effective XML retrieval approach among the following:
• a full-text information retrieval approach, using Zettair only;
• a native XML database approach, using eXist only;
• a hybrid approach to XML retrieval, using our initial hybrid system;
• a native XML database approach with the CRE module applied on the answer list; and
• a hybrid XML retrieval approach with the CRE module applied on the answer list.
For each of the retrieval approaches above, the final answer list for an INEX CAS topic comprises (up to) 100 articles or elements within articles. An average precision value over 100 recall points is firstly calculated. These values are then averaged over all CAS topics, which produces the final average precision value for a particular retrieval run. A retrieval run for each XML retrieval approach therefore comprises answer lists for all CAS topics.
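To make the evaluation procedure concrete, here is a simplified Python sketch of the averaging step described above: precision is interpolated at 100 standard recall points for each topic and the resulting values are averaged over all topics. This is only an illustration under the assumption that binary relevance judgements are available for each topic; it is not a reimplementation of the official inex_eval metric or its quantisation functions.

```python
def precision_at_recall_points(ranked, relevant, n_points=100):
    """Interpolated precision at n_points standard recall levels for one topic."""
    relevant = set(relevant)            # assumes at least one relevant element
    hits, curve = 0, []
    for rank, element in enumerate(ranked, start=1):
        if element in relevant:
            hits += 1
        curve.append((hits / len(relevant), hits / rank))   # (recall, precision)
    points = []
    for i in range(1, n_points + 1):
        level = i / n_points
        precisions = [p for r, p in curve if r >= level]
        points.append(max(precisions) if precisions else 0.0)
    return points

def mean_average_precision(run, judgements):
    """run: topic -> ranked answer list; judgements: topic -> set of relevant answers."""
    per_topic = [sum(precision_at_recall_points(run[t], judgements[t])) / 100.0
                 for t in run]
    return sum(per_topic) / len(per_topic)
```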
Comparison of XML Retrieval Approaches
The strict quantisation function in the inex_eval evaluation metric [1] is used to evaluate whether an XML retrieval approach is capable of retrieving highly relevant elements. Table 4 shows that in this case our hybrid system that uses the CRE retrieval module is the most effective. On the other hand, the plain eXist database is the least effective XML retrieval system. The two previously observed retrieval issues in the native XML database approach are very likely to influence the latter behaviour. We also observe an 11% relative improvement for the retrieval effectiveness when our CRE module is applied with eXist. Moreover, our initial hybrid system (without the CRE module) improves the retrieval effectiveness of the plain eXist database by 2.8 times. The latter behaviour is strongly influenced by the presence of a full-text information retrieval system in our hybrid system. Similar improvement for the retrieval effectiveness is exhibited when both eXist and the hybrid system use the CRE module.
The generalised quantisation function is used to evaluate the XML retrieval approaches when retrieving elements with different degrees of relevance [1]. Table 4 shows that in this case the plain hybrid system (without the CRE module applied) performs best. We furthermore observe that the effectiveness of retrieval systems that use the CRE module is lower than the effectiveness of the same systems without the CRE module applied. It is very likely that some marginally relevant elements are omitted from the list of the resulting CREs, which, as shown before, is not the case when the XML retrieval task focusses on highly relevant elements.
Although Zettair alone can not be applied to the CAS retrieval topics in the second category, Table 4 shows that overall it still performs better than eXist, regardless of which quantisation function applies. This is rather surprising, and reflects our previous expectation that for the CAS topics in the first category Zettair is indeed capable of retrieving highly relevant articles, whereas the first retrieval issue observed in eXist has a negative impact on its overall effectiveness. On the other hand, both the plain hybrid system and the hybrid system with the CRE module outperform eXist under both quantisation functions. The graph in Figure 3 outlines a detailed summary of the evaluation results for the XML retrieval approaches when the standard inex_eval evaluation metric with the strict quantisation function applies. It shows runs that produce the best results for each XML retrieval approach, which (except plain Zettair) represent the approaches that apply the CRE retrieval module. As previously observed, the hybrid-CRE run performs best, followed by the Zettair run, while the eXist-CRE run performs worst.
Analysis based on CAS Topic Categories
Our last experiment is based upon the INEX 2003 CAS topic categories described in Section 2.2. The retrieval effectiveness of the XML retrieval approaches is evaluated across three CAS topic categories: All topics, Article topics and Specific topics. The strict quantisation function in inex_eval evaluation metric is used to calculate the average precision values for each run. Table 5 shows final results for each XML retrieval approach evaluated across the three topic categories. For Article topics the Zettair run performs best, outperforming the hybrid-CRE run and the eXist-CRE run. This is very surprising, and shows that there are cases where a highly relevant article does not necessarily represent a matching article satisfying logical query constraints. For Specific topics where Zettair run does not apply, the hybrid-CRE run is roughly 2.7 times more effective than eXist-CRE run. As shown previously, when both CAS topic categories are considered (the case of All topics), the hybrid-CRE run performs best.
CONCLUSION AND FUTURE WORK
This paper investigates the impact when three systems with different XML retrieval approaches are used in the XML content-and-structure (CAS) retrieval task: Zettair, a fulltext information retrieval system; eXist, a native XML database, and a hybrid XML retrieval system that combines the best retrieval features from Zettair and eXist.
Two categories of CAS retrieval topics can be identified in INEX 2003: the first category of topics where full article elements are retrieved, and the second category of topics where more specific elements within articles are retrieved. We have shown that a full-text information retrieval system yields effective retrieval for CAS topics in the first category. For CAS topics in the second category we have used a native XML database and have observed two issues particularly related to the XML retrieval process that have a negative impact on its retrieval effectiveness.
In order to address the first issue, as well as to support CAS XML retrieval that combines both topic categories, we have developed and evaluated a hybrid XML retrieval system that uses eXist to produce final answers from the likely relevant articles retrieved by Zettair. For addressing the second issue we have developed a retrieval module that ranks and retrieves Coherent Retrieval Elements (CREs) from the answer list of a native XML database. We have shown that our CRE module is capable of retrieving answer elements with appropriate levels of retrieval granularity, which means it could equally be applied to the VCAS retrieval task as to the SCAS retrieval task. Moreover, the CRE retrieval module can easily be used by other native XML databases, since most of them output their answer lists in article order.
We have shown through the final results of our experiments that our hybrid XML retrieval system with the CRE retrieval module improves the effectiveness of both retrieval systems and yields an effective content-and-structure XML retrieval. However, this improvement is not as apparent as it is for content-only (CO) retrieval topics where no indication for the granularity of the answer elements is provided [6]. The latter reflects the previous observation that the XML retrieval task should focus more on providing answer elements relevant to an information need instead of focusing on retrieving the elements that only satisfy the logical query constraints.
We plan to undertake the following extensions of this work in the future.
• Our CRE module is currently not capable of comparing the ranking values of CREs coming out of answer lists that belong to different articles. We therefore aim at investigating whether or not additionally using Zettair as a means to rank the CREs coming out of different answer lists would be an effective solution.
• For further improvement of the effectiveness of our hybrid XML retrieval system, we also aim at investigating the optimal combination of Coherent Retrieval and matching elements in the final answer list, which could equally be applied to CAS as well as to CO retrieval topics.
| 4,536 |
cs0503028
|
2953061427
|
An information agent is viewed as a deductive database consisting of 3 parts: an observation database containing the facts the agent has observed or sensed from its surrounding environment, an input database containing the information the agent has obtained from other agents, and an intensional database which is a set of rules for computing derived information from the information stored in the observation and input databases. Stabilization of a system of information agents represents a capability of the agents to eventually get correct information about their surrounding despite unpredictable environment changes and the incapability of many agents to sense such changes causing them to have temporary incorrect information. We argue that the stabilization of a system of cooperative information agents could be understood as the convergence of the behavior of the whole system toward the behavior of a "superagent", who has the sensing and computing capabilities of all agents combined. We show that unfortunately, stabilization is not guaranteed in general, even if the agents are fully cooperative and do not hide any information from each other. We give sufficient conditions for stabilization and discuss the consequences of our results.
|
In this paper, we consider a specific class of cooperative information agents and do not consider the effects of their actions on the environment as in, e.g., @cite_9 , @cite_12 , @cite_10 . We are currently working on extending the framework towards this more general setting.
|
{
"abstract": [
"This paper presents ALIAS, an agent architecture based on intelligent logic agents, where the main form of agent reasoning is abduction. The system is particularly suited for solving problems where knowledge is incomplete, where agents may need to make reasonable hypotheses about the problem domain and other agents, and where the raised hypotheses have to be consistent for the overall set of agents. ALIAS agents are pro-active, exhibiting a goal-directed behavior, and autonomous, since each one can solve problems using its own private knowledge base. ALIAS agents are also social, because they are able to interact with other agents, in order to cooperatively solve problems. The coordination mechanisms are modeled by means of LAILA, a logic-based language which allows to express intra-agent reasoning and inter-agent coordination. As an application, we show how LAILA can be used to implement inter-agent dialogues, e.g., for negotiation. In particular, LAILA is well-suited to coordinate the process of negotiation aimed at exchanging resources between agents, thus allowing them to execute the plans to achieve their goals.",
"In multi-agent system, we often face incompleteness of information due to communication failure or other agent's suspension of decisions. To solve the incompleteness, we previously proposed speculative computation using abduction in the context of matter-slave multi-agent systems and gave a procedure in abductive logic programming [14]. In the work, a master agent prepares a default value for a question in advance and it performs speculative computation using the default without waiting for a reply for the question. This computation is effective unless the contradictory reply with the default is returned. However, we find that this mechanism is not sufficient for speculative computation in more general multi-agent systems such that replies can be revised according to other agents' speculative computation. In this paper, we formalize speculative computation with multi-agent belief revision and propose a correct procedure for such computation.",
""
],
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_12"
],
"mid": [
"1585686429",
"2131552518",
""
]
}
|
Stabilization of Cooperative Information Agents in Unpredictable Environment: A Logic Programming Approach
|
To operate effectively in a dynamic and unpredictable environment, agents need correct information about the environment. Often only part of this environment can be sensed by the agent herself. As the agent may need information about other parts of the environment that she cannot sense, she needs to cooperate with other agents to get such information. There are many such systems of cooperative information agents operating in the Internet today. A prominent example of such a system is the system of routers that cooperate to deliver messages from one place to another in the Internet. One of the key characteristics of these systems is their resilience in the face of unpredictable changes in their environment and the incapability of many agents to sense such changes, which causes them to have temporarily incorrect information. This is possible because agents in such systems cooperate by exchanging tentative partial results to eventually converge on a correct and consistent global view of the environment. Together they constitute a stabilizing system that allows the individual agents to eventually get a correct view of their surroundings.
Agent communications could be classified into push-based communications and pull-based communications. In the push-based communication, agents periodically send information to specific recipients. Push-based communications are used widely in routing system, network protocols, emails, videoconferencing calls, etc. A key goal of these systems is to guarantee that the agents have a correct view of their surrounding. On the other hand, in the pull-based communication, agents have to send a request for information to other agents and wait for a reply. Until now pull-based communications are the dominant mode of communication in research in multiagent systems, e.g. (Shoham 1993), (Satoh and Yamamoto 2002), (Ciampolini et al. 2003), (Kowalski and Sadri 1999), (Wooldridge 1997), (Wooldridge and Jennings 1995). In this paper, we consider multiagent systems where agent communications are based on push-technologies. A prominent example of a push-based multiagent system is the internet routing system. This paper studies the problem of stabilization of systems of cooperative information agents where an information agent is viewed as a deductive database which consists of 3 parts:
• an observation database containing the facts the agent has observed or sensed from its surrounding environment;
• an input database containing the information the agent was told by other agents; and
• an intensional database, which is a set of rules for computing derived information from the information stored in the observation and input databases.
It turns out that in general, it is not possible to ensure that the agents will eventually have the correct information about the environment even if they honestly exchange information and do not hide any information that is needed by others and every change in the environment is immediately sensed by some of the agents. We also introduce sufficient conditions for stabilization.
The stabilization of distributed protocols has been studied extensively in the literature ( (Dijkstra 1974), (Flatebo et al. 1994), (Schneider 1993)) where agents are defined operationally as automata. Dijkstra (1974) defined a system as stabilizing if it is guaranteed to reach a legitimate state after a finite number of steps regardless of the initial state. The definition of what constitutes a legitimate state is left to individual algorithms. Thanks to the introduction of an explicit notion of environment, we could characterize a legitimate state as a state in which the agents have correct information about their environment. In this sense, we could say that our agents are a new form of situated agents ( (Rosenschein and Kaelbling 1995), (Brooks 1991), (Brooks 1986)) that may sometimes act on wrong information but nonetheless will be eventually situated after getting correct information about their surrounding. Further in our approach, agents are defined as logic programs, and hence it is possible for us to get general results about what kind of algorithms could be implemented in stabilizing multiagent systems in many applications. To the best of our knowledge, we believe that our work is the first work on stabilization of multiagent systems.
The rest of this paper is organized as follows. Basic notations and definitions used in this paper are briefly introduced in section 2. We give an illustrating example and formalize the problem in section 3. Related works and conclusions are given in section 4. Proofs of theorems are given in Appendices.
Preliminaries: Logic Programs and Stable Models
In this section we briefly introduce the basic notations and definitions that are needed in this paper.
We assume the existence of a Herbrand base HB.
A logic program is a set of ground clauses of the form:
H ← L_1, . . . , L_m
where H is an atom from HB, and L_1, . . . , L_m are literals (i.e., atoms or negations of atoms) over HB, m ≥ 0. H is called the head, and L_1, . . . , L_m the body of the clause. Given a set of clauses S, the set of the heads of clauses in S is denoted by head(S). Note that clauses with variables are considered as a shorthand for the set of all their ground instantiations. Often the variables appearing in a non-ground clause have types that are clear from the context. In such cases these variables are instantiated by ground terms of the corresponding types.
For each atom a, the definition of a is the set of all clauses whose head is a.
A logic program is bounded if the definition of every atom is finite. Let P be an arbitrary logic program. For any set S ⊆ HB, let P^S be the program obtained from P by deleting (1) each rule that has a negative literal ¬B in its body with B ∈ S, and (2) all negative literals in the bodies of the remaining rules. S is a stable model (Gelfond and Lifschitz 1988) of P if S is the least model of P^S.
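As an aside, the stable-model test just defined can be spelled out directly for finite ground programs. The following Python sketch is only an illustration of the Gelfond-Lifschitz construction (a program is represented as a list of (head, positive body, negative body) triples); it is not part of the paper and not an efficient answer-set solver.

```python
def reduct(program, S):
    # Delete every rule with a negative literal not B such that B is in S,
    # then drop the remaining negative literals.
    return [(head, pos) for head, pos, neg in program if not (set(neg) & S)]

def least_model(positive_program):
    # Iterate the immediate-consequence operator to a fixpoint.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if head not in model and set(pos) <= model:
                model.add(head)
                changed = True
    return model

def is_stable_model(program, S):
    return least_model(reduct(program, set(S))) == set(S)

# Example: the program {a <- not b} has exactly one stable model, {a}.
P = [("a", [], ["b"])]
print(is_stable_model(P, {"a"}), is_stable_model(P, {"b"}))   # True False
```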
The atom dependency graph of a logic program P is a graph, whose nodes are atoms in HB and there is an edge from a to b in the graph iff there is a clause in P whose head is a and whose body contains b or ¬b. Note that in the literature (Apt et al. 1988), the direction of the link is from the atom in the body to the head of a clause. We reverse the direction of the link for the ease of definition of acyclicity using the atom dependency graph.
An atom b is said to be relevant to an atom a if there is a path from a to b in the atom dependency graph.
A logic program P is acyclic iff there is no infinite path in its atom dependency graph. It is well known that an acyclic logic program has a unique stable model.
Fig. 1. A network example.
Example 3.1
Consider a network of five nodes A_1, . . . , A_5, connected as shown in Fig. 1, with an information agent located at each node.
The problem for each agent is to find the shortest paths from her node to other nodes. The environment information an agent can sense is the availability of links connecting to her node. The agents use an algorithm known as "distance vector algorithm" ( (Bellman 1957), (Ford and Fulkerson 1962)) to find the shortest paths from their nodes to other nodes. If the destination is directly reachable by a link, the cost is 1. If the destination is not directly reachable, an agent needs information from its neighbors about their shortest paths to the destination. The agent will select the route to the destination through a neighbor who offers a shortest path to the destination among the agent's neighbors. Thus at any point of time, each agent needs three kinds of information:
• The information about the environment, that the agent can acquire with her sensing capability. In our example, agent A 1 could sense whether the links connecting her and her neighbors A 2 , A 4 are available. • The algorithm the agent needs to solve her problem. In our example the algorithm for agent A 1 is represented by the following clauses: 1
sp(A_1, A_1, 0) ←
sp(A_1, y, d) ← spt(A_1, y, x, d)
spt(A_1, y, x, d+1) ← link(A_1, x), sp(x, y, d), not spl(A_1, y, d+1)
spl(A_1, A_1, d+1) ←
spl(A_1, y, d+1) ← link(A_1, x), sp(x, y, d'), d' < d
where link(A_i, A_j)
is true iff there is a link from A_i to A_j in the network and the link is intact. Links are undirected, i.e. we identify link(A_i, A_j) and link(A_j, A_i). sp(A_1, y, d) is true iff a shortest path from A_1 to y has length d; spt(A_1, y, x, d) is true iff the length of the shortest paths from A_1 to y is d and there is a shortest path from A_1 to y that goes through x as the next node after A_1; spl(A_1, y, d) is true iff there is a path from A_1 to y whose length is less than d.
• The information the agent needs from other agents. For agent A_1 to calculate the shortest paths from her node to, say, A_3, she needs information about the length of the shortest paths from her neighbours A_2 and A_4 to A_3; that is, she needs to know the values d, d' such that sp(A_2, A_3, d) and sp(A_4, A_3, d') hold.
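For readers who prefer a procedural view, the following Python sketch renders the distance-vector step that the clauses above encode for a single agent: from the live links the agent can sense and the shortest-path lengths reported by its neighbours, it derives its own sp facts. The function and variable names are illustrative and not taken from the paper.

```python
def shortest_paths(me, live_links, neighbour_sp):
    """One distance-vector step for agent `me`.

    live_links:   neighbours currently reachable by an intact link
    neighbour_sp: neighbour -> {destination: reported shortest-path length}
    """
    sp = {me: 0}                              # sp(me, me, 0)
    for n in live_links:
        for dest, d in neighbour_sp.get(n, {}).items():
            # A route via neighbour n costs one hop more than n's own route.
            if dest not in sp or d + 1 < sp[dest]:
                sp[dest] = d + 1
    return sp

# Example mirroring agent A_1 with both links intact: A_2 and A_4 report their
# current views, and A_1 derives sp(A_1, A_2, 1), sp(A_1, A_4, 1), sp(A_1, A_5, 2).
print(shortest_paths("A1", {"A2", "A4"},
                     {"A2": {"A2": 0, "A5": 1}, "A4": {"A4": 0}}))
```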
Problem Formalization
The agents are situated in the environment. They may have different accessibility to the environment depending on their sensing capabilities. The environment is represented by a set of (ground) environment atoms, whose truth values could change in an unpredictable way. An agent is represented by a tuple A = (IDB, HBE, HIN, δ), where
• IDB, the intensional database, is an acyclic logic program.
• HBE is the set of all (ground) environment atoms whose truth values the agent could sense, i.e. a ∈ HBE iff A could discover instantly any change in the truth value of a and update her extensional database accordingly.
• HIN is the set of all atoms, called input atoms, whose truth values the agent must obtain from other agents. No atom in HIN ∪ HBE appears in the head of the clauses in IDB, and HIN ∩ HBE = ∅.
• δ is the initial state of the agent. A state of the agent is a pair (EDB, IN), where EDB ⊆ HBE is the observation database (for each a ∈ HBE, a ∈ EDB iff a is true) and IN ⊆ HIN, the input database of A, represents the set of information A has obtained from other agents (a ∈ IN iff A was told that a is true).
Given a state σ = (EDB, IN ), the stable model of A = (IDB, HBE, HIN, δ) at σ is defined as the stable model of IDB ∪ EDB ∪ IN . Note that δ and σ could be different states.
Example 3.2 (Continuation of the network routing example)
Imagine that initially the agents have not sent each other any information and all links are intact. In this situation, agent A 1 is represented as follows:
• IDB 1 contains the clauses shown in Example 3.1.
• HBE_1 = {link(A_1, A_2), link(A_1, A_4)}.
• HIN_1 consists of ground atoms of the form sp(A_2, Y, D), sp(A_4, Y, D), where Y ∈ {A_2, . . . , A_5} and D is a positive integer.
• The initial state δ_1 = (EDB_{1,0}, IN_{1,0}), where EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)} and IN_{1,0} = ∅.
Definition 3.3
A cooperative multiagent system is a collection of n agents (A_1, . . . , A_n), with A_i = (IDB_i, HBE_i, HIN_i, δ_i)
such that the following conditions are satisfied
• for each atom a, if a ∈ head(IDB i ) ∩ head(IDB j ) then a has the same definition in IDB i and IDB j .
• for each agent A_i, HIN_i ⊆ ∪_{j=1}^{n} (head(IDB_j) ∪ HBE_j)
• No environment atom appears in the head of clauses in the intensional database of any agent, i.e. for all i, j:
HBE i ∩ head(IDB j ) = ∅. For each agent A i let HB i = head(IDB i ) ∪ HBE i ∪ HIN i .
Agent Communication and Sensing
Let A_i = (IDB_i, HBE_i, HIN_i, δ_i) for 1 ≤ i ≤ n. We say that A_i depends on A_j if A_i needs input from A_j, i.e. HIN_i ∩ (head(IDB_j) ∪ HBE_j) ≠ ∅. The dependency of A_i on A_j is defined to be the set D(i, j) = HIN_i ∩ (head(IDB_j) ∪ HBE_j).
As we have mentioned before, the mode of communication for our agents corresponds to the "push-technology". Formally, it means that if A i depends on A j
then A_j will periodically send A_i a set S = D(i, j) ∩ M_j, where M_j is the stable model of A_j.
When A i obtains S, she knows that each atom a ∈ D(i, j) \ S is false with respect to M j . Therefore she will update her input database
IN_i to Upa_{i,j}(IN_i, S) as follows: Upa_{i,j}(IN_i, S) = (IN_i \ D(i, j)) ∪ S. Thus her state has changed from σ_i = (EDB_i, IN_i) to σ'_i = (EDB_i, Upa_{i,j}(IN_i, S)) accordingly.
An environment change is represented by a pair C = (T, F ) where T (resp. F ) contains the atoms whose truth values have changed from false (resp. true) to true (resp. false). Therefore, given an environment change
(T, F ), what A i could sense of this change, is captured by the pair (T i , F i ) where T i = T ∩ HBE i and F i = F ∩ HBE i .
Hence when a change C = (T, F) occurs in the environment, agent A_i will update her sensing database EDB_i to Upe_i(EDB_i, C) as follows:
Upe_i(EDB_i, C) = (EDB_i \ F_i) ∪ T_i.
The state of agent A_i has changed from σ_i = (EDB_i, IN_i) to σ'_i = (Upe_i(EDB_i, C), IN_i) accordingly.
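The two update operations are simple set manipulations; the following Python sketch transcribes them directly, treating databases as sets of ground atoms. It is offered only as an illustration; D_ij stands for the dependency D(i, j), and the atom strings are placeholders.

```python
def upa(IN_i, S, D_ij):
    # Communication update: everything A_i previously heard from A_j
    # (the atoms in D(i, j)) is replaced by the freshly received set S.
    return (IN_i - D_ij) | S

def upe(EDB_i, T, F, HBE_i):
    # Environment update: only the part of the change (T, F) that A_i can
    # sense, i.e. its restriction to HBE_i, is applied to the observation DB.
    T_i, F_i = T & HBE_i, F & HBE_i
    return (EDB_i - F_i) | T_i

# Example from the routing scenario: A_1 senses that link(A_1, A_2) went down.
EDB_1 = {"link(A1,A2)", "link(A1,A4)"}
print(upe(EDB_1, set(), {"link(A1,A2)"}, {"link(A1,A2)", "link(A1,A4)"}))
# -> {'link(A1,A4)'}
```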
Semantics of Multiagent Systems
Let A = (A 1 , . . . , A n ) with A i = (IDB i , HBE i , HIN i , δ i ) be a multiagent system. (δ 1 , . . . , δ n ) is called the initial state of A. A state of A is defined as △ = (σ 1 , . . . , σ n ) such that σ i is a state of agent A i .
There are two types of transitions in a multiagent system. An environment transition happens when there is a change in the environment which is sensed by a set of agents and causes these agents to update their extensional databases accordingly. A communication transition happens when an agent sends information to another agent and causes the latter to update her input database accordingly.
For an environment change C = (T, F ), let S C be the set of agents which could sense parts of C, i.e.
S_C = {A_i | HBE_i ∩ (T ∪ F) ≠ ∅}.
Definition 3.4
Let △ = (σ_1, . . . , σ_n) and △' = (σ'_1, . . . , σ'_n) be states of A with σ_i = (EDB_i, IN_i) and σ'_i = (EDB'_i, IN'_i).
1. An environment transition △ --C--> △' caused by an environment change C = (T, F) is defined as follows:
(a) for every agent A_k such that A_k ∉ S_C: σ_k = σ'_k; and
(b) for each agent A_i ∈ S_C: EDB'_i = Upe_i(EDB_i, C) and IN'_i = IN_i.
2. A communication transition △ --(j→i)--> △' caused by agent A_j sending information to agent A_i, where A_i depends on A_j, is defined as follows:
(a) for all k such that k ≠ i: σ_k = σ'_k;
(b) EDB'_i = EDB_i and IN'_i = Upa_{i,j}(IN_i, S), where S = D(i, j) ∩ M_j and M_j is the stable model of A_j at σ_j.
We often simply write △ → △' if there is a transition △ --C--> △' or △ --(j→i)--> △'.
Definition 3.5 A run of a multiagent system A is an infinite sequence
△ 0 → △ 1 → . . . → △ m → . . .
such that
• △ 0 is the initial state of A and for all agents A i , A j such that A i depends on A j the following condition is satisfied:
For each h, there is a k ≥ h such that △_k --(j→i)--> △_{k+1}.
The above condition is introduced to capture the idea that agents periodically send the needed information to other agents.
• There is a point h such that at every point k ≥ h in the run, there is no more environment change.
For a run R = △ 0 → △ 1 → . . . → △ k → . . . where △ k = (σ 1,k , . . . , σ n,k ) we often refer to the stable model of A i at state σ i,k as the stable model of A i at point k and denote it by M i,k .
Example 3.3
Consider the following multiagent system
A = (A_1, A_2) where
IDB_1 = {a ← b, c;  f ← a}          IDB_2 = {b ← a, d;  b ← e}
HBE_1 = {c}                         HBE_2 = {d, e}
HIN_1 = {b}                         HIN_2 = {a}
EDB_{1,0} = {c}                     EDB_{2,0} = {d, e}
IN_{1,0} = ∅                        IN_{2,0} = ∅
Consider the following run R, where the only environment change occurs at point 2 such that the truth value of e becomes false:
△_0 --(2→1)--> △_1 --(1→2)--> △_2 --(∅,{e})--> △_3 --(1→2)--> △_4 --(2→1)--> △_5 → . . .
The states and stable models of A 1 and A 2 at points 0, 1, 2, 3, and 4 are as follows
            A_1                                  A_2
k   EDB   IN    Stable Model            EDB      IN    Stable Model
0   {c}   ∅     {c}                     {d, e}   ∅     {b, d, e}
1   {c}   {b}   {a, b, c, f}            {d, e}   ∅     {b, d, e}
2   {c}   {b}   {a, b, c, f}            {d, e}   {a}   {a, b, d, e}
3   {c}   {b}   {a, b, c, f}            {d}      {a}   {a, b, d}
4   {c}   {b}   {a, b, c, f}            {d}      {a}   {a, b, d}
Example 3.4 (Continuation of example 3.2)
Consider the following run R of the multiagent system given in Example 3.2.
△_0 --(2→1)--> △_1 --(∅,{link(A_1,A_2)})--> △_2 → . . .
Initially, all links are intact and all inputs of agents are empty, i.e. IN i,0 = ∅ for i = 1, . . . , 5. At point 0 in the run, agent A 2 sends to agent A 1 information about shortest paths from her to other agents. At point 1 in the run, the link between A 1 and A 2 is down.
The information (output) an agent needs to send to other agents consists of shortest paths from her to other agents. Thus from the stable model of an agent we are interested only in this output.
Let SP_{i,k} be the set {sp(A_i, Y, D) | sp(A_i, Y, D) ∈ M_{i,k}}, where M_{i,k} is the stable model of A_i at point k; SP_{i,k} denotes the output of A_i at point k. It is easy to see that if there is a transition △_k --(j→i)--> △_{k+1}, then A_j sends to A_i the set S = D(i, j) ∩ M_{j,k} = SP_{j,k}.
At point 0, A 1 and A 2 have the following states and outputs:
EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)}, IN_{1,0} = ∅, SP_{1,0} = {sp(A_1, A_1, 0)}
EDB_{2,0} = {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}, IN_{2,0} = ∅, SP_{2,0} = {sp(A_2, A_2, 0)}
A_2 sends S to A_1 in the transition △_0 --(2→1)--> △_1, where S = SP_{2,0} = {sp(A_2, A_2, 0)}. Thus IN_{1,1} = Upa_{1,2}(IN_{1,0}, S) = Upa_{1,2}(∅, S) = S = {sp(A_2, A_2, 0)}.
The environment change C = (∅, {link(A 1 , A 2 )}) at point 1 is sensed by A 1 and A 2 . The states of A 1 and A 2 are changed as follows:
IN_{1,2} = IN_{1,1}, EDB_{1,2} = Upe_1(EDB_{1,1}, C) = (EDB_{1,1} \ {link(A_1, A_2)}) ∪ ∅ = {link(A_1, A_4)}
IN_{2,2} = IN_{2,1}, EDB_{2,2} = Upe_2(EDB_{2,1}, C) = (EDB_{2,1} \ {link(A_1, A_2)}) ∪ ∅ = {link(A_2, A_3), link(A_2, A_5)}
The following tables show the states and outputs of A 1 and A 2 at points 0, 1, and 2 respectively.
A_1:
k   EDB                                 IN                  SP
0   {link(A_1, A_2), link(A_1, A_4)}    ∅                   {sp(A_1, A_1, 0)}
1   {link(A_1, A_2), link(A_1, A_4)}    {sp(A_2, A_2, 0)}   {sp(A_1, A_1, 0), sp(A_1, A_2, 1)}
2   {link(A_1, A_4)}                    {sp(A_2, A_2, 0)}   {sp(A_1, A_1, 0)}

A_2:
k   EDB                                               IN   SP
0   {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}  ∅    {sp(A_2, A_2, 0)}
1   {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}  ∅    {sp(A_2, A_2, 0)}
2   {link(A_2, A_3), link(A_2, A_5)}                  ∅    {sp(A_2, A_2, 0)}
Stabilization
Consider a superagent whose sensing capability and problem solving capability are the combination of the sensing capabilities and problem solving capabilities of all agents, i.e. this agent can sense any change in the environment and her intensional database is the union of the intensional databases of all other agents. Formally, the superagent of a multiagent system
A = (A 1 , . . . , A n ) where A i = (IDB i , HBE i , HIN i , δ i ), δ i = (EDB i , IN i )
is represented by
P_A = (IDB_A, δ) where
• IDB_A = IDB_1 ∪ · · · ∪ IDB_n
• δ, the initial state of P_A, is equal to EDB_1 ∪ · · · ∪ EDB_n
The superagent actually represents the multiagent system in the ideal case where each agent has obtained the correct information for its input atoms.
Example 3.5 (Continuation of Example 3.3)
Consider the multiagent system in Example 3.3. At point 0, the superagent P A is represented as follows:
• IDB A consists of the following clauses:
a ← b, c
f ← a
b ← a, d
b ← e
• δ = {c, d, e}.
Example 3.6 (Continuation of Example 3.4) Consider the multiagent system in Example 3.4. Initially, when all links between nodes are intact, the superagent P A is represented as follows:
• IDB A consists of the following clauses:
sp(x, x, 0) ←
sp(x, y, d) ← spt(x, y, z, d)
spt(x, y, z, d+1) ← link(x, z), sp(z, y, d), not spl(x, y, d+1)
spl(x, x, d+1) ←
spl(x, y, d+1) ← link(x, z), sp(z, y, d'), d' < d
• The initial state δ = {link(A_1, A_2), link(A_1, A_4), link(A_2, A_3), link(A_2, A_5), link(A_3, A_5), link(A_4, A_5)}
Note that the possible values of variables x, y, z are A 1 , A 2 , A 3 , A 4 , A 5 .
Definition 3.6
Let A be a multiagent system. The I/O graph of A denoted by G A is a graph obtained from the atom dependency graph of its superagent's intensional database IDB A by removing all nodes that are not relevant for any input atom in HIN 1 ∪ · · · ∪ HIN n .
A is IO-acyclic if there is no infinite path in its I/O graph G_A. A is bounded if IDB_A is bounded. A is IO-finite if its I/O graph is finite.
Example 3.7
The atom dependency graph of IDB_A and the I/O graph G_A of the multiagent system in Examples 3.3 and 3.5 are given in Fig. 2. It is obvious that the multiagent system in Examples 3.3 and 3.5 is bounded but not IO-acyclic, and that the multiagent system in Examples 3.1, 3.2, 3.4 and 3.6 is IO-acyclic and bounded.
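For finite ground systems, IO-acyclicity can be checked mechanically: build the atom dependency graph of IDB_A (with edges from heads to body atoms, as defined in Section 2), keep only atoms relevant to some input atom, and look for a directed cycle, since in a finite graph the absence of infinite paths is exactly the absence of cycles. The following Python sketch is an illustration under these assumptions, not code from the paper.

```python
def io_graph(idb_A, input_atoms):
    """Restrict the atom dependency graph of IDB_A (head -> body atoms)
    to the atoms relevant to at least one input atom."""
    graph = {}
    for head, body in idb_A:              # idb_A: list of (head, body_atoms)
        graph.setdefault(head, set()).update(body)
    keep, stack = set(), list(input_atoms)
    while stack:                          # atoms reachable from input atoms
        a = stack.pop()
        if a not in keep:
            keep.add(a)
            stack.extend(graph.get(a, ()))
    return {a: {b for b in graph.get(a, ()) if b in keep} for a in keep}

def has_cycle(graph):
    """Depth-first search with grey/black colouring to detect a directed cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {a: WHITE for a in graph}
    def visit(a):
        colour[a] = GREY
        for b in graph.get(a, ()):
            if colour.get(b, WHITE) == GREY:
                return True
            if colour.get(b, WHITE) == WHITE and visit(b):
                return True
        colour[a] = BLACK
        return False
    return any(colour[a] == WHITE and visit(a) for a in list(graph))

# The system of Example 3.3: HIN_1 = {b}, HIN_2 = {a}, and a and b depend on
# each other, so the I/O graph has a cycle and the system is not IO-acyclic.
idb_A = [("a", ["b", "c"]), ("f", ["a"]), ("b", ["a", "d"]), ("b", ["e"])]
print(has_cycle(io_graph(idb_A, {"a", "b"})))   # True
```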
Proposition 3.1 If a multiagent system A is IO-acyclic then IDB A is acyclic.
Proof
Suppose IDB_A is not acyclic. Then there is an infinite path η in its atom dependency graph starting from some atom a. There is some agent A_i such that a ∈ HB_i. Since IDB_i is acyclic, every path in its atom dependency graph is finite, so η must go through some atom b ∈ HIN_i to get outside of A_i's atom dependency graph. Clearly, starting from b, all atoms in η are relevant to b. The infinite suffix of η starting from b is therefore a path in the I/O graph G_A. Hence G_A is not acyclic. Contradiction!
Definition 3.7
Let R = △_0 → . . . → △_k → . . . be a run and M_{i,k} be the stable model of A_i at point k.
1. R is convergent for an atom a if either of the following conditions is satisfied.
• There is a point h such that at every point k ≥ h, for every agent A_i with a ∈ HB_i = head(IDB_i) ∪ HBE_i ∪ HIN_i, a ∈ M_{i,k}. In this case we write Conv(R, a) = true.
• There is a point h such that at every point k ≥ h, for every agent A_i with a ∈ HB_i, a ∉ M_{i,k}. In this case we write Conv(R, a) = false.
2. R is convergent if it is convergent for each atom.
R is strongly convergent if it is convergent and there is a point h such that at every point k ≥ h, for every agent
A i , M i,k = M i,h .
It is easy to see that strong convergence implies convergence. For a convergent run R, define the convergence model Conv(R) = {a | Conv(R, a) = true}, and define the stabilized environment of R as EDB_{1,h} ∪ · · · ∪ EDB_{n,h}, where h is a point after which there is no more environment change in R.
Definition 3.8
• A multiagent system is said to be weakly stabilizing if every run R is convergent, and its convergence model Conv(R) is a stable model of P_A in the stabilized environment of R, i.e. Conv(R) is a stable model of IDB_A ∪ EDB, where EDB is the stabilized environment of R.
• A multiagent system is said to be stabilizing if it is weakly stabilizing and all of its runs are strongly convergent.
Theorem 3.1 IO-acyclic and bounded multiagent systems are weakly stabilizing.
Proof
See Appendix A.
Unfortunately, the above theorem does not hold for more general class of multiagent systems as the following example shows.
Example 3.8 (Continuation of example 3.3 and 3.5) Consider the multiagent system A and run R in Example 3.3. It is obvious that A is bounded but not IO-acyclic.
For every point k ≥ 4,
M_{1,k} = {a, b, c, f} and M_{2,k} = {a, b, d}. Thus Conv(R) = {a, b, c, d, f}. The stabilized environment of R is EDB = {c, d}.
The stable model of P A in the stabilized environment of R is {c, d}, which is not the same as Conv(R). Hence the system is not weakly stabilizing.
Boundedness is very important for the weak stabilization of multiagent systems. Consider a multiagent system in the following example which is IO-acyclic, but not bounded.
Example 3.9
Consider the following multiagent system
A = (A_1, A_2) where
IDB_1 = {q ← ¬r(x);  s(x) ← r(x)}        IDB_2 = {r(x+1) ← s(x);  r(0) ←}
HBE_1 = {}                               HBE_2 = {}
HIN_1 = {r(0), r(1), . . . }             HIN_2 = {s(0), s(1), . . . }
EDB_{1,0} = ∅   IN_{1,0} = ∅             EDB_{2,0} = ∅   IN_{2,0} = ∅
Since HBE = HBE 1 ∪ HBE 2 = ∅, for every run R the stabilized environment of R is empty. The stable model of P A in the stabilized environment of R is the set {r(0), r(1), . . . }∪{s(0), s(1), . . . }. It is easy to see that for each run, the agents need to exchange infinitely many messages to establish all the values of r(x). Hence for every run R, for every point h ≥ 0 in the run: q ∈ M 1,h , but q is not in the stable model of P A in the stabilized environment of R. Thus the system is not weakly stabilizing.
Are the boundedness and IO-acyclicity sufficient to guarantee the stabilization of a multiagent system? The following example shows that they are not.
Example 3.10 (Continuation of Example 3.4 and 3.6)
Consider the multiagent system in Example 3.2. Consider the following run R with no environment change after point 6.
△_0 --(5→2)--> △_1 --(5→4)--> △_2 --(2→1)--> △_3 --(∅,{link(A_1,A_2)})--> △_4 --(4→1)--> △_5 --(∅,{link(A_4,A_5)})--> △_6 --(1→4)--> △_7 --(4→1)--> △_8 → . . .
Initially all links in the network are intact. The states and outputs of agents are as follows:
• EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)}, EDB_{2,0} = {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}, EDB_{3,0} = {link(A_3, A_2), link(A_3, A_5)}, EDB_{4,0} = {link(A_4, A_1), link(A_4, A_5)}, EDB_{5,0} = {link(A_5, A_2), link(A_5, A_3), link(A_5, A_4)}.
• IN_{i,0} = ∅ for i = 1, . . . , 5.
• SP_{i,0} = {sp(A_i, A_i, 0)} for i = 1, . . . , 5.
Recall that SP i,k denotes the output of A i at point k and is defined as follows:
SP i,k = {sp(A i , Y, D)|sp(A i , Y, D) ∈ M i,k }
The following transitions occur in R:
• At point 0, A_5 sends SP_{5,0} = {sp(A_5, A_5, 0)} to A_2. This causes the following changes in the input and output of A_2:
IN_{2,1} = {sp(A_5, A_5, 0)}, SP_{2,1} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)}
• At point 1, A_5 sends SP_{5,1} = {sp(A_5, A_5, 0)} to A_4. This causes the following changes in the input and output of A_4:
IN_{4,2} = {sp(A_5, A_5, 0)}, SP_{4,2} = {sp(A_4, A_4, 0), sp(A_4, A_5, 1)}
• At point 2, A_2 sends SP_{2,2} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)} to A_1. This causes the following changes in the input and output of A_1:
IN_{1,3} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)}, SP_{1,3} = {sp(A_1, A_1, 0), sp(A_1, A_2, 1), sp(A_1, A_5, 2)}
• At point 3, the link between A_1 and A_2 goes down, as shown in Fig. 3. This causes A_1 and A_2 to update their extensional databases accordingly.
Fig. 3. The network after the link between A_1 and A_2 is down.
• At point 4, A_4 sends SP_{4,4} = {sp(A_4, A_4, 0), sp(A_4, A_5, 1)} to A_1. This causes the following changes in the input and output of A_1:
IN_{1,5} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1), sp(A_4, A_4, 0), sp(A_4, A_5, 1)}, SP_{1,5} = {sp(A_1, A_1, 0), sp(A_1, A_4, 1), sp(A_1, A_5, 2)}
• At point 5, the link between A_4 and A_5 goes down, as shown in Fig. 4. This causes A_4 and A_5 to update their extensional databases accordingly. Note that at point 6, sp(A_1, A_5, 2) ∈ M_{1,6}, i.e. A_1 believes that the length of the shortest path from A_1 to A_5 equals 2, which is wrong. But A_1 sends this information to A_4. Now the lengths of the shortest paths to A_5 computed by agents A_1 and A_4 equal 2 and 3 respectively (i.e. sp(A_1, A_5, 2) ∈ M_{1,7} and sp(A_4, A_5, 3) ∈ M_{4,7}), and both are wrong. Later on, A_1 and A_4 keep exchanging wrong information, increase their shortest paths to A_5 by 2 after each round, and go into an infinite loop.
Fig. 4. The network after the links between A_1 and A_2 and between A_4 and A_5 are down.
The states and outputs of A 1 and A 4 at points 0 → 8 are shown in Fig. 5 and Fig. 6 respectively.
This example shows that
Theorem 3.2 IO-acyclicity and boundedness are not sufficient to guarantee the stabilization of a multiagent system.
As we have pointed out before, the routing example in this paper models the popular RIP routing protocol that has been widely deployed in the Internet. Example 3.10 shows that RIP is not stabilizing. In the configuration of Fig. 4, the routers at nodes A_1 and A_4 go into a loop and keep increasing the length of their shortest paths to A_5, starting from 2, without bound. This is because the router at node A_1 believes that the shortest path from it to A_5 goes through A_4, while the router at A_4 believes that the shortest path from it to A_5 goes through A_1. Neither of them realizes that there is no longer any connection between them and A_5. The above theorem generalizes this insight to multiagent systems. The conclusion is that in general it is not possible for an agent to get correct information about its environment if this agent cannot sense all the changes in the environment by itself and has to rely on communications with other agents. This is true even if all the agents involved are honest and do not hide their information.
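The loop can be reproduced with a few lines of Python. The sketch below is a toy rendering (with made-up names) of the situation after both failures: A_1 and A_4 only have each other as live neighbours, and every round of exchanging distance vectors increases their estimated distance to A_5 by 2 instead of ever detecting that A_5 is unreachable; this is the classic count-to-infinity behaviour of distance-vector routing.

```python
def dv_update(neighbour_view):
    # Recompute the distance to A5 from the single remaining neighbour's claim:
    # one hop to the neighbour plus whatever the neighbour reports.
    return {"A5": neighbour_view["A5"] + 1}

a1, a4 = {"A5": 2}, {"A5": 3}       # the (wrong) views at points 6 and 7
for _ in range(3):                  # a few more exchange rounds
    a1 = dv_update(a4)              # A_4 sends its vector to A_1
    a4 = dv_update(a1)              # A_1 sends its vector to A_4
    print(a1["A5"], a4["A5"])       # 4 5, then 6 7, then 8 9, ... without bound
```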
Fig. 5. The states and outputs of A_1 at points 0 to 8.
Obviously, if a multiagent system is IO-acyclic and IO-finite, every agent will obtain complete and correct information after finitely many exchanges of information with other agents, and the system is stabilizing. Hence we have:
Theorem 3.3
IO-acyclic and IO-finite multiagent systems are stabilizing.
Appendix A Proof of theorem 3.1
First it is clear that the following lemma holds.
Lemma Appendix A.1
Let M be a stable model of a logic program P . For each atom a: a ∈ M iff there is a clause a ← Bd in P such that M |= Bd.
Given an IO-acyclic and bounded multiagent system A = (A 1 , . . . , A n ). By proposition 3.1, IDB A is acyclic. Let
R = △ 0 → · · · → △ h → . . .
be a run of A such that after point h there is no more change in the environment. The stabilized environment of R is EDB = EDB 1,h ∪ · · · ∪ EDB n,h . Let [[P A ]] be the stable model of P A in the stabilized environment of R, i.e. the stable model of IDB A ∪ EDB. The height of an atom a in the atom dependency graph of P A denoted by π(a) is the length of a longest path from a to other atoms in the atom dependency graph of P A . Since IDB A is acyclic, there is no infinite path in the atom dependency graph of P A . From the boundedness of IDB A , π(a) is finite.
Theorem 3.1 follows directly from the following lemma.
Lemma Appendix A.2
For every atom a, R is convergent for a and Conv(R, a) = true iff a ∈ [[P_A]].
It is easy to see that lemma Appendix A.2 follows immediately from the following lemma.
Lemma Appendix A.3
For every atom a, there is a point k ≥ h, such that at every point p ≥ k in R, for
every A i such that a ∈ HB i , a ∈ M i,p iff a ∈ [[P A ]].
Proof
We prove by induction on π(a). For each i, let HBI i = head(IDB i ).
• Base case: π(a) = 0 (a is a leaf in the dependency graph of P A ).
Let A i be an agent with a ∈ HB i . There are three cases:
1. a ∈ HBI i . There must be a clause of the form a ← in IDB i . a ← is also in IDB A . At every point m ≥ 0, a ∈ M i,m and a ∈ [[P A ]]. 2. a ∈ HBE i . There is no change in the environment after h, at every point
k ≥ h, a ∈ M i,k iff a ∈ EDB i,k iff a ∈ [[P A ]].
3. a ∈ HIN_i. There must be an agent A_j such that D(i, j) ≠ ∅ and a ∈ HBE_j ∪ HBI_j. By Definition 3.5 of a run, there must be a point p ≥ h at which there is a transition △_p --(j→i)--> △_{p+1}, in which A_j sends A_i the set S = D(i, j) ∩ M_{j,p}. Since a ∈ D(i, j), a ∈ M_{i,p+1} iff a ∈ IN_{i,p+1} iff a ∈ M_{j,p}. As shown in 1 and 2, at every point k ≥ h, for every A_j such that a ∈ HBI_j ∪ HBE_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. So at every point k ≥ p, a ∈ M_{i,k+1} iff a ∈ [[P_A]].
We have proved that for each A_i such that a ∈ HB_i there is a point p_i such that at every point k ≥ p_i, a ∈ M_{i,k} iff a ∈ [[P_A]]. Take p = max(p_1, . . . , p_n). At every point k ≥ p, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,k} iff a ∈ [[P_A]].
• Inductive case: Suppose the lemma holds for every atom a with π(a) ≤ m, m ≥ 0.
We show that the lemma also holds for a with π(a) = m + 1. Let A_i be an agent with a ∈ HB_i. Clearly a ∉ HBE ⊇ HBE_i. There are two cases:
1. a ∈ HBI_i. Since the atom dependency graph of P_A is acyclic, every child b of a has π(b) ≤ m. By the inductive assumption, for each b there is a point p_b such that at every point k ≥ p_b, b ∈ M_{i,k} iff b ∈ [[P_A]]. The set of children of a in the atom dependency graph of P_A is the same as the set of atoms in the bodies of all clauses of the definition of a. As IDB_A is bounded, a has a finite number of children in the atom dependency graph of P_A and the definition of a is finite. Let p_a be the maximum of all such p_b where b is a child of a. At every point k ≥ p_a, for every child b of a, by the inductive assumption, b ∈ M_{i,k} iff b ∈ [[P_A]]. We prove that a ∈ M_{i,k} iff a ∈ [[P_A]]. By Lemma Appendix A.1, a ∈ M_{i,k} iff there is a rule a ← Bd in P_{i,k} = IDB_i ∪ EDB_{i,k} ∪ IN_{i,k} such that M_{i,k} |= Bd. By the inductive assumption, for every b ∈ atom(Bd), b ∈ M_{i,k} iff b ∈ [[P_A]]. Moreover, a ← Bd is also a rule in P_A. Thus a ∈ M_{i,k} iff there is a rule a ← Bd in P_A such that [[P_A]] |= Bd, iff a ∈ [[P_A]] (by Lemma Appendix A.1).
2. a ∈ HIN_i. As shown in 1, for every A_j such that a ∈ HBI_j there is a point p_j such that at every point k ≥ p_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. Let p be the maximum of all such p_j. Clearly, at every point k ≥ p, for every A_j such that a ∈ HBI_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. Following an argument similar to case 3 in the base case of the proof, there is a point p' ≥ p + 1 such that at every point k ≥ p', a ∈ M_{i,k} iff a ∈ M_{j,k}. It also means that at every point k ≥ p', a ∈ M_{i,k} iff a ∈ [[P_A]].
We have proved that for each A_i such that a ∈ HB_i there is a point p_i such that at every point k ≥ p_i, a ∈ M_{i,k} iff a ∈ [[P_A]]. Take p = max(p_1, . . . , p_n). At every point k ≥ p, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,k} iff a ∈ [[P_A]].
Appendix B Proof of theorem 3.3
Let A be an IO-acyclic and IO-finite multiagent system. Obviously, A is also bounded. Let R be a run of A. By Theorem 3.1, R is convergent. By Lemma Appendix A.3, for every atom a in G_A there is a point k_a such that at every point p ≥ k_a, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,p} iff a ∈ [[P_A]]. As G_A is finite, take k to be the largest of all such k_a for atoms a in G_A. Obviously, at every point p ≥ k, for every agent A_i, M_{i,k} = M_{i,p}. Thus R is strongly convergent. The system is stabilizing, and Theorem 3.3 follows immediately.
| 7,679 |
cs0503028
|
2953061427
|
An information agent is viewed as a deductive database consisting of 3 parts: an observation database containing the facts the agent has observed or sensed from its surrounding environment, an input database containing the information the agent has obtained from other agents, and an intensional database which is a set of rules for computing derived information from the information stored in the observation and input databases. Stabilization of a system of information agents represents a capability of the agents to eventually get correct information about their surrounding despite unpredictable environment changes and the incapability of many agents to sense such changes causing them to have temporary incorrect information. We argue that the stabilization of a system of cooperative information agents could be understood as the convergence of the behavior of the whole system toward the behavior of a "superagent", who has the sensing and computing capabilities of all agents combined. We show that unfortunately, stabilization is not guaranteed in general, even if the agents are fully cooperative and do not hide any information from each other. We give sufficient conditions for stabilization and discuss the consequences of our results.
|
In this paper, communications between agents are based on push-technologies. It would be interesting to see how the results could be extended to multiagent systems whose communication is based on pull-technologies ( @cite_10 , @cite_9 ).
|
{
"abstract": [
"This paper presents ALIAS, an agent architecture based on intelligent logic agents, where the main form of agent reasoning is abduction. The system is particularly suited for solving problems where knowledge is incomplete, where agents may need to make reasonable hypotheses about the problem domain and other agents, and where the raised hypotheses have to be consistent for the overall set of agents. ALIAS agents are pro-active, exhibiting a goal-directed behavior, and autonomous, since each one can solve problems using its own private knowledge base. ALIAS agents are also social, because they are able to interact with other agents, in order to cooperatively solve problems. The coordination mechanisms are modeled by means of LAILA, a logic-based language which allows to express intra-agent reasoning and inter-agent coordination. As an application, we show how LAILA can be used to implement inter-agent dialogues, e.g., for negotiation. In particular, LAILA is well-suited to coordinate the process of negotiation aimed at exchanging resources between agents, thus allowing them to execute the plans to achieve their goals.",
"In multi-agent system, we often face incompleteness of information due to communication failure or other agent's suspension of decisions. To solve the incompleteness, we previously proposed speculative computation using abduction in the context of matter-slave multi-agent systems and gave a procedure in abductive logic programming [14]. In the work, a master agent prepares a default value for a question in advance and it performs speculative computation using the default without waiting for a reply for the question. This computation is effective unless the contradictory reply with the default is returned. However, we find that this mechanism is not sufficient for speculative computation in more general multi-agent systems such that replies can be revised according to other agents' speculative computation. In this paper, we formalize speculative computation with multi-agent belief revision and propose a correct procedure for such computation."
],
"cite_N": [
"@cite_9",
"@cite_10"
],
"mid": [
"1585686429",
"2131552518"
]
}
|
Stabilization of Cooperative Information Agents in Unpredictable Environment: A Logic Programming Approach
|
To operate effectively in a dynamic and unpredictable environment, agents need correct information about the environment. Often only part of this environment can be sensed by the agent herself. As the agent may need information about other parts of the environment that she cannot sense, she needs to cooperate with other agents to get such information. There are many such systems of cooperative information agents operating in the Internet today. A prominent example of such a system is the system of routers that cooperate to deliver messages from one place to another in the Internet. One of the key characteristics of these systems is their resilience in the face of unpredictable changes in their environment and the incapability of many agents to sense such changes, which causes them to have temporarily incorrect information. This is possible because agents in such systems cooperate by exchanging tentative partial results to eventually converge on a correct and consistent global view of the environment. Together they constitute a stabilizing system that allows the individual agents to eventually get a correct view of their surroundings.
Agent communications could be classified into push-based communications and pull-based communications. In the push-based communication, agents periodically send information to specific recipients. Push-based communications are used widely in routing system, network protocols, emails, videoconferencing calls, etc. A key goal of these systems is to guarantee that the agents have a correct view of their surrounding. On the other hand, in the pull-based communication, agents have to send a request for information to other agents and wait for a reply. Until now pull-based communications are the dominant mode of communication in research in multiagent systems, e.g. (Shoham 1993), (Satoh and Yamamoto 2002), (Ciampolini et al. 2003), (Kowalski and Sadri 1999), (Wooldridge 1997), (Wooldridge and Jennings 1995). In this paper, we consider multiagent systems where agent communications are based on push-technologies. A prominent example of a push-based multiagent system is the internet routing system. This paper studies the problem of stabilization of systems of cooperative information agents where an information agent is viewed as a deductive database which consists of 3 parts:
• an observation database containing the facts the agent has observed or sensed from its surrounding environment;
• an input database containing the information the agent was told by other agents; and
• an intensional database, which is a set of rules for computing derived information from the information stored in the observation and input databases.
It turns out that in general, it is not possible to ensure that the agents will eventually have the correct information about the environment even if they honestly exchange information and do not hide any information that is needed by others and every change in the environment is immediately sensed by some of the agents. We also introduce sufficient conditions for stabilization.
The stabilization of distributed protocols has been studied extensively in the literature ( (Dijkstra 1974), (Flatebo et al. 1994), (Schneider 1993)) where agents are defined operationally as automata. Dijkstra (1974) defined a system as stabilizing if it is guaranteed to reach a legitimate state after a finite number of steps regardless of the initial state. The definition of what constitutes a legitimate state is left to individual algorithms. Thanks to the introduction of an explicit notion of environment, we could characterize a legitimate state as a state in which the agents have correct information about their environment. In this sense, we could say that our agents are a new form of situated agents ( (Rosenschein and Kaelbling 1995), (Brooks 1991), (Brooks 1986)) that may sometimes act on wrong information but nonetheless will be eventually situated after getting correct information about their surrounding. Further in our approach, agents are defined as logic programs, and hence it is possible for us to get general results about what kind of algorithms could be implemented in stabilizing multiagent systems in many applications. To the best of our knowledge, we believe that our work is the first work on stabilization of multiagent systems.
The rest of this paper is organized as follows. Basic notations and definitions used in this paper are briefly introduced in section 2. We give an illustrating example and formalize the problem in section 3. Related works and conclusions are given in section 4. Proofs of theorems are given in Appendices.
Preliminaries: Logic Programs and Stable Models
In this section we briefly introduce the basic notations and definitions that are needed in this paper.
We assume the existence of a Herbrand base HB.
A logic program is a set of ground clauses of the form:
H ← L_1, . . . , L_m
where H is an atom from HB, and L_1, . . . , L_m are literals (i.e., atoms or negations of atoms) over HB, m ≥ 0. H is called the head, and L_1, . . . , L_m the body of the clause. Given a set of clauses S, the set of the heads of clauses in S is denoted by head(S). Note that clauses with variables are considered as a shorthand for the set of all their ground instantiations. Often the variables appearing in a non-ground clause have types that are clear from the context. In such cases these variables are instantiated by ground terms of the corresponding types.
For each atom a, the definition of a is the set of all clauses whose head is a.
A logic program is bounded if the definition of every atom is finite. Let P be an arbitrary logic program. For any set S ⊆ HB, let P^S be the program obtained from P by deleting (1) each rule that has a negative literal ¬B in its body with B ∈ S, and (2) all negative literals in the bodies of the remaining rules. S is a stable model (Gelfond and Lifschitz 1988) of P if S is the least model of P^S.
The atom dependency graph of a logic program P is a graph, whose nodes are atoms in HB and there is an edge from a to b in the graph iff there is a clause in P whose head is a and whose body contains b or ¬b. Note that in the literature (Apt et al. 1988), the direction of the link is from the atom in the body to the head of a clause. We reverse the direction of the link for the ease of definition of acyclicity using the atom dependency graph.
An atom b is said to be relevant to an atom a if there is a path from a to b in the atom dependency graph.
A logic program P is acyclic iff there is no infinite path in its atom dependency graph. It is well known that an acyclic logic program has a unique stable model.
Fig. 1. A network example.
Example 3.1
Consider a network of five nodes A_1, . . . , A_5, connected as shown in Fig. 1, with an information agent located at each node.
The problem for each agent is to find the shortest paths from her node to other nodes. The environment information an agent can sense is the availability of links connecting to her node. The agents use an algorithm known as "distance vector algorithm" ( (Bellman 1957), (Ford and Fulkerson 1962)) to find the shortest paths from their nodes to other nodes. If the destination is directly reachable by a link, the cost is 1. If the destination is not directly reachable, an agent needs information from its neighbors about their shortest paths to the destination. The agent will select the route to the destination through a neighbor who offers a shortest path to the destination among the agent's neighbors. Thus at any point of time, each agent needs three kinds of information:
• The information about the environment, that the agent can acquire with her sensing capability. In our example, agent A 1 could sense whether the links connecting her and her neighbors A 2 , A 4 are available. • The algorithm the agent needs to solve her problem. In our example the algorithm for agent A 1 is represented by the following clauses: 1
sp(A_1, A_1, 0) ←
sp(A_1, y, d) ← spt(A_1, y, x, d)
spt(A_1, y, x, d+1) ← link(A_1, x), sp(x, y, d), not spl(A_1, y, d+1)
spl(A_1, A_1, d+1) ←
spl(A_1, y, d+1) ← link(A_1, x), sp(x, y, d'), d' < d
where link(A_i, A_j)
is true iff there a link from A i to A j in the network and the link is intact. Links are undirected, i.e. we identify link(A i , A j ) and link(A j , A i ). sp(A 1 , y, d) is true iff a shortest path from A 1 to y has length d spt(A 1 , y, x, d) is true iff the length of shortest paths from A 1 to y is d and there is a shortest path from A 1 to y that goes through x as the next node after A 1 spl(A 1 , y, d) is true iff there is a path from A 1 to y whose length is less than d.
• The information the agent needs from other agents. For agent A 1 to calculate the shortest paths from her node to say A 3 , she needs the information about the length of the shortest paths from her neighbors A 2 , and A 4 to A 3 , that means she needs to know the values d, d ′ such that sp(A 2 , A 3 , d), sp(A 4 , A 3 , d ′ ) hold.
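The distance-vector behaviour described above can be illustrated by a small, hypothetical simulation (this is not the agents' logic programs, only the underlying Bellman–Ford-style update): each node recomputes its distance estimates from its live links and from the vectors last received from its neighbours.

```python
INF = float("inf")

def recompute(node, links, received):
    """One 'push' round seen from `node`: links is the set of neighbours whose
    link to `node` is up, received maps a neighbour to its distance vector."""
    dist = {node: 0}
    for z in links:
        for y, d in received.get(z, {}).items():
            if 1 + d < dist.get(y, INF):
                dist[y] = 1 + d
    return dist

# Network of Fig. 1 (undirected links); run a few synchronous rounds.
links = {"A1": {"A2", "A4"}, "A2": {"A1", "A3", "A5"}, "A3": {"A2", "A5"},
         "A4": {"A1", "A5"}, "A5": {"A2", "A3", "A4"}}
vectors = {n: {n: 0} for n in links}
for _ in range(len(links)):
    vectors = {n: recompute(n, links[n], vectors) for n in links}
print(vectors["A1"])  # e.g. {'A1': 0, 'A2': 1, 'A4': 1, 'A3': 2, 'A5': 2}
```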
Problem Formalization
The agents are situated in the environment. They may have different accessibility to the environment depending on their sensing capabilities. The environment is represented by a set of (ground) environment atoms, whose truth values may change in an unpredictable way. An agent is a tuple A = (IDB, HBE, HIN, δ), where
• IDB, the intensional database, is an acyclic logic program.
• HBE is the set of all (ground) environment atoms whose truth values the agent can sense, i.e. a ∈ HBE iff A can discover instantly any change in the truth value of a and update her extensional database accordingly.
• HIN is the set of all atoms, called input atoms, whose truth values the agent must obtain from other agents. No atom in HIN ∪ HBE appears in the head of a clause in IDB, and HIN ∩ HBE = ∅.
• δ is the initial state of the agent.
A state of A is a pair σ = (EDB, IN) where EDB ⊆ HBE is the extensional database (that means a ∈ EDB iff a is true) and IN ⊆ HIN, the input database of A, represents the set of information A has obtained from other agents, i.e. a ∈ IN iff A was told that a is true.
Given a state σ = (EDB, IN ), the stable model of A = (IDB, HBE, HIN, δ) at σ is defined as the stable model of IDB ∪ EDB ∪ IN . Note that δ and σ could be different states.
Example 3.2 (Continuation of the network routing example)
Imagine that initially the agents have not sent each other any information and all links are intact. In this situation, agent A 1 is represented as follows:
• IDB 1 contains the clauses shown in Example 3.1.
• HBE_1 = {link(A_1, A_2), link(A_1, A_4)}
• HIN_1 consists of the ground atoms of the form sp(A_2, Y, D) and sp(A_4, Y, D), where Y ∈ {A_2, ..., A_5} and D is a positive integer.
• The initial state is δ_1 = (EDB_{1,0}, IN_{1,0}) where EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)} and IN_{1,0} = ∅.

Definition 3.3
A cooperative multiagent system is a collection of n agents (A_1, ..., A_n), with A_i = (IDB_i, HBE_i, HIN_i, δ_i),
such that the following conditions are satisfied
• for each atom a, if a ∈ head(IDB_i) ∩ head(IDB_j) then a has the same definition in IDB_i and IDB_j;
• for each agent A_i, HIN_i ⊆ ⋃_{j=1..n} (head(IDB_j) ∪ HBE_j);
• no environment atom appears in the head of a clause in the intensional database of any agent, i.e. for all i, j: HBE_i ∩ head(IDB_j) = ∅.
For each agent A_i let HB_i = head(IDB_i) ∪ HBE_i ∪ HIN_i.
Agent Communication and Sensing
Let A_i = (IDB_i, HBE_i, HIN_i, δ_i) for 1 ≤ i ≤ n. We say that A_i depends on A_j if A_i needs input from A_j, i.e. HIN_i ∩ (head(IDB_j) ∪ HBE_j) ≠ ∅. The dependency of A_i on A_j is defined to be the set D(i, j) = HIN_i ∩ (head(IDB_j) ∪ HBE_j).
As we have mentioned before, the mode of communication for our agents corresponds to the "push" technology. Formally, it means that if A_i depends on A_j then A_j will periodically send A_i a set S = D(i, j) ∩ M_j, where M_j is the stable model of A_j. When A_i obtains S, she knows that each atom a ∈ D(i, j) \ S is false with respect to M_j. Therefore she updates her input database IN_i to Upa_{i,j}(IN_i, S), defined as follows:
Upa_{i,j}(IN_i, S) = (IN_i \ D(i, j)) ∪ S
Thus her state changes from σ_i = (EDB_i, IN_i) to σ'_i = (EDB_i, Upa_{i,j}(IN_i, S)) accordingly.
An environment change is represented by a pair C = (T, F) where T (resp. F) contains the atoms whose truth values have changed from false (resp. true) to true (resp. false). Given an environment change (T, F), what A_i can sense of this change is captured by the pair (T_i, F_i) where T_i = T ∩ HBE_i and F_i = F ∩ HBE_i. Hence, when a change C = (T, F) occurs in the environment, agent A_i updates her sensing database EDB_i to Upe_i(EDB_i, C), defined as follows:
Upe_i(EDB_i, C) = (EDB_i \ F_i) ∪ T_i
The state of agent A_i changes from σ_i = (EDB_i, IN_i) to σ'_i = (Upe_i(EDB_i, C), IN_i) accordingly.
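Both update operations are plain set operations; the following sketch (function and variable names are ours) mirrors the two definitions.

```python
def upa(IN_i, D_ij, S):
    """Upa_{i,j}(IN_i, S): replace the part of IN_i that comes from A_j
    (the dependency D(i, j)) by the freshly received set S."""
    return (IN_i - D_ij) | S

def upe(EDB_i, HBE_i, T, F):
    """Upe_i(EDB_i, C) for C = (T, F), restricted to what A_i can sense."""
    T_i, F_i = T & HBE_i, F & HBE_i
    return (EDB_i - F_i) | T_i

# Example 3.2: A_1 senses that link(A1,A2) goes down.
EDB1 = {"link(A1,A2)", "link(A1,A4)"}
HBE1 = {"link(A1,A2)", "link(A1,A4)"}
print(upe(EDB1, HBE1, set(), {"link(A1,A2)"}))  # {'link(A1,A4)'}
```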
Semantics of Multiagent Systems
Let A = (A 1 , . . . , A n ) with A i = (IDB i , HBE i , HIN i , δ i ) be a multiagent system. (δ 1 , . . . , δ n ) is called the initial state of A. A state of A is defined as △ = (σ 1 , . . . , σ n ) such that σ i is a state of agent A i .
There are two types of transitions in a multiagent system. An environment transition happens when there is a change in the environment which is sensed by a set of agents and causes these agents to update their extensional databases accordingly. A communication transition happens when an agent sends information to another agent and causes the latter to update her input database accordingly.
For an environment change C = (T, F), let S_C be the set of agents which can sense parts of C, i.e. S_C = {A_i | HBE_i ∩ (T ∪ F) ≠ ∅}.

Definition 3.4
Let Δ = (σ_1, ..., σ_n) and Δ' = (σ'_1, ..., σ'_n) be states of A with σ_i = (EDB_i, IN_i) and σ'_i = (EDB'_i, IN'_i).
1. An environment transition Δ →_C Δ' caused by an environment change C = (T, F) is defined as follows:
(a) for every agent A_k such that A_k ∉ S_C: σ_k = σ'_k, and
(b) for each agent A_i ∈ S_C: EDB'_i = Upe_i(EDB_i, C) and IN'_i = IN_i.
2. A communication transition Δ →_{j→i} Δ' caused by agent A_j sending information to agent A_i, where A_i depends on A_j, is defined as follows:
(a) for all k such that k ≠ i: σ_k = σ'_k;
(b) EDB'_i = EDB_i and IN'_i = Upa_{i,j}(IN_i, S) where S = D(i, j) ∩ M_j and M_j is the stable model of A_j at σ_j.
We often simply write Δ → Δ' if there is a transition Δ →_C Δ' or Δ →_{j→i} Δ'.
Definition 3.5 A run of a multiagent system A is an infinite sequence
△ 0 → △ 1 → . . . → △ m → . . .
such that
• Δ_0 is the initial state of A, and for all agents A_i, A_j such that A_i depends on A_j the following condition is satisfied: for each h, there is a k ≥ h such that Δ_k →_{j→i} Δ_{k+1}. (This condition captures the idea that agents periodically send the needed information to other agents.)
• There is a point h such that at every point k ≥ h in the run there is no more environment change.
For a run R = △ 0 → △ 1 → . . . → △ k → . . . where △ k = (σ 1,k , . . . , σ n,k ) we often refer to the stable model of A i at state σ i,k as the stable model of A i at point k and denote it by M i,k .
Example 3.3
Consider the following multiagent system
A = (A_1, A_2) where
IDB_1 = {a ← b, c ;  f ← a}        IDB_2 = {b ← a, d ;  b ← e}
HBE_1 = {c}                        HBE_2 = {d, e}
HIN_1 = {b}                        HIN_2 = {a}
EDB_{1,0} = {c}                    EDB_{2,0} = {d, e}
IN_{1,0} = ∅                       IN_{2,0} = ∅
Consider the following run R, where the only environment change occurs at point 2 such that the truth value of e becomes false:
Δ_0 →_{2→1} Δ_1 →_{1→2} Δ_2 →_{(∅,{e})} Δ_3 →_{1→2} Δ_4 →_{2→1} Δ_5 → ...
The states and stable models of A 1 and A 2 at points 0, 1, 2, 3, and 4 are as follows
k    A_1: EDB   IN     Stable model       A_2: EDB   IN     Stable model
0    {c}        ∅      {c}                {d, e}     ∅      {b, d, e}
1    {c}        {b}    {a, b, c, f}       {d, e}     ∅      {b, d, e}
2    {c}        {b}    {a, b, c, f}       {d, e}     {a}    {a, b, d, e}
3    {c}        {b}    {a, b, c, f}       {d}        {a}    {a, b, d}
4    {c}        {b}    {a, b, c, f}       {d}        {a}    {a, b, d}
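Because the programs of Example 3.3 contain no negation, each stable model is simply the least model of IDB_i ∪ EDB_i ∪ IN_i, so the table above can be replayed mechanically. The following sketch (our own encoding of the example) reproduces the table.

```python
def least_model(rules, facts):
    """Least model of a definite program given as (head, body) pairs plus facts."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

IDB1 = [("a", ["b", "c"]), ("f", ["a"])]
IDB2 = [("b", ["a", "d"]), ("b", ["e"])]
EDB1, IN1 = {"c"}, set()          # agent A_1
EDB2, IN2 = {"d", "e"}, set()     # agent A_2

M1 = least_model(IDB1, EDB1 | IN1)   # point 0: {c}
M2 = least_model(IDB2, EDB2 | IN2)   # point 0: {b, d, e}
IN1 = {"b"} & M2                     # A_2 sends to A_1 (D(1,2) = {b})
M1 = least_model(IDB1, EDB1 | IN1)   # point 1: {a, b, c, f}
IN2 = {"a"} & M1                     # A_1 sends to A_2 (D(2,1) = {a})
M2 = least_model(IDB2, EDB2 | IN2)   # point 2: {a, b, d, e}
EDB2 = EDB2 - {"e"}                  # environment change (∅, {e})
M2 = least_model(IDB2, EDB2 | IN2)   # point 3: {a, b, d}
print(sorted(M1), sorted(M2))
```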
Example 3.4 (Continuation of example 3.2)
Consider the following run R of the multiagent system given in Example 3.2.
Δ_0 →_{2→1} Δ_1 →_{(∅,{link(A_1,A_2)})} Δ_2 → ...
Initially, all links are intact and all inputs of agents are empty, i.e. IN i,0 = ∅ for i = 1, . . . , 5. At point 0 in the run, agent A 2 sends to agent A 1 information about shortest paths from her to other agents. At point 1 in the run, the link between A 1 and A 2 is down.
The information (output) an agent needs to send to other agents consists of shortest paths from her to other agents. Thus from the stable model of an agent we are interested only in this output.
Let
SP_{i,k} be the set {sp(A_i, Y, D) | sp(A_i, Y, D) ∈ M_{i,k}} where M_{i,k} is the stable model of A_i at point k. SP_{i,k} denotes the output of A_i at point k. It is easy to see that if there is a transition Δ_k →_{j→i} Δ_{k+1}, then A_j sends to A_i the set S = D(i, j) ∩ M_{j,k} = SP_{j,k}.
At point 0, A 1 and A 2 have the following states and outputs:
EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)}, IN_{1,0} = ∅, SP_{1,0} = {sp(A_1, A_1, 0)}
EDB_{2,0} = {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}, IN_{2,0} = ∅, SP_{2,0} = {sp(A_2, A_2, 0)}
A_2 sends S to A_1 in the transition Δ_0 →_{2→1} Δ_1, where S = SP_{2,0} = {sp(A_2, A_2, 0)}. Thus
IN_{1,1} = Upa_{1,2}(IN_{1,0}, S) = Upa_{1,2}(∅, S) = S = {sp(A_2, A_2, 0)}
The environment change C = (∅, {link(A 1 , A 2 )}) at point 1 is sensed by A 1 and A 2 . The states of A 1 and A 2 are changed as follows:
IN_{1,2} = IN_{1,1}
EDB_{1,2} = Upe_1(EDB_{1,1}, C) = (EDB_{1,1} \ {link(A_1, A_2)}) ∪ ∅ = {link(A_1, A_4)}
IN_{2,2} = IN_{2,1}
EDB_{2,2} = Upe_2(EDB_{2,1}, C) = (EDB_{2,1} \ {link(A_1, A_2)}) ∪ ∅ = {link(A_2, A_3), link(A_2, A_5)}
The following tables show the states and outputs of A 1 and A 2 at points 0, 1, and 2 respectively.
A_1:
k    EDB                                            IN                  SP
0    {link(A_1,A_2), link(A_1,A_4)}                 ∅                   {sp(A_1,A_1,0)}
1    {link(A_1,A_2), link(A_1,A_4)}                 {sp(A_2,A_2,0)}     {sp(A_1,A_1,0), sp(A_1,A_2,1)}
2    {link(A_1,A_4)}                                {sp(A_2,A_2,0)}     {sp(A_1,A_1,0)}

A_2:
k    EDB                                            IN                  SP
0    {link(A_2,A_1), link(A_2,A_3), link(A_2,A_5)}  ∅                   {sp(A_2,A_2,0)}
1    {link(A_2,A_1), link(A_2,A_3), link(A_2,A_5)}  ∅                   {sp(A_2,A_2,0)}
2    {link(A_2,A_3), link(A_2,A_5)}                 ∅                   {sp(A_2,A_2,0)}
Stabilization
Consider a superagent whose sensing capability and problem-solving capability are the combination of the sensing capabilities and problem-solving capabilities of all the agents, i.e. this agent can sense any change in the environment and her intensional database is the union of the intensional databases of all the agents. Formally, the superagent of a multiagent system
A = (A 1 , . . . , A n ) where A i = (IDB i , HBE i , HIN i , δ i ), δ i = (EDB i , IN i )
is represented by
P_A = (IDB_A, δ) where
• IDB_A = IDB_1 ∪ · · · ∪ IDB_n
• δ, the initial state of P_A, is equal to EDB_1 ∪ · · · ∪ EDB_n
The superagent actually represents the multiagent system in the ideal case where each agent has obtained the correct information for its input atoms.
Example 3.5 (Continuation of Example 3.3)
Consider the multiagent system in Example 3.3. At point 0, the superagent P A is represented as follows:
• IDB A consists of the following clauses:
a ← b, c        f ← a        b ← a, d        b ← e
• δ = {c, d, e}.
Example 3.6 (Continuation of Example 3.4) Consider the multiagent system in Example 3.4. Initially, when all links between nodes are intact, the superagent P A is represented as follows:
• IDB A consists of the following clauses:
sp(x, x, 0) ←
sp(x, y, d) ← spt(x, y, z, d)
spt(x, y, z, d+1) ← link(x, z), sp(z, y, d), not spl(x, y, d+1)
spl(x, x, d+1) ←
spl(x, y, d+1) ← link(x, z), sp(z, y, d'), d' < d
• The initial state is δ = {link(A_1, A_2), link(A_1, A_4), link(A_2, A_3), link(A_2, A_5), link(A_3, A_5), link(A_4, A_5)}.
Note that the possible values of variables x, y, z are A 1 , A 2 , A 3 , A 4 , A 5 .
Definition 3.6
Let A be a multiagent system. The I/O graph of A denoted by G A is a graph obtained from the atom dependency graph of its superagent's intensional database IDB A by removing all nodes that are not relevant for any input atom in HIN 1 ∪ · · · ∪ HIN n .
A is IO-acyclic if there is no infinite path in its I/O graph G_A. A is bounded if IDB_A is bounded. A is IO-finite if its I/O graph is finite.

Example 3.7
The atom dependency graph of IDB_A and the I/O graph G_A of the multiagent system in Examples 3.3 and 3.5 are given in Fig. 2. It is obvious that the multiagent system in Examples 3.3 and 3.5 is bounded but not IO-acyclic, and that the multiagent system in Examples 3.1, 3.2, 3.4 and 3.6 is IO-acyclic and bounded.
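The atom dependency graph, relevance, and the I/O graph can be checked mechanically for small propositional systems; here is a rough sketch (our own encoding; edges go from the head to each body atom, as in the paper) applied to the superagent of Examples 3.3 and 3.5.

```python
from itertools import chain

def dependency_edges(clauses):
    """Edges of the atom dependency graph: head -> each (positive or negated) body atom."""
    return {(head, b) for head, body in clauses for b in body}

def relevant(edges, start):
    """All atoms reachable from `start`, i.e. relevant to `start`."""
    seen, stack = set(), [start]
    while stack:
        a = stack.pop()
        for (h, b) in edges:
            if h == a and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def has_cycle(edges, nodes):
    """A finite graph has an infinite path iff it has a cycle."""
    return any(a in relevant(edges, a) for a in nodes)

# Superagent of Examples 3.3/3.5: IDB_A = {a <- b,c ; f <- a ; b <- a,d ; b <- e}.
IDB_A = [("a", ["b", "c"]), ("f", ["a"]), ("b", ["a", "d"]), ("b", ["e"])]
HIN = {"a", "b"}                      # input atoms of the two agents
E = dependency_edges(IDB_A)
# I/O graph: the input atoms together with everything relevant to them.
io_nodes = HIN | set(chain.from_iterable(relevant(E, x) for x in HIN))
print(has_cycle(E, io_nodes))         # True: a -> b -> a, so the system is not IO-acyclic
```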
Proposition 3.1 If a multiagent system A is IO-acyclic then IDB A is acyclic.
Proof
Suppose IDB_A is not acyclic. Then there is an infinite path η in its atom dependency graph starting from some atom a. There is some agent A_i such that a ∈ HB_i. Since IDB_i is acyclic, every path in its atom dependency graph is finite, so η must leave A_i's atom dependency graph through some atom b ∈ HIN_i. Clearly, starting from b, all atoms in η are relevant to b. The infinite suffix of η starting from b is therefore a path in the I/O graph G_A. Hence G_A is not acyclic. Contradiction!

Definition 3.7
Let R = Δ_0 → ... → Δ_k → ... be a run and M_{i,k} be the stable model of A_i at point k.
1. R is convergent for an atom a if either of the following conditions is satisfied:
• There is a point h such that at every point k ≥ h, for every agent A_i with a ∈ HB_i = head(IDB_i) ∪ HBE_i ∪ HIN_i: a ∈ M_{i,k}. In this case we write Conv(R, a) = true.
• There is a point h such that at every point k ≥ h, for every agent A_i with a ∈ HB_i: a ∉ M_{i,k}. In this case we write Conv(R, a) = false.
2. R is convergent if it is convergent for each atom.
3. R is strongly convergent if it is convergent and there is a point h such that at every point k ≥ h, for every agent A_i, M_{i,k} = M_{i,h}.
It is easy to see that strong convergence implies convergence. For a convergent run R, define its convergence model as Conv(R) = {a | Conv(R, a) = true}.

Definition 3.8
• A multiagent system is said to be weakly stabilizing if every run R is convergent and its convergence model Conv(R) is a stable model of P_A in the stabilized environment of R, i.e. Conv(R) is a stable model of IDB_A ∪ EDB where EDB is the stabilized environment of R.
• A multiagent system is said to be stabilizing if it is weakly stabilizing and all of its runs are strongly convergent.
Theorem 3.1 IO-acyclic and bounded multiagent systems are weakly stabilizing.
Proof
See Appendix A.
Unfortunately, the above theorem does not hold for more general classes of multiagent systems, as the following example shows.
Example 3.8 (Continuation of example 3.3 and 3.5) Consider the multiagent system A and run R in Example 3.3. It is obvious that A is bounded but not IO-acyclic.
For every point k ≥ 4,
M_{1,k} = {a, b, c, f} and M_{2,k} = {a, b, d}. Conv(R) = {a, b, c, d, f}. The stabilized environment of R is EDB = {c, d}.
The stable model of P A in the stabilized environment of R is {c, d}, which is not the same as Conv(R). Hence the system is not weakly stabilizing.
Boundedness is very important for the weak stabilization of multiagent systems. Consider a multiagent system in the following example which is IO-acyclic, but not bounded.
Example 3.9
Consider the following multiagent system
A = (A_1, A_2) where
IDB_1 = {q ← ¬r(x) ;  s(x) ← r(x)}        IDB_2 = {r(x+1) ← s(x) ;  r(0) ←}
HBE_1 = {}                                 HBE_2 = {}
HIN_1 = {r(0), r(1), ...}                  HIN_2 = {s(0), s(1), ...}
EDB_{1,0} = ∅, IN_{1,0} = ∅                EDB_{2,0} = ∅, IN_{2,0} = ∅
Since HBE = HBE 1 ∪ HBE 2 = ∅, for every run R the stabilized environment of R is empty. The stable model of P A in the stabilized environment of R is the set {r(0), r(1), . . . }∪{s(0), s(1), . . . }. It is easy to see that for each run, the agents need to exchange infinitely many messages to establish all the values of r(x). Hence for every run R, for every point h ≥ 0 in the run: q ∈ M 1,h , but q is not in the stable model of P A in the stabilized environment of R. Thus the system is not weakly stabilizing.
Are the boundedness and IO-acyclicity sufficient to guarantee the stabilization of a multiagent system? The following example shows that they are not.
Example 3.10 (Continuation of Example 3.4 and 3.6)
Consider the multiagent system in Example 3.2. Consider the following run R with no environment change after point 6.
Δ_0 →_{5→2} Δ_1 →_{5→4} Δ_2 →_{2→1} Δ_3          (1)
Δ_3 →_{(∅,{link(A_1,A_2)})} Δ_4 →_{4→1} Δ_5        (2)
Δ_5 →_{(∅,{link(A_4,A_5)})} Δ_6 →_{1→4} Δ_7        (3)
Δ_7 →_{4→1} Δ_8 → ...                              (4)
Initially all links in the network are intact. The states and outputs of agents are as follows:
• EDB_{1,0} = {link(A_1, A_2), link(A_1, A_4)}
  EDB_{2,0} = {link(A_2, A_1), link(A_2, A_3), link(A_2, A_5)}
  EDB_{3,0} = {link(A_3, A_2), link(A_3, A_5)}
  EDB_{4,0} = {link(A_4, A_1), link(A_4, A_5)}
  EDB_{5,0} = {link(A_5, A_2), link(A_5, A_3), link(A_5, A_4)}
• IN_{i,0} = ∅ for i = 1, ..., 5.
• SP_{i,0} = {sp(A_i, A_i, 0)} for i = 1, ..., 5.
Recall that SP i,k denotes the output of A i at point k and is defined as follows:
SP i,k = {sp(A i , Y, D)|sp(A i , Y, D) ∈ M i,k }
The following transitions occur in R:
• At point 0, A_5 sends SP_{5,0} = {sp(A_5, A_5, 0)} to A_2. This causes the following changes in the input and output of A_2:
IN_{2,1} = {sp(A_5, A_5, 0)}, SP_{2,1} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)}
• At point 1, A_5 sends SP_{5,1} = {sp(A_5, A_5, 0)} to A_4. This causes the following changes in the input and output of A_4:
IN_{4,2} = {sp(A_5, A_5, 0)}, SP_{4,2} = {sp(A_4, A_4, 0), sp(A_4, A_5, 1)}
• At point 2, A_2 sends SP_{2,2} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)} to A_1. This causes the following changes in the input and output of A_1:
IN_{1,3} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1)}, SP_{1,3} = {sp(A_1, A_1, 0), sp(A_1, A_2, 1), sp(A_1, A_5, 2)}
• At point 3, the link between A_1 and A_2 goes down, as shown in Fig. 3; this change is sensed by A_1 and A_2, which update their extensional databases accordingly.
• At point 4, A_4 sends SP_{4,4} = {sp(A_4, A_4, 0), sp(A_4, A_5, 1)} to A_1. This causes the following changes in the input and output of A_1:
IN_{1,5} = {sp(A_2, A_2, 0), sp(A_2, A_5, 1), sp(A_4, A_4, 0), sp(A_4, A_5, 1)}
SP_{1,5} = {sp(A_1, A_1, 0), sp(A_1, A_4, 1), sp(A_1, A_5, 2)}
• At point 5, the link between A_4 and A_5 goes down, as shown in Fig. 4; this change is sensed by A_4 and A_5. Note that at point 6 we still have sp(A_1, A_5, 2) ∈ M_{1,6}, i.e. A_1 believes that the length of the shortest path from A_1 to A_5 is 2, which is now wrong. Nevertheless, A_1 sends this information to A_4. Now the lengths of the shortest paths to A_5 believed by A_1 and A_4 are 2 and 3 respectively (i.e. sp(A_1, A_5, 2) ∈ M_{1,7} and sp(A_4, A_5, 3) ∈ M_{4,7}), and both are wrong. Later on, A_1 and A_4 keep exchanging wrong information, increasing their shortest-path lengths to A_5 by 2 after each round, and thus go into an infinite loop.
Fig. 4. The network after the links A_1–A_2 and A_4–A_5 are down.
The states and outputs of A_1 and A_4 at points 0 to 8 are shown in Fig. 5 and Fig. 6 respectively.
This example shows that
Theorem 3.2 IO-acyclicity and boundedness are not sufficient to guarantee the stabilization of a multiagent system.
As we have pointed out before, the routing example in this paper models the popular RIP routing protocol that is widely deployed in the Internet. Example 3.10 shows that RIP is not stabilizing. In configuration 4, the routers at the nodes A_1 and A_4 go into a loop and continuously increase the length of their shortest paths to A_5 from 2 towards infinity. This is because the router at node A_1 believes that its shortest path to A_5 goes through A_4, while the router at A_4 believes that its shortest path to A_5 goes through A_1. Neither of them realizes that there is no longer any connection between them and A_5. The above theorem generalizes this insight to multiagent systems. The conclusion is that, in general, it is not possible for an agent to get correct information about its environment if this agent cannot sense all the changes in the environment by itself and has to rely on communication with other agents. This is true even if all the agents involved are honest and do not hide their information.
Fig. 5 (fragment). States and outputs of A_1:
k    EDB                              IN     SP
0    {link(A_1,A_2), link(A_1,A_4)}   ∅      {sp(A_1,A_1,0)}
1    {link(A_1,A_2), link(A_1,A_4)}   ∅      {sp(A_1,A_1,0)}
2    {link(A_1,A_2), link(A_1,A_4)}   ∅      {sp(A_1,A_1,0)}
3    ... (the remaining rows are cut off)
Obviously, if a multiagent system is IO-acyclic and IO-finite, every agent obtains complete and correct information after finitely many exchanges of information with the other agents, and the system is stabilizing. Hence:

Theorem 3.3
IO-acyclic and IO-finite multiagent systems are stabilizing.
Appendix A Proof of theorem 3.1
First it is clear that the following lemma holds.
Lemma Appendix A.1
Let M be a stable model of a logic program P . For each atom a: a ∈ M iff there is a clause a ← Bd in P such that M |= Bd.
Let A = (A_1, ..., A_n) be an IO-acyclic and bounded multiagent system. By Proposition 3.1, IDB_A is acyclic. Let
R = △ 0 → · · · → △ h → . . .
be a run of A such that after point h there is no more change in the environment. The stabilized environment of R is EDB = EDB 1,h ∪ · · · ∪ EDB n,h . Let [[P A ]] be the stable model of P A in the stabilized environment of R, i.e. the stable model of IDB A ∪ EDB. The height of an atom a in the atom dependency graph of P A denoted by π(a) is the length of a longest path from a to other atoms in the atom dependency graph of P A . Since IDB A is acyclic, there is no infinite path in the atom dependency graph of P A . From the boundedness of IDB A , π(a) is finite.
Theorem 3.1 follows directly from the following lemma.
Lemma Appendix A.2
For every atom a, R is convergent for a, and Conv(R, a) = true iff a ∈ [[P_A]].
It is easy to see that lemma Appendix A.2 follows immediately from the following lemma.
Lemma Appendix A.3
For every atom a, there is a point k ≥ h, such that at every point p ≥ k in R, for
every A i such that a ∈ HB i , a ∈ M i,p iff a ∈ [[P A ]].
Proof
We prove by induction on π(a). For each i, let HBI i = head(IDB i ).
• Base case: π(a) = 0 (a is a leaf in the dependency graph of P A ).
Let A i be an agent with a ∈ HB i . There are three cases:
1. a ∈ HBI_i. There must be a clause of the form a ← in IDB_i; this clause is also in IDB_A. At every point m ≥ 0, a ∈ M_{i,m} and a ∈ [[P_A]].
2. a ∈ HBE_i. Since there is no change in the environment after h, at every point k ≥ h, a ∈ M_{i,k} iff a ∈ EDB_{i,k} iff a ∈ [[P_A]].
3. a ∈ HIN_i. There must be an agent A_j such that D(i, j) ≠ ∅ and a ∈ HBE_j ∪ HBI_j. By Definition 3.5 of a run, there must be a point p ≥ h such that there is a transition Δ_p →_{j→i} Δ_{p+1} where S = D(i, j) ∩ M_{j,p}. Since a ∈ D(i, j), a ∈ M_{i,p+1} iff a ∈ IN_{i,p+1} iff a ∈ M_{j,p}. As shown in cases 1 and 2, at every point k ≥ h, for every A_j such that a ∈ HBI_j ∪ HBE_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. So at every point k ≥ p, a ∈ M_{i,k+1} iff a ∈ [[P_A]].
We have proved that for each A_i such that a ∈ HB_i there is a point p_i such that at every point k ≥ p_i, a ∈ M_{i,k} iff a ∈ [[P_A]]. Take p = max(p_1, ..., p_n). At every point k ≥ p, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,k} iff a ∈ [[P_A]].
• Inductive case: Suppose the lemma holds for every atom a with π(a) ≤ m, m ≥ 0.
We show that the lemma also holds for every atom a with π(a) = m + 1. Let A_i be an agent with a ∈ HB_i. Clearly a ∉ HBE ⊇ HBE_i. There are two cases:
1. a ∈ HBI_i. The atom dependency graph of P_A is acyclic, so every child b of a has π(b) ≤ m. By the inductive assumption, for each such b there is a point p_b such that at every point k ≥ p_b, b ∈ M_{i,k} iff b ∈ [[P_A]]. The set of children of a in the atom dependency graph of P_A is the same as the set of atoms occurring in the bodies of the clauses of the definition of a. As IDB_A is bounded, a has finitely many children in the atom dependency graph of P_A and the definition of a is finite. Let p_a be the maximum of all the above points p_b where b is a child of a. At every point k ≥ p_a, for every child b of a, by the inductive assumption, b ∈ M_{i,k} iff b ∈ [[P_A]]. We prove that a ∈ M_{i,k} iff a ∈ [[P_A]]. By Lemma Appendix A.1, a ∈ M_{i,k} iff there is a rule a ← Bd in P_{i,k} = IDB_i ∪ EDB_{i,k} ∪ IN_{i,k} such that M_{i,k} |= Bd. By the inductive assumption, for every b ∈ atom(Bd), b ∈ M_{i,k} iff b ∈ [[P_A]]. Moreover, a ← Bd is also a rule in P_A. Thus a ∈ M_{i,k} iff there is a rule a ← Bd in P_A such that [[P_A]] |= Bd, iff a ∈ [[P_A]] (by Lemma Appendix A.1).
2. a ∈ HIN_i. As shown in case 1, for every A_j such that a ∈ HBI_j there is a point p_j such that at every point k ≥ p_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. Let p be the maximum of all such p_j. Clearly, at every point k ≥ p, for every A_j such that a ∈ HBI_j, a ∈ M_{j,k} iff a ∈ [[P_A]]. Proceeding as in case 3 of the base case of the proof, there is a point p' ≥ p + 1 such that at every point k ≥ p', a ∈ M_{i,k} iff a ∈ M_{j,k}. It also means that at every point k ≥ p', a ∈ M_{i,k} iff a ∈ [[P_A]].
We have proved that for each A_i such that a ∈ HB_i there is a point p_i such that at every point k ≥ p_i, a ∈ M_{i,k} iff a ∈ [[P_A]]. Take p = max(p_1, ..., p_n). At every point k ≥ p, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,k} iff a ∈ [[P_A]].
Appendix B Proof of theorem 3.3
Let A be an IO-acyclic and IO-finite multiagent system. Obviously A is also bounded. Let R be a run of A. By Theorem 3.1, R is convergent. By Lemma Appendix A.3, for every atom a in G_A there is a point k_a such that at every point p ≥ k_a, for every agent A_i such that a ∈ HB_i, a ∈ M_{i,p} iff a ∈ [[P_A]]. As G_A is finite, take the largest number k of all such k_a for the atoms a in G_A. Obviously, at every point p ≥ k, for every agent A_i, M_{i,k} = M_{i,p}. Thus R is strongly convergent. The system is stabilizing and Theorem 3.3 follows immediately.
| 7,679 |
cs0503065
|
1678440633
|
We tackle the problem of data-structure rewriting including pointer redirections. We propose two basic rewrite steps: (i) Local Redirection and Replacement steps, the aim of which is to redirect specific pointers determined by means of a pattern, as well as to add new information to an existing data structure; and (ii) Global Redirection steps, which are aimed at redirecting all pointers targeting a node towards another one. We define these two rewriting steps following the double pushout approach. We first define the category of graphs we consider and then define rewrite rules as pairs of graph homomorphisms of the form "L ← K → R". Unfortunately, inverse pushouts (complement pushouts) are not unique in our setting and pushouts do not always exist. Therefore, we define rewriting steps so that a rewrite rule can always be performed once a matching is found.
|
In @cite_7 @cite_1 @cite_17 cyclic term graph rewriting is considered following the algorithmic approach. Pointer redirection is limited to the global redirection of all edges pointing to the root of a redex, by redirecting them to point to the root of the instance of the right-hand side. In @cite_10 , Banach, inspired by features found in implementations of declarative languages, proposed rewrite systems close to ours. We share the same graphs and the global redirection of pointers. However, Banach did not discuss local redirections of pointers. We also differ in the way rewriting is expressed: rewriting steps in @cite_10 are defined by using the notion of an opfibration of a category, while our approach is based on double pushouts.
|
{
"abstract": [
"The categorical semantics of (an abstract version of) the general term graph rewriting language DACTL is investigated. The operational semantics is reformulated in order to reveal its universal properties. The technical dissonance between the matchings of left-hand sides of rules to redexes, and the properties of rewrite rules themselves, is taken as the impetus for expressing the core of the model as a Grothendieck opfibration of a category of general rewrites over a base of general rewrite rules. Garbage collection is examined in this framework in order to reconcile the treatment with earlier approaches. It is shown that term rewriting has particularly good garbage-theoretic properties that do not generalise to all cases of graph rewriting and that this has been a stumbling block for aspects of some earlier models for graph rewriting.",
"Several authors have investigated the correspondence between graph rewriting and term rewriting. Almost invariably they have considered only acyclic graphs. Yet cyclic graphs naturally arise from certain optimizations in implementing functional languages. They correspond to infinite terms, and their reductions correspond to transfinite term-reduction sequences, which have recently received detailed attention. We formalize the close correspondence between finitary cyclic graph rewriting and a restricted form of infinitary term rewriting, called rational term rewriting. This subsumes the known relation between finitary acyclic graph rewriting and finitary term rewriting. Surprisingly, the correspondence breaks down for general infinitary rewriting",
"",
"We address the problem of graph rewriting and narrowing as the underlying operational semantics of rule-based programming languages. We propose new optimal graph rewriting and narrowing strategies in the setting of orthogonal constructor-based graph rewriting systems. For this purpose, we first characterize a subset of graphs, called admissible graphs. A graph is admissible if none of its defined operations belongs to a cycle. We then prove the confluence, as well as the confluence modulo bisimilarity (unraveling), of the admissible graph rewriting relation. Afterwards, we define a sequential graph rewriting strategy by using Antoy’s definitional trees. We show that the resulting strategy computes only needed redexes and develops optimal derivations w.r.t. the number of steps. Finally, we tackle the graph narrowing relation over admissible graphs and propose a sequential narrowing strategy which computes independent solutions and develops shorter derivations than most general graph narrowing."
],
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_7",
"@cite_17"
],
"mid": [
"2048022954",
"2006468185",
"2154286698",
"176560628"
]
}
|
Data-Structure Rewriting
|
Rewriting techniques have been proven to be very useful to establish formal bases for high-level programming languages as well as theorem provers. These techniques have been widely investigated for strings [7], trees or terms [2] and term graphs [19,6].
In this paper we tackle the problem of rewriting classical data-structures such as circular lists, double-chained lists, etc. Even if such data-structures can easily be simulated by string or tree processing, they remain very useful in designing algorithms with good complexity. The investigation of data-structure rewrite systems will contribute to defining a clean semantics and proof techniques for "pointer" handling. It will also provide a basis for multiparadigm programming languages integrating declarative (functional and logic) and imperative features.
General frameworks of graph transformation are now well established, see e.g. [22,11,12]. Unfortunately, rewriting classical data-structures represented as cyclic graphs has not yet benefited from the same effort as terms or term graphs. Our aim in this paper is to investigate basic rewrite steps for data-structure transformation. It turns out that pointer redirection is the key issue we had to face, in addition to classical replacement and garbage collection. We distinguish two kinds of redirections: (i) global redirection, which consists in redirecting at once all edges pointing to a given node towards another node; and (ii) local redirection, which consists in redirecting a particular pointer, specified e.g. by a pattern, so that it points to a new target node. Global redirection is very often used in the implementation of functional programming languages, for instance when changing the roots of term graphs. As for local redirection, it is useful to express classical imperative algorithms.
We introduce two kinds of rewrite steps. The first one is called local redirection and replacement, and the second one is dedicated to global redirection. We define these steps following the double pushout approach [8,16]. We have chosen this approach because it simplifies drastically the presentation of our results; the algorithmic fashion, which we followed first, turns out to be arduous. Thus, basic rewrite rules are given by a pair of graph homomorphisms L ← K → R. We make precise the rôle that K plays in order to perform local or global redirection of pointers. The considered homomorphisms are not necessarily injective in our setting, unlike the classical assumptions made in recent proposals dedicated to graph programs [20,17]. This means that inverse pushouts (complement pushouts) are not unique.
The paper is organized as follows: The next section introduces the category of graphs which we consider in the paper. Section 3 states some technical results that help defining rewrite steps. Section 4 introduces data-structure rewriting and defines mainly two rewrite steps, namely LRR-rewriting and GR-rewriting. We compare our proposal to related work in section 5. Concluding remarks are given in section 6. Proofs are found in the appendix. We assume the reader is familiar with basic notions of category theory (see e.g. [1] for an introduction).
Graphs
In this section we introduce the category of graphs we consider in the paper. These graphs are supposed to represent data-structures. We define below such graphs in a mono-sorted setting. Lifting our results to the many-sorted case is straightforward.
Definition 2.1 (Signature) A signature Ω is a set of operation symbols such that each operation symbol in Ω, say f, is provided with a natural number n representing its arity. We write ar(f) = n.
In the sequel, we use the following notations. Let A be a set. We write A* for the set of strings made of elements of A. Let f : A → B be a function. We write f* : A* → B* for the unique extension of f to strings, defined by f*(ε) = ε, where ε is the empty string, and f*(a_1 ... a_n) = f(a_1) ... f(a_n).
We assume that Ω is fixed throughout the rest of the paper.
Definition 2.2 (Graph)
A graph G is made of:
• a set of nodes N_G,
• a subset of labeled nodes N^Ω_G ⊆ N_G,
• a labeling function L_G : N^Ω_G → Ω,
• and a successor function S_G : N^Ω_G → N*_G,
such that, for each labeled node n, the length of the string S_G(n) is the arity of the operation L_G(n).
This definition can be illustrated by a commutative diagram relating S_G, L_G, the length function lg (where lg(u) is the length of the string u) and the arity function ar: for every labeled node n, lg(S_G(n)) = ar(L_G(n)). Moreover:
• the arity of a node n is defined as the arity of its label,
• the i-th successor of a node n is denoted succ G (n, i),
• the edges of a graph G are the pairs (n, i) where n ∈ N Ω G and i ∈ {1, . . . , ar(n)}, the source of an edge (n, i) is the node n, and its target is the node succ G (n, i),
• the fact that f = L G (n) can be written as n : f ,
• the set of unlabeled nodes of G is denoted N X G , so that:
N_G = N^Ω_G + N^X_G (where + stands for disjoint union).
Example 2.3 Let G be the graph defined by:
• N_G = {m, n, o, p, q, r}
• N^Ω_G = {m, o, p}
• N^X_G = {n, q, r}
• L_G is defined by: [m → f; o → g; p → h]
• S_G is defined by: [m → no; o → np; p → qrm]
Graphically, this graph is drawn with m : f pointing to n and o, o : g pointing to n and p, and p : h pointing to q, r and m; the nodes n, q and r are unlabeled and drawn as •. We use • to denote the lack of a label. Informally, one may think of • as an anonymous variable.
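A direct, if naive, encoding of Definition 2.2 and of the graph G of Example 2.3 may help fix ideas (the class and field names below are ours):

```python
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: set                                   # N_G
    label: dict = field(default_factory=dict)    # L_G, defined on labeled nodes only
    succ: dict = field(default_factory=dict)     # S_G, a tuple (string) of successors

    def labeled(self):                           # N^Omega_G
        return set(self.label)

    def edges(self):                             # pairs (n, i), i counted from 1 as in the paper
        return {(n, i) for n in self.labeled() for i in range(1, len(self.succ[n]) + 1)}

# The graph G of Example 2.3, over Omega = {f/2, g/2, h/3}.
G = Graph(nodes={"m", "n", "o", "p", "q", "r"},
          label={"m": "f", "o": "g", "p": "h"},
          succ={"m": ("n", "o"), "o": ("n", "p"), "p": ("q", "r", "m")})
assert G.edges() == {("m", 1), ("m", 2), ("o", 1), ("o", 2), ("p", 1), ("p", 2), ("p", 3)}
```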
Definition 2.4 (Graph homomorphism) A graph homomorphism ϕ : G → H is a map ϕ : N_G → N_H such that ϕ(N^Ω_G) is included in N^Ω_H and, for each node n ∈ N^Ω_G: L_H(ϕ(n)) = L_G(n) and S_H(ϕ(n)) = ϕ*(S_G(n)). Let ϕ^Ω : N^Ω_G → N^Ω_H denote the restriction of ϕ to the subset N^Ω_G. Then the properties above state that ϕ^Ω commutes with the labeling functions (L_H ∘ ϕ^Ω = L_G) and with the successor functions (S_H ∘ ϕ^Ω = ϕ* ∘ S_G).
The image ϕ(n, i) of an edge (n, i) of G is defined as the edge (ϕ(n), i) of H. It is easy to check that graphs (as objects) together with graph homomorphisms (as arrows) form a category, which is called the category of graphs and denoted Gr.
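Checking that a map on nodes is a graph homomorphism in the sense of Definition 2.4 is then straightforward; the sketch below reuses the Graph encoding and the graph G from the previous sketch (again, the names are ours):

```python
def is_homomorphism(phi, G, H):
    """phi : N_G -> N_H must preserve labels and successors on labeled nodes."""
    for n in G.labeled():
        image = phi[n]
        if image not in H.labeled():
            return False
        if H.label[image] != G.label[n]:
            return False
        if H.succ[image] != tuple(phi[x] for x in G.succ[n]):
            return False
    return True

# The identity map is a homomorphism from G to itself.
assert is_homomorphism({n: n for n in G.nodes}, G, G)
```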
Disconnected graphs and homomorphisms
This section is dedicated to some technical definitions whose aim is to simplify the definition of the rewrite rules given in the following section. The next definition introduces the notion of what we call a disconnected graph. Roughly speaking, the disconnected graph associated to a graph G and a set of edges E is obtained by redirecting every edge in E (whatever its current target is) towards a new, unlabeled target.
Definition 3.2 (Disconnected graph)
The disconnected graph associated to a graph G and a set of edges E of G is the following graph D(G, E):
• N_{D(G,E)} = N_G + N_E, where N_E is made of one new node n[i] for each edge (n, i) ∈ E,
• N^Ω_{D(G,E)} = N^Ω_G,
• for each n ∈ N^Ω_G: L_{D(G,E)}(n) = L_G(n),
• for each n ∈ N^Ω_G and i ∈ {1, ..., ar(n)}:
  - if (n, i) ∉ E then succ_{D(G,E)}(n, i) = succ_G(n, i),
  - if (n, i) ∈ E then succ_{D(G,E)}(n, i) = n[i].
Definition 3.3 (Connection homomorphism)
The connection homomorphism associated to a graph G and a set of edges E of G is the homomorphism δ_{G,E} : D(G, E) → G such that:
• if n ∈ N_G then δ_{G,E}(n) = n,
• if n[i] ∈ N_E then δ_{G,E}(n[i]) = succ_G(n, i).
It is easy to check that δ_{G,E} is a graph homomorphism. The disconnected homomorphism associated to a graph homomorphism ϕ : G → H and a set of edges E of G is the homomorphism D_{ϕ,E} : D(G, E) → D(H, ϕ(E)) such that:
• if n ∈ N_G then D_{ϕ,E}(n) = ϕ(n),
• if n[i] ∈ N_E then D_{ϕ,E}(n[i]) = ϕ(n)[i].
It is easy to check that D ϕ,E is a graph homomorphism.
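Definitions 3.2 and 3.3 translate directly into code. The sketch below, built on the Graph encoding and the is_homomorphism check from the previous sketches, constructs D(G, E) together with the connection homomorphism δ_{G,E}; a disconnected node n[i] is encoded as the pair (n, i) (all names are ours):

```python
def disconnect(G, E):
    """D(G, E): every edge (n, i) in E now points to a fresh unlabeled node (n, i)."""
    new_nodes = {(n, i) for (n, i) in E}
    succ = {n: tuple((n, i + 1) if (n, i + 1) in E else s
                     for i, s in enumerate(G.succ[n]))
            for n in G.labeled()}
    D = Graph(nodes=G.nodes | new_nodes, label=dict(G.label), succ=succ)
    # Connection homomorphism delta_{G,E}: send each n[i] back to the original target.
    delta = {n: n for n in G.nodes}
    delta.update({(n, i): G.succ[n][i - 1] for (n, i) in E})
    return D, delta

D1, delta = disconnect(G, {("m", 2)})
assert D1.succ["m"] == ("n", ("m", 2)) and delta[("m", 2)] == "o"
assert is_homomorphism(delta, D1, G)
```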
Data-structure rewriting
In this section we define data-structure rewriting as a succession of rewrite steps. A rewrite step is defined from a rewrite rule and a matching. A rewrite rule is a span of graphs, i.e., a pair of graph homomorphisms with a common source, L ← K → R, where δ : K → L and ρ : K → R. A matching is a morphism of graphs µ : L → G.
There are two kinds of rewrite steps.
• The first kind is called Local Redirection and Replacement Rewriting (LRRrewriting, for short). Its rôle is twofold: adding to G a copy of the instance of the right-hand side R, and performing some local redirections of edges specified by means of the rewrite rule.
• The second kind of rewrite steps is called Global Redirection Rewriting (GR-Rewriting, for short). Its rôle consists in performing redirections: all incoming edges of some node a in G are redirected to a node b.
We define LRR-rewriting and GR-rewriting in the two following subsections. We use in both cases the double-pushout approach to define rewrite steps.
LRR-rewriting
Before defining LRR-rewrite rules and steps, we state first a technical result about the existence of inverse pushouts in our setting.
Theorem 4.1
Let µ : L → U be a graph homomorphism and E a set of edges of L. Then the square made of δ_{L,E} : D(L, E) → L, D_{µ,E} : D(L, E) → D(U, µ(E)), µ : L → U and δ_{U,µ(E)} : D(U, µ(E)) → U is a pushout in the category of graphs.
Proof. This result is an easy corollary of Theorem A.2.
Definition 4.2 (Disconnecting pushout) Let µ : L → U be a graph homomorphism and E a set of edges of L. The disconnecting pushout associated to µ and E is the pushout from Theorem 4.1.
It can be noted that the disconnecting pushout is not unique, in the sense that there are generally several inverse pushouts of:
the pair of homomorphisms δ_{L,E} : D(L, E) → L and µ : L → U. Before stating the next definition, it should be recalled that N_{D(L,E)} = N_L + N_E = N^Ω_L + N^X_L + N_E. A rewrite rule is a span of the form L ← D(L, E) → R, with left leg δ_{L,E} : D(L, E) → L and right leg ρ : D(L, E) → R, where E is a set of edges of L, ρ(N^X_L) ⊆ N^X_R, and the restriction of ρ to N^X_L is injective.
(The rule is given by drawings of L, D(L, {(m, 2)}) and R, forming a span L ← D(L, {(m, 2)}) → R; the drawings are not reproduced here.) In this example we show how a local edge redirection can be achieved through edge disconnection. Since an element is added to the head of a circular list (of length 1), one has to make the cyclic pointer (m, 2) point to the newly added cell. For this we disconnect the edge (m, 2) in D(L, {(m, 2)}) in order to be able to redirect it, thanks to an appropriate homomorphism ρ, to the new cell in R, namely q. Here, ρ = [n → n; m[2] → q; · · ·].
One may also remark that graph R still has a node labelled by add. In this paper we do not tackle the problem of garbage collection which has been treated in a categorical way in e.g. [4].
Definition 4.5 (Matching) A matching with respect to a rewrite rule L ← D(L, E) → R (with legs δ_{L,E} and ρ) is a graph homomorphism µ : L → U that is Ω-injective, which means that the restriction of the map µ to N^Ω_L is injective.
Definition 4.6 (LRR-rewrite step) Let r = ( L ← D(L, E) → R ), with legs δ_{L,E} and ρ, be a rewrite rule, and let µ : L → U be a matching with respect to r. Then U rewrites into V using rule r if there are graph homomorphisms ν : R → V and ρ' : D(U, µ(E)) → V such that the square formed by ρ : D(L, E) → R, D_{µ,E} : D(L, E) → D(U, µ(E)), ν : R → V and ρ' : D(U, µ(E)) → V is a pushout in the category of graphs (Gr).
Thus, a rewrite step corresponds to a double pushout in the category of graphs: the disconnecting pushout of µ and E (over δ_{L,E}, D_{µ,E}, µ and δ_{U,µ(E)}) on the left, and the pushout above (over ρ, D_{µ,E}, ν and ρ') on the right.
Theorem 4.7 (Rewrite step is feasible) Let r be a rewrite rule, and µ : L → U a matching with respect to r. Then U can be rewritten using rule r. More precisely, the required pushout can be built as follows (the notations are simplified by dropping E and µ(E)):
• the set of nodes of V is N V = (N R +N D(U) )/ ∼, where ∼ is the equivalence relation generated by D µ (n) ∼ ρ(n) for each node n of D(L),
• the maps ν and ρ ′ , on the sets of nodes, are the inclusions of N R and N D(U) in N R + N D(U) , respectively, followed by the quotient map with respect to ∼,
• N Ω V is made of the classes modulo ∼ which contain at least one labeled node, and a section π : N Ω V → N Ω R + N Ω D(U) of the quotient map is chosen, which means that the class of π(n) is n, for each n ∈ N Ω V , • for each n ∈ N Ω V , the label of n is the label of π(n), • for each n ∈ N Ω V , the successors of n are the classes of the successors of π(n).
Moreover, the resulting pushout does not depend on the choice of the section π.
Corollary 4.8
The labeled nodes of V are given by N^Ω_V = (N^Ω_U − µ(N^Ω_L)) + N^Ω_R.
Proof. Both Theorem 4.7 and Corollary 4.8 are derived from Theorem A.4; their proofs are given at the end of the appendix.
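The node part of the construction in Theorem 4.7 is a quotient of a disjoint union, which is conveniently computed with a union–find structure. The following sketch (our own, hypothetical helper; it only computes the node classes, not labels or successors) glues the node sets of R and D(U) along the two images of D(L):

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def glue_nodes(nodes_R, nodes_DU, rho, D_mu):
    """N_V = (N_R + N_DU) / ~ where D_mu(n) ~ rho(n) for every node n of D(L).
    Nodes are tagged with 'R' / 'U' to keep the union disjoint."""
    uf = UnionFind()
    for n in rho:                      # rho and D_mu share the same domain N_{D(L)}
        uf.union(("R", rho[n]), ("U", D_mu[n]))
    tagged = [("R", n) for n in nodes_R] + [("U", n) for n in nodes_DU]
    classes = {}
    for t in tagged:
        classes.setdefault(uf.find(t), set()).add(t)
    return list(classes.values())      # the nodes of V, as equivalence classes

# e.g. glue_nodes({"r1"}, {"u1"}, rho={"k": "r1"}, D_mu={"k": "u1"})
#      -> [{("R", "r1"), ("U", "u1")}]
```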
Example 4.9
Here we consider the case of a non-Ω-injective matching in order to show that there may be no double pushout in such cases, thus justifying our restriction to acceptable matchings (see Definition 4.5).
In this example we identify two nodes of L labelled by g, namely n_1 and n_2, via the homomorphism µ, to a single node m. In the span we disconnect the two edges coming from the g-labelled nodes and redirect them to two nodes labeled by different constants, b and c; this is done by the homomorphism ρ = id. Now, as both edges have been merged by the matching in U, the second (right) pushout cannot exist, since a single edge cannot point to both b and c at the same time. Note that this impossibility does not denote a limitation of our formalism.
In Figure 1 we give the span for lists of size greater than 1, as well as the application of the rule to a list of size 3. Notice how the disconnection is actually used in order to redirect the pointer (n_6, 2). The homomorphisms of the bottom layer show that the disconnected edge, pointing to the unlabeled node c_4[2], is mapped to c_1 on the left and to n_8 on the right. The mechanism of disconnection allows the categorical manipulation of an edge.
The Ω-injectivity hypothesis is also useful in this rule since edges (n 6 , 2) and (n 3 , 2) must be different, thus a list of size less than or equal to one cannot be matched by this rule.
Fig. 1. The span for lists of size greater than 1 and its application to a list of size 3 (the bottom homomorphisms map n_6[2] to n_3 on the left and to n_8 on the right, and n_1 → o, n_3 → c_1, n_6 → c_4).
GR-Rewriting
Let U be a graph and let a, b ∈ N_U. We say that U rewrites into V using the global redirection from a to b, and write U →_{a→b} V, iff V is obtained from U by redirecting all edges targeting node a so that they point towards node b. This kind of rewriting is very useful when dealing with rooted term graphs (see, e.g., [4]). We define below a GR-rewriting step following the double pushout approach.
A GR-rewrite rule is a span of graphs P ← SW → P, with left leg λ : SW → P and right leg ρ : SW → P, where
• P is made of two unlabeled nodes ar and pr,
• SW (switch graph) is made of three unlabeled nodes ar, pr and mr,
• λ(ar) = λ(mr) = ar and λ(pr) = pr,
• ρ(ar) = ar and ρ(pr) = ρ(mr) = pr.
Let r = ( P ← SW → P ) be a GR-rewrite rule, and let µ : P → U be a GR-matching. Let D_µ : SW → D(U, µ(ar)) be the homomorphism defined by D_µ(ar) = µ(ar), D_µ(pr) = µ(pr) and D_µ(mr) = mr. Then U rewrites into V using rule r if there are graph homomorphisms ν : P → V and ρ' : D(U, µ(ar)) → V such that the square formed by ρ : SW → P, D_µ : SW → D(U, µ(ar)), ν : P → V and ρ' : D(U, µ(ar)) → V is a pushout in the category of graphs (Gr).
Thus, a GR-rewrite step U →_{µ(ar)→µ(pr)} V corresponds to a double pushout in the category of graphs, with µ : P → U, δ_µ : D(U, µ(ar)) → U and D_µ on the left, and ν and ρ' on the right.
The construction of graph V is straightforward. It may be deduced from Theorem A.4 given in the appendix.
Example 4.16
In this example we show how global redirection works. In the graph G given in Example 2.3, we want to redirect all edges with target n towards q. For this purpose, we define the homomorphism µ from P to G by mapping the nodes ar (ante-rewriting) and pr (post-rewriting) appropriately, i.e. in our case µ = [ar → n; pr → q]. Applying this to G, we get the following double pushout. Notice how the node mr (mid-rewriting) is used: it is mapped to n on the left and to q on the right. Thus, in the middle graph, mr makes it possible to disconnect the edges targeting n in order to redirect them towards q.
(The three graphs of the double pushout are drawn here: G itself, the middle graph in which the edges that targeted n now target the fresh node mr, and the resulting graph in which these edges target q.)
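Operationally, a GR-step U →_{a→b} V only rewires successor strings. The following sketch, reusing the Graph encoding and the graph G of Example 2.3 from the earlier sketches (names are ours), reproduces the redirection of Example 4.16:

```python
def global_redirect(U, a, b):
    """Redirect every edge of U whose target is a so that it targets b instead."""
    return Graph(nodes=set(U.nodes),
                 label=dict(U.label),
                 succ={n: tuple(b if t == a else t for t in U.succ[n])
                       for n in U.labeled()})

# Example 4.16: redirect all edges targeting n towards q.
V = global_redirect(G, "n", "q")
assert V.succ["m"] == ("q", "o") and V.succ["o"] == ("q", "p")
```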
Example 4.17 In this additional example, we give rewriting rules defining the function length (written ♯), which computes the size of non-empty circular lists. In this example every LRR-rewriting is followed by a GR-rewriting; that is why we specify the global rewriting that should be performed after each LRR-rewrite step.
The first rule simply introduces an auxiliary function, ♯_b, which has two arguments. The first one indicates the head of the list while the second one moves along the list in order to measure it. The span for ♯ is given by drawings which are not reproduced here. The next rule defines ♯_b when its arguments are different. Once again we use the hypothesis of Ω-injectivity to ensure that the two cons nodes cannot be identified via matching.
Conclusion
We defined two basic rewrite steps dedicated to data-structure rewriting. The rewrite relationships induced by LRR-rewrite rules as well as GR-rewrite rules over graphs are trickier than the classical ones over terms (trees). There was no room in the present paper to discuss classical properties of the induced rewrite relationship, such as confluence and termination, or its extension to narrowing. However, our preliminary investigation shows that confluence is not guaranteed even for non-overlapping rewrite systems, and thus user-definable strategies are necessary when using the full power of data-structure rewriting. In addition, the integration of LRR- and GR-rewriting in one step is also possible and can be helpful in describing some algorithms.
On the other hand, data-structures are better represented by means of graphics (e.g. [21]). Our purpose in this paper was rather the definition of the basic rewrite steps for data-structures. We intend to consider syntactical issues in future work.
where G 0 , G 1 and G 2 are made of only one node: n 0 in G 0 is unlabeled, n 1 : a 1 in G 1 and n 2 : a 2 in G 2 , where a 1 and a 2 are distinct constants. This span has no pushout, because there cannot be any commutative square of graphs based on it.
Theorem A.2 below states a sufficient condition for a commutative square of graphs to be a pushout, and Theorem A.4 states a sufficient condition for a span of graphs to have a pushout, together with a construction of this pushout.
In the following, when G i occurs as an index, it is replaced by i.
Theorem A.2 (Pushout of graphs from pushout of sets) Let Γ be a square in the category of graphs made of ϕ_1 : G_0 → G_1, ϕ_2 : G_0 → G_2, ψ_1 : G_1 → G_3 and ψ_2 : G_2 → G_3. If
1. Γ is a commutative square in Gr,
2. N(Γ) is a pushout in Set, and
3. each n ∈ N^Ω_3 is in ψ_i(N^Ω_i) for i = 1 or i = 2,
then Γ is a pushout in Gr.
Point (2) implies that each n ∈ N_3 is the image of at least one node in G_1 or in G_2, and point (3) adds that, if n is labeled, then it is the image of at least one labeled node in G_1 or in G_2.
Proof. Let us consider a commutative square Γ' in Gr over the same span, made of θ_1 : G_1 → G_4 and θ_2 : G_2 → G_4 with θ_1 ∘ ϕ_1 = θ_2 ∘ ϕ_2. Then N(Γ') is a commutative square in Set, and since N(Γ) is a pushout in Set, there is a unique map θ : N_3 → N_4 such that θ ∘ ψ_i = θ_i for i = 1, 2.
Let us now prove that θ actually is a graph homomorphism. According to Definition 2.4, we have to prove that, for each labeled node n of G 3 , its image n ′ = θ(n) is a labeled node of G 4 , and that L 4 (n ′ ) = L 3 (n) and S 4 (n ′ ) = θ * (S 3 (n)).
So, let n ∈ N^Ω_3, and let n' = θ(n) ∈ N_4. From our third assumption, without loss of generality, n = ψ_1(n_1) for some n_1 ∈ N^Ω_1. It follows that θ_1(n_1) = θ(ψ_1(n_1)) = θ(n) = n', so n = ψ_1(n_1) and n' = θ_1(n_1).
Since n 1 is labeled and θ 1 is a graph homomorphism, the node n ′ is labeled. Since ψ 1 and θ 1 are graph homomorphisms, L 3 (n) = L 1 (n 1 ) and L 4 (n ′ ) = L 1 (n 1 ), thus L 3 (n) = L 4 (n ′ ), as required for labels.
Since ψ 1 and θ 1 are graph homomorphisms, S 3 (n) = ψ * 1 (S 1 (n 1 )) and S 4 (n ′ ) = θ 1 * (S 1 (n 1 )). So, θ * (S 3 (n)) = θ * (ψ * 1 (S 1 (n 1 ))) = θ 1 * (S 1 (n 1 ) = S 4 (n ′ ), as required for successors.
This proves that θ : G 3 → G 4 is a graph homomorphism. Then, from the faithfulness of the functor N (Proposition A.1), for i ∈ {1, 2}, the equality of the underlying maps θ • ψ i = θ i : N i → N 4 is an equality of graph homomorphisms:
θ • ψ i = θ i : G i → G 4 . Now, let θ ′ : G 3 → G 4 be a graph homomorphism such that θ ′ • ψ i = θ i for i ∈ {1, 2}. Since N (Γ)
is a pushout in Set, the underlying maps are equal: θ = θ ′ : N 3 → N 4 . Then, it follows from the faithfulness of the functor N that the graph homomorphisms are equal: θ = θ ′ : G 3 → G 4 .
For each span of graphs Σ : G_1 ← ϕ_1 – G_0 – ϕ_2 → G_2, let ∼ denote the equivalence relation on the disjoint union N_1 + N_2 generated by ϕ_1(n_0) ∼ ϕ_2(n_0) for all n_0 ∈ N_0, let N_3 be the quotient set N_3 = (N_1 + N_2)/∼, and let ψ : N_1 + N_2 → N_3 be the quotient map. Two nodes n, n' in N_1 + N_2 are called equivalent if n ∼ n'. For i ∈ {1, 2}, let ψ_i : N_i → N_3 be the inclusion of N_i in N_1 + N_2 followed by ψ. Then, it is well known that the resulting square of sets (over ϕ_1, ϕ_2, ψ_1, ψ_2) is a pushout, which can be called canonical.
where n i−1 is labeled and p i is unlabeled.
It should be reminded that:
• N D(L) = N L +N E and N D(U) = N U +µ(N E ), with D µ (N L ) ⊆ N U and D µ injective on N E (the last point comes from the fact that µ is Ω-injective);
• ρ(N X L ) ⊆ N X R and the restriction of ρ to N X L is injective, since L D(L) δL o o ρ G G R
is a rewrite rule.
Case 1: n i−1 is a node of R. Then n i−1 ∈ N Ω R . Since ρ(N X L ) ⊆ N X R and p i is unlabeled, it follows that p i ∈ N E . Then, since D µ maps N E to µ(N E ), n i ∈ µ(N E ). Then k > i, since the last node in the chain is labeled. Since D µ is injective on N E , and maps N L to N U , it follows that p i+1 = n i . So, p i = p i+1 , which is impossible since the chain is minimal.
Case 2: n i−1 is a node of D(U ). Then n i−1 ∈ N Ω U . Since D µ maps N E to µ(N E ) and D µ (N L ) on N U , it follows that p i ∈ N X L . Since ρ maps N X L to N X R , it follows that n i ∈ N X R . Then k > i, since the last node in the chain is labeled. Then p i+1 ∈ N X L + N E . If p i+1 ∈ N E , a contradiction follows as in case 1. Hence, p i+1 ∈ N X L . Since the restriction of ρ to N X L is injective, p i+1 = p i , which is also impossible since the chain is minimal.
Finally, it has been proved that all the nodes in this chain are labeled, which concludes the proof.
Proof of Corollary 4.8. We use the proof of theorem 4.7, as well as the notations in this proof. Let n ∈ N Ω V , we have to choose a representative r(n) of n. It should be reminded that N Ω D(U) = N Ω U . (R.) If there is a node n R ∈ N Ω R such that n = ν(n R ), let us prove that it is unique. Let n ′ R ∈ N Ω R be another node such that n = ν(n ′ R ), i.e., such that n R ∼ n ′ R . Let us consider a chain with minimal length k ≥ 1 from n R (= n 0 ) to n ′ R (= n k ); we know that all the nodes in this chain are labeled. Since n 0 and n 1 cannot be both in N R , it follows that n 1 ∈ N Ω U , so that p 0 , p 1 ∈ N Ω L and n 1 = µ(p 0 ) = µ(p 1 ). The Ω-injectivity of µ implies that p 0 = p 1 , but this is impossible. So, we have proved that ν Ω : N Ω R → N Ω V is injective, and we define r(n) = n R .
(U.) If there is no node n R ∈ N Ω R such that n = ν(n R ), then there is a node n U ∈ N Ω U such that n = ρ ′ (n U ). Let us prove that it is unique. Let n ′ U ∈ N Ω U be another node such that n = ρ ′ (n ′ U ), i.e., such that n U ∼ n ′ U . Let us consider a chain with minimal length k ≥ 1 from n U (= n 0 ) to n ′ U (= n k ); we know that all the nodes in this chain are labeled. Since n 0 and n 1 cannot be both in N U , it follows that n 1 ∈ N Ω R , which contradicts our assumption: there is no node n R ∈ N Ω R such that n = ν(n R ). Let N Ω U denote the subset of N Ω U made of the nodes which are not equivalent to any node in N Ω R . So, we have proved that the restriction of ρ ′ Ω : N Ω D(U) → N Ω V to N Ω U is injective, and we define r(n) = n U . (L.) We still have to prove that N Ω U = N Ω U − µ(N Ω L ), i.e., that a node n U ∈ N Ω U is equivalent to a node n R ∈ N Ω R if and only if there is node n L ∈ N Ω L such that n U = µ(n L ). Clearly, if n L ∈ N Ω L and n U = µ(n L ), let n R = ρ(n L ), then n R ∈ N Ω R and n U ∼ n R . Now, let n U ∼ n R for some n U ∈ N Ω U and n R ∈ N Ω R . Let us consider a chain with minimal length k ≥ 1 from n R (= n 0 ) to n U (= n k ); we know that all the nodes in this chain are labeled. If k > 1, then the Ω-injectivity of µ leads to a contradiction, as in part (R) of the proof. Hence k = 1, which means that p 1 ∈ N Ω L is such that n R = ρ(p 1 ) and n U = µ(p 1 ), so that there is node n L = p 1 ∈ N Ω L such that n U = µ(n L ).
This concludes the proof that N^Ω_V = (N^Ω_U − µ(N^Ω_L)) + N^Ω_R.
| 5,817 |
cs0503065
|
1678440633
|
We tackle the problem of data-structure rewriting including pointer redirections. We propose two basic rewrite steps: (i) Local Redirection and Replacement steps, the aim of which is to redirect specific pointers determined by means of a pattern, as well as to add new information to an existing data structure; and (ii) Global Redirection steps, which are aimed at redirecting all pointers targeting a node towards another one. We define these two rewriting steps following the double pushout approach. We first define the category of graphs we consider and then define rewrite rules as pairs of graph homomorphisms of the form "L ← K → R". Unfortunately, inverse pushouts (complement pushouts) are not unique in our setting and pushouts do not always exist. Therefore, we define rewriting steps so that a rewrite rule can always be performed once a matching is found.
|
In @cite_0 , Habel and Plump proposed a kernel language for graph transformation. This language has been improved recently in @cite_15 . Basic rules in this framework are of the form @math , satisfying some conditions such as the inclusion @math . Unfortunately, our rewrite rules do not fulfill such a condition, particularly when performing local edge redirections. Furthermore, inverse pushouts (or pushout complements) are not unique in our setting, which is not the case in @cite_0 @cite_15 .
|
{
"abstract": [
"We identify a set of programming constructs ensuring that a programming language based on graph transformation is computationally complete. These constructs are (1) nondeterministic application of a set of graph transformation rules, (2) sequential composition and (3) iteration. This language is minimal in that omitting either sequential composition or iteration results in a computationally incomplete language. By computational completeness we refer to the ability to compute every computable partial function on labelled graphs. Our completeness proof is based on graph transformation programs which encode arbitrary graphs as strings, simulate Turing machines on these strings, and decode the resulting strings back into graphs.",
"Graph programs as introduced by Habel and Plump [8] provide a simple yet computationally complete language for computing functions and relations on graphs. We extend this language such that numerical computations on labels can be conveniently expressed. Rather than resorting to some kind of attributed graph transformation, we introduce conditional rule schemata which are instantiated to (conditional) double-pushout rules over ordinary graphs. A guiding principle in our language extension is syntactic and semantic simplicity. As a case study for the use of extended graph programs, we present and analyse two versions of Dijkstra’s shortest path algorithm. The first program consists of just three rule schemata and is easily proved to be correct but can be exponential in the number of rule applications. The second program is a refinement of the first which is essentially deterministic and uses at most a quadratic number of rule applications."
],
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"1483113404",
"1480319318"
]
}
|
Data-Structure Rewriting
|
Rewriting techniques have proven to be very useful for establishing formal bases for high-level programming languages as well as theorem provers. These techniques have been widely investigated for strings [7], trees or terms [2] and term graphs [19,6].
In this paper we tackle the problem of rewriting classical data-structures such as circular lists, double-chained lists, etc. Even if such data-structures can be easily simulated by string or tree processing, they remain very useful in designing algorithms with good complexity. The investigation of data-structure rewrite systems will contribute to define a clean semantics and proof techniques for "pointer" handling. It will also provide a basis for multiparadigm programming languages integrating declarative (functional and logic) and imperative features.
General frameworks of graph transformation are now well established, see e.g. [22,11,12]. Unfortunately, rewriting classical data-structures represented as cyclic graphs has not yet benefited from the same effort as terms or term graphs. Our aim in this paper is to investigate basic rewrite steps for data-structure transformation. It turns out that pointer redirection is the key issue we had to face, in addition to classical replacement and garbage collection. We distinguish two kinds of redirections: (i) Global redirection, which consists in redirecting at once all edges pointing to a given node towards another node; and (ii) Local redirection, which consists in redirecting a particular pointer, specified e.g. by a pattern, so that it points to a new target node. Global redirection is very often used in the implementation of functional programming languages, for instance when changing the roots of term graphs. As for local redirection, it is useful for expressing classical imperative algorithms.
We introduce two kinds of rewrite steps. The first one is called local redirection and replacement, and the second kind is dedicated to global redirection. We define these steps following the double pushout approach [8,16]. We have chosen this approach because it drastically simplifies the presentation of our results; the algorithmic presentation, which we followed at first, turned out to be arduous. Thus, basic rewrite rules are given by a pair of graph homomorphisms L ← K → R. We make precise the rôle played by K in order to perform local or global redirections of pointers. The considered homomorphisms are not necessarily injective in our setting, unlike the classical assumptions made in recent proposals dedicated to graph programs [20,17]. This means that inverse pushouts (complement pushouts) are not unique.
The paper is organized as follows: The next section introduces the category of graphs which we consider in the paper. Section 3 states some technical results that help defining rewrite steps. Section 4 introduces data-structure rewriting and defines mainly two rewrite steps, namely LRR-rewriting and GR-rewriting. We compare our proposal to related work in section 5. Concluding remarks are given in section 6. Proofs are found in the appendix. We assume the reader is familiar with basic notions of category theory (see e.g. [1] for an introduction).
Graphs
In this section we introduce the category of graphs we consider in the paper. These graphs are supposed to represent data-structures. We define below such graphs in a mono-sorted setting. Lifting our results to the many-sorted case is straightforward.
Definition 2.1 (Signature) A signature Ω is a set of operation symbols such that each operation symbol in Ω, say f , is provided by a natural number, n, representing its arity. We write ar(f ) = n.
In the sequel, we use the following notations. Let A be a set. We note A * the set of strings made of elements in A. Let f : A → B be a function. We note f * : A * → B * the unique extension of f over strings defined by f * (ǫ) = ǫ where ǫ is the empty string and f * (a 1 . . . a n ) = f (a 1 ) . . . f (a n ).
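To make this notation concrete, here is a minimal Python sketch of the extension f ↦ f * ; the list-based representation of strings and the helper name are illustrative assumptions, not part of the paper:

```python
def star(f):
    """Extend f : A -> B to strings over A: f*(a1 ... an) = f(a1) ... f(an).
    Strings are represented here as Python lists; the empty list plays the
    role of the empty string epsilon."""
    return lambda word: [f(a) for a in word]

double = star(lambda x: 2 * x)
assert double([]) == [] and double([1, 2, 3]) == [2, 4, 6]
```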
We assume that Ω is fixed throughout the rest of the paper.
Definition 2.2 (Graph)
A graph G is made of:
• a set of nodes N G ,
• a subset of labeled nodes N Ω G ⊆ N G ,
• a labeling function L G : N Ω G → Ω,
• and a successor function
S G : N Ω G → N * G ,
such that, for each labeled node n, the length of the string S G (n) is the arity of the operation L G (n).
This definition can be illustrated by the following commutative diagram, where lg(u) is the length of the string u:
[diagram: the inclusion N Ω G ⊆ N G together with L G : N Ω G → Ω and S G : N Ω G → N * G , satisfying lg • S G = ar • L G ]
Moreover:
• the arity of a node n is defined as the arity of its label,
• the i-th successor of a node n is denoted succ G (n, i),
• the edges of a graph G are the pairs (n, i) where n ∈ N Ω G and i ∈ {1, . . . , ar(n)}, the source of an edge (n, i) is the node n, and its target is the node succ G (n, i),
• the fact that f = L G (n) can be written as n : f ,
• the set of unlabeled nodes of G is denoted N X G , so that:
N G = N Ω G + N X G , where + stands for disjoint union.
Example 2.3 Let G be the graph defined by
• N G = {m; n; o; p; q; r}
• N Ω G = {m; o; p}
• N X G = {n; q; r}
• L G is defined by: [m → f ; o → g; p → h]
• S G is defined by: [m → no; o → np; p → qrm]
Graphically we represent this graph as:
[drawing of G: the node m : f has successors n and o, the node o : g has successors n and p, and the node p : h has successors q, r and m]
We use • to denote the lack of a label. Informally, one may think of • as an anonymous variable.
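For illustration, the graph G of Example 2.3 can be encoded as follows. This dict-based representation and the is_graph check are only a sketch of Definitions 2.1 and 2.2; the concrete encoding is our own choice, not something prescribed by the paper.

```python
OMEGA = {"f": 2, "g": 2, "h": 3}   # signature: operation symbol -> arity

# The graph G of Example 2.3: labeled nodes are exactly the keys of "label",
# and "succ" gives the successor string of each labeled node.
G = {
    "nodes": {"m", "n", "o", "p", "q", "r"},
    "label": {"m": "f", "o": "g", "p": "h"},
    "succ":  {"m": ["n", "o"], "o": ["n", "p"], "p": ["q", "r", "m"]},
}

def is_graph(g, sig):
    """Definition 2.2: every labeled node carries a symbol of the signature and
    has exactly ar(symbol) successors, all of which are nodes of the graph."""
    if not set(g["label"]) <= g["nodes"]:
        return False
    for n, f in g["label"].items():
        succs = g["succ"].get(n, [])
        if f not in sig or len(succs) != sig[f] or not set(succs) <= g["nodes"]:
            return False
    return True

assert is_graph(G, OMEGA)
assert G["nodes"] - set(G["label"]) == {"n", "q", "r"}   # the unlabeled nodes N_X
```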
Definition 2.4 (Graph homomorphism) A graph homomorphism ϕ : G → H is a map ϕ : N G → N H such that ϕ(N Ω G ) is included in N Ω H and, for each node n ∈ N Ω G : L H (ϕ(n)) = L G (n) and S H (ϕ(n)) = ϕ * (S G (n)). Let ϕ Ω : N Ω G → N Ω H denote the restriction of ϕ to the subset N Ω G . Then, the properties in the definition above mean that the following diagrams are commutative:
[diagrams: L H • ϕ Ω = L G and S H • ϕ Ω = ϕ * • S G ]
The image ϕ(n, i) of an edge (n, i) of G is defined as the edge (ϕ(n), i) of H. It is easy to check that the graphs (as objects) together with the graph homomorphisms (as arrows) form a category, which is called the category of graphs and noted Gr .
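A direct check of Definition 2.4 can be sketched as follows; the helper and the tiny graph H (with a hypothetical unary symbol "s") are our own illustration under the dict-based encoding used above, not part of the paper:

```python
def is_homomorphism(phi, g, h):
    """Definition 2.4: phi maps nodes of g to nodes of h, sends labeled nodes to
    labeled nodes with the same label, and commutes with the successor maps
    (the successor string is mapped componentwise, i.e. by phi*)."""
    if set(phi) != g["nodes"] or not set(phi.values()) <= h["nodes"]:
        return False
    for n, f in g["label"].items():
        if h["label"].get(phi[n]) != f:
            return False
        if [phi[s] for s in g["succ"][n]] != h["succ"][phi[n]]:
            return False
    return True

# The identity map is a homomorphism, while collapsing y onto x is not,
# since the successor of x would no longer be mapped correctly.
H = {"nodes": {"x", "y"}, "label": {"x": "s"}, "succ": {"x": ["y"]}}
assert is_homomorphism({"x": "x", "y": "y"}, H, H)
assert not is_homomorphism({"x": "x", "y": "x"}, H, H)
```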
Disconnected graphs and homomorphisms
This section is dedicated to some technical definitions the aim of which is the simplification of the definition of rewrite rules given in the following section. The next definition introduces the notion of what we call disconnected graph. Roughly speaking, the disconnected graph associated to a graph G and a set of edges E is obtained by redirecting every edge in E (whether it is yet disconnected or not) towards a new, unlabeled, target.
Definition 3.2 (Disconnected graph)
The disconnected graph associated to a graph G and a set of edges E of G is the following graph D(G, E):
• N D(G,E) = N G + N E , where N E is made of one new node n[i] for each edge (n, i) ∈ E,
• N Ω D(G,E) = N Ω G ,
• for each n ∈ N Ω G : L D(G,E) (n) = L G (n),
• for each n ∈ N Ω G and i ∈ {1, . . . , ar(n)}:
– if (n, i) ∉ E then succ D(G,E) (n, i) = succ G (n, i),
– if (n, i) ∈ E then succ D(G,E) (n, i) = n[i].
Definition 3.3 (Connection homomorphism)
The connection homomorphism associated to a graph G and a set of edges E of G is the homomorphism δ G,E : D(G, E) → G such that:
• if n ∈ N G then δ G,E (n) = n,
• if n[i] ∈ N E then δ G,E (n[i]) = succ G (n, i).
It is easy to check that δ G,E is a graph homomorphism.
Definition 3.4 (Disconnected homomorphism) The disconnected homomorphism associated to a graph homomorphism ϕ : G → H and a set of edges E of G is the homomorphism D ϕ,E : D(G, E) → D(H, ϕ(E)) such that:
• if n ∈ N G then D ϕ,E (n) = ϕ(n),
• if n[i] ∈ N E then D ϕ,E (n[i]) = ϕ(n)[i].
It is easy to check that D ϕ,E is a graph homomorphism.
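The following Python sketch mirrors Definitions 3.2 and 3.3 and the disconnected homomorphism above. The graph encoding is the same hypothetical dict-based one used earlier, edges are pairs (n, i) with i starting at 1, and the fresh node n[i] is represented by the string "n[i]"; none of these choices come from the paper itself.

```python
def disconnect(g, edges):
    """D(G, E) of Definition 3.2: every edge (n, i) in E is redirected towards a
    fresh unlabeled node written n[i]."""
    fresh = {f"{n}[{i}]" for (n, i) in edges}
    succ = {n: [f"{n}[{i + 1}]" if (n, i + 1) in edges else s
                for i, s in enumerate(g["succ"][n])]
            for n in g["label"]}
    return {"nodes": g["nodes"] | fresh, "label": dict(g["label"]), "succ": succ}

def connection(g, edges):
    """delta_{G,E} : D(G, E) -> G of Definition 3.3, given as a dict on nodes."""
    delta = {n: n for n in g["nodes"]}
    delta.update({f"{n}[{i}]": g["succ"][n][i - 1] for (n, i) in edges})
    return delta

def disconnected_hom(phi, edges):
    """D_{phi,E} : D(G, E) -> D(H, phi(E)) for a homomorphism phi given as a dict."""
    d = dict(phi)
    d.update({f"{n}[{i}]": f"{phi[n]}[{i}]" for (n, i) in edges})
    return d

# A one-cell circular list (hypothetical unary symbol "c"): disconnecting its
# only edge introduces the unlabeled node m[1], and delta sends m[1] back to m.
L = {"nodes": {"m"}, "label": {"m": "c"}, "succ": {"m": ["m"]}}
DL = disconnect(L, {("m", 1)})
assert DL["succ"]["m"] == ["m[1]"]
assert connection(L, {("m", 1)})["m[1]"] == "m"
```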
Data-structure rewriting
In this section we define data structure rewriting as a succession of rewrite steps. A rewrite step is defined from a rewrite rule and a matching. A rewrite rule is a span of graphs, i.e., a pair of graph homomorphisms with a common source:
L ← K → R (with δ : K → L and ρ : K → R). A matching is a morphism of graphs µ : L → G.
There are two kinds of rewrite steps.
• The first kind is called Local Redirection and Replacement Rewriting (LRR-rewriting, for short). Its rôle is twofold: adding to G a copy of the instance of the right-hand side R, and performing some local redirections of edges specified by means of the rewrite rule.
• The second kind of rewrite steps is called Global Redirection Rewriting (GR-Rewriting, for short). Its rôle consists in performing redirections: all incoming edges of some node a in G are redirected to a node b.
We define LRR-rewriting and GR-rewriting in the two following subsections. We use in both cases the double-pushout approach to define rewrite steps.
LRR-rewriting
Before defining LRR-rewrite rules and steps, we state first a technical result about the existence of inverse pushouts in our setting.
Theorem 4.1 Let µ : L → U be a graph homomorphism and E a set of edges of L. Then the square made of δ L,E : D(L, E) → L, µ : L → U, D µ,E : D(L, E) → D(U, µ(E)) and δ U,µ(E) : D(U, µ(E)) → U is a pushout in the category of graphs.
Proof. This result is an easy corollary of Theorem A.2.
Definition 4.2 (Disconnecting pushout) Let µ : L → U be a graph homomorphism and E a set of edges of L. The disconnecting pushout associated to µ and E is the pushout from Theorem 4.1.
It can be noted that the disconnecting pushout is not unique, in the sense that there are generally several inverse pushouts of:
the pair δ L,E : D(L, E) → L, µ : L → U.
Before stating the next definition, it should be reminded that N D(L,E) = N L + N E = N Ω L + N X L + N E .
Definition 4.3 (LRR-rewrite rule) An LRR-rewrite rule is a span of graphs L ← D(L, E) → R, with δ L,E : D(L, E) → L and ρ : D(L, E) → R, where E is a set of edges of L, and where ρ(N X L ) ⊆ N X R and the restriction of ρ to N X L is injective.
[figure: the span L ← D(L, {(m, 2)}) → R for adding an element at the head of a circular list of length one, where m : cons is the original cell and (m, 2) its cyclic pointer]
In this example we show how (local) edge redirection can be achieved through edge disconnection. Since an element is added to the head of a circular list (of length 1), one has to make the cyclic pointer (m, 2) point to the newly added cell. For this we disconnect the edge (m, 2) in D(L, {(m, 2)}) in order to be able to redirect it, thanks to an appropriate homomorphism ρ, to the new cell in R,
namely q. Here, ρ = [n → n; m[2] → q; · · ·]
One may also remark that graph R still has a node labelled by add. In this paper we do not tackle the problem of garbage collection which has been treated in a categorical way in e.g. [4].
Definition 4.5 (Matching) A matching with respect to a rewrite rule L ← D(L, E) → R is a graph homomorphism µ : L → U
that is Ω-injective, which means that the restriction of the map µ to N Ω L is injective.
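Operationally, Ω-injectivity of a matching is easy to test. The small helper below is our own sketch, with the matching given as a dict on the nodes of L; the node names in the usage example are hypothetical and echo the situation of Example 4.9 further down.

```python
def is_omega_injective(mu, labeled_nodes_of_L):
    """Definition 4.5: the restriction of mu to the labeled nodes of L is injective."""
    images = [mu[n] for n in labeled_nodes_of_L]
    return len(images) == len(set(images))

# Two labeled nodes n1, n2 collapsed onto the same node m of U give a matching
# that is *not* Omega-injective (cf. Example 4.9); distinct images are fine.
assert not is_omega_injective({"n1": "m", "n2": "m"}, ["n1", "n2"])
assert is_omega_injective({"n1": "m1", "n2": "m2"}, ["n1", "n2"])
```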
Definition 4.6 (LRR-Rewrite step) Let r = ( L ← D(L, E) → R )
be a rewrite rule, and µ : L → U a matching with respect to r. Then U rewrites into V using rule r if there are graph homomorphisms ν : R → V and ρ ′ : D(U, µ(E)) → V such that the following square is a pushout in the category of graphs (Gr):
[pushout square: ρ : D(L, E) → R on top, D µ,E : D(L, E) → D(U, µ(E)) on the left, ν : R → V on the right, and ρ ′ : D(U, µ(E)) → V at the bottom]
Thus, a rewrite step corresponds to a double pushout in the category of graphs:
[double pushout: the disconnecting pushout L ← D(L, E), U ← D(U, µ(E)) on the left, and the pushout D(L, E) → R, D(U, µ(E)) → V on the right, linked by µ : L → U, D µ,E and ν : R → V]
Theorem 4.7 (Rewrite step is feasible) Let r be a rewrite rule, and µ : L → U a matching with respect to r. Then U can be rewritten using rule r. More precisely, the required pushout can be built as follows (the notations are simplified by dropping E and µ(E)):
• the set of nodes of V is N V = (N R +N D(U) )/ ∼, where ∼ is the equivalence relation generated by D µ (n) ∼ ρ(n) for each node n of D(L),
• the maps ν and ρ ′ , on the sets of nodes, are the inclusions of N R and N D(U) in N R + N D(U) , respectively, followed by the quotient map with respect to ∼,
• N Ω V is made of the classes modulo ∼ which contain at least one labeled node, and a section π : N Ω V → N Ω R + N Ω D(U) of the quotient map is chosen, which means that the class of π(n) is n, for each n ∈ N Ω V ,
• for each n ∈ N Ω V , the label of n is the label of π(n),
• for each n ∈ N Ω V , the successors of n are the classes of the successors of π(n).
Moreover, the resulting pushout does not depend on the choice of the section π.
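The construction of V in Theorem 4.7 can be read as a small program. The sketch below is not from the paper; it relies on the hypothetical dict encoding used in the earlier sketches, tags nodes of R and of D(U) with "R" and "U" to keep the disjoint union explicit, computes the quotient with a union-find structure, and then chooses a labeled representative whenever possible, which is exactly the rôle of the section π.

```python
def build_pushout(R, DU, dl_nodes, rho, d_mu):
    """Pushout of Theorem 4.7: N_V = (N_R + N_D(U)) / ~, where ~ is generated by
    d_mu(n) ~ rho(n) for every node n of D(L).  rho and d_mu are dicts sending
    nodes of D(L) to nodes of R and of D(U) respectively."""
    parent = {}

    def find(x):                      # union-find with path compression
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for n in dl_nodes:                # generate the equivalence relation ~
        parent[find(("U", d_mu[n]))] = find(("R", rho[n]))

    classes = {}
    for t in [("R", x) for x in R["nodes"]] + [("U", x) for x in DU["nodes"]]:
        classes.setdefault(find(t), []).append(t)

    def label_of(t):
        side, x = t
        return (R if side == "R" else DU)["label"].get(x)

    # The section pi: pick a labeled representative whenever the class has one
    # (by Theorem 4.7 the result does not depend on this choice).
    pi = {c: next((t for t in ts if label_of(t) is not None), ts[0])
          for c, ts in classes.items()}

    label, succ = {}, {}
    for c, (side, x) in pi.items():
        g = R if side == "R" else DU
        if x in g["label"]:
            label[c] = g["label"][x]
            succ[c] = [find((side, s)) for s in g["succ"][x]]
    # nu sends a node x of R to find(("R", x)); rho' sends x of D(U) to find(("U", x)).
    return {"nodes": set(classes), "label": label, "succ": succ}
```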
Corollary 4.8 With the notations of Theorem 4.7, N Ω V = (N Ω U − µ(N Ω L )) + N Ω R .
Proof. Both Theorem 4.7 and Corollary 4.8 are derived from Theorem A.4, their proofs are given at the end of the appendix.
Example 4.9
Here we consider the case of a non Ω-injective matching in order to show that there may be no double pushout in such cases. Thus justifying our restriction over acceptable matchings (see Definition 4.5).
In this example we identify, via the homomorphism µ, two nodes of L labelled by g, namely n 1 and n 2 , with a single node m. In the span we disconnect the two edges coming from the g's and redirect them to two different nodes labeled by different constants, b and c. This is done by the homomorphism ρ = id. Now, as both edges have been merged by the matching in U , the second (right) pushout cannot exist, since a single edge cannot point to both b and c at the same time. Note that this impossibility does not denote a limitation of our formalism. In Figure 1 we give the span for lists of size greater than 1, as well as the application of the rule to a list of size 3. Notice how the disconnection is actually used in order to redirect the pointer (n 6 , 2). The homomorphisms of the bottom layer show that the disconnected edge, pointing to the unlabeled node c 4 [2], is mapped to c 1 to the left and to n 8 to the right. The mechanism of disconnection allows the categorical manipulation of an edge.
The Ω-injectivity hypothesis is also useful in this rule since edges (n 6 , 2) and (n 3 , 2) must be different, thus a list of size less than or equal to one cannot be matched by this rule.
[Figure 1: the span for lists of size greater than 1 and its application to a list of size 3; the labels indicate the node maps n6[2] → n3 (left), n6[2] → n8 (right), and n1 → o, n3 → c1, n6 → c4 for the vertical matchings]
GR-Rewriting
Let U be a graph and let a, b ∈ N U . We say that U rewrites into V using the global redirection from a to b, and write U a→b −→ V , iff V is obtained from U by redirecting all edges targeting node a so that they point towards node b. This kind of rewriting is very useful when dealing with rooted term graphs (see, e.g. [4]). We define below one GR-rewriting step following the double pushout approach.
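Operationally, a GR-rewrite step has a very simple reading. The sketch below is our own code under the same hypothetical dict encoding; it performs the redirection directly and reproduces the outcome of the redirection n → q applied, later in Example 4.16, to the graph G of Example 2.3.

```python
def global_redirect(g, a, b):
    """U --(a -> b)--> V : every edge whose target is a is redirected towards b;
    nodes and labels are left untouched."""
    succ = {n: [b if s == a else s for s in ss] for n, ss in g["succ"].items()}
    return {"nodes": set(g["nodes"]), "label": dict(g["label"]), "succ": succ}

# The graph G of Example 2.3, and the redirection n -> q of Example 4.16.
G = {"nodes": {"m", "n", "o", "p", "q", "r"},
     "label": {"m": "f", "o": "g", "p": "h"},
     "succ":  {"m": ["n", "o"], "o": ["n", "p"], "p": ["q", "r", "m"]}}
V = global_redirect(G, "n", "q")
assert V["succ"] == {"m": ["q", "o"], "o": ["q", "p"], "p": ["q", "r", "m"]}
```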
A GR-rewrite rule is the span of graphs P ← SW → P, with λ : SW → P and ρ : SW → P, where
• P is made of two unlabeled nodes ar and pr,
• SW (switch graph) is made of three unlabeled nodes ar, pr and mr,
• λ(ar) = λ(mr) = ar and λ(pr) = pr,
• ρ(ar) = ar and ρ(pr) = ρ(mr) = pr.
Let r = ( P ← SW → P ) be a GR-rewrite rule, and µ : P → U be a GR-matching. Let D µ : SW → D(U, µ(ar)) be the homomorphism defined by D µ (ar) = µ(ar), D µ (pr) = µ(pr) and D µ (mr) = mr. Then U rewrites into V using rule r if there are graph homomorphisms ν : P → V and ρ ′ : D(U, µ(ar)) → V such that the following square is a pushout in the category of graphs (Gr):
[pushout square: ρ : SW → P on top, D µ : SW → D(U, µ(ar)) on the left, ν : P → V on the right, and ρ ′ : D(U, µ(ar)) → V at the bottom]
Thus, a GR-rewrite step, U µ(ar)→µ(pr) −→ V , corresponds to a double pushout in the category of graphs:
[double pushout: P ← SW → P on top, U ← D(U, µ(ar)) → V at the bottom, linked by µ : P → U, D µ and ν : P → V]
The construction of graph V is straightforward. It may be deduced from Theorem A.4 given in the appendix.
Example 4.16
In this example we show how global redirection works. In the graph G given in Example 2.3, we want to redirect all edges with target n towards q. For this purpose, we define the homomorphism µ from P to G by mapping appropriately the nodes ar (ante-rewriting) and pr (post-rewriting), i.e. in our case µ = [ar → n; pr → q]. Applying this to G, we get the following double pushout. Notice how the node mr (mid-rewriting) is used: it is mapped to n on the left and to q on the right. Thus, in the middle graph, mr allows us to disconnect the edges targeting n in order to redirect them towards q.
[the three graphs of the double pushout: the graph G, the intermediate graph containing the extra unlabeled node mr that receives the edges formerly targeting n, and the resulting graph in which those edges target q]
Example 4.17 In this additional example, we give rewriting rules defining the function length (written ♯) which computes the size of non-empty circular lists. In this example every LRR-rewriting is followed by a GR-rewriting. That is why we specify the global rewriting that should be performed after each LRR-rewrite step.
The first rule simply introduces an auxiliary function, ♯ b , which has two arguments. The first one indicates the head of the list while the second one will move along the list in order to measure it. We have the following span for ♯: [figure: the span for ♯, involving the edge (m, i)]. The next rule defines ♯ b when its arguments are different. Once again we use the hypothesis of Ω-injectivity to ensure that both cons nodes cannot be identified via matching.
Conclusion
We defined two basic rewrite steps dedicated to data-structure rewriting. The rewrite relationships induced by LRR-rewrite rules as well as GR-rewrite rules over graphs are trickier than the classical ones over terms (trees). There was no room in the present paper to discuss classical properties of the rewrite relationship induced by the above definitions, such as confluence and termination, or its extension to narrowing. However, our preliminary investigation shows that confluence is not guaranteed even for non-overlapping rewrite systems, and thus user-definable strategies are necessary when using the full power of data-structure rewriting. In addition, the integration of LRR- and GR-rewriting in one step is also possible and can be helpful in describing some algorithms.
On the other hand, data-structures are better represented by means of graphics (e.g. [21]). Our purpose in this paper was rather the definition of the basic rewrite steps for data-structures. We intend to consider syntactical issues in future work.
where G 0 , G 1 and G 2 are made of only one node: n 0 in G 0 is unlabeled, n 1 : a 1 in G 1 and n 2 : a 2 in G 2 , where a 1 and a 2 are distinct constants. This span has no pushout, because there cannot be any commutative square of graphs based on it.
Theorem A.2 below states a sufficient condition for a commutative square of graphs to be a pushout, and Theorem A.4 states a sufficient condition for a span of graphs to have a pushout, together with a construction of this pushout.
In the following, when G i occurs as an index, it is replaced by i.
Theorem A.2 (Pushout of graphs from pushout of sets) If a square Γ of the following form in the category of graphs:
[square Γ: ϕ 1 : G 0 → G 1 , ϕ 2 : G 0 → G 2 , ψ 1 : G 1 → G 3 , ψ 2 : G 2 → G 3 ]
is such that:
1. Γ is a commutative square in Gr,
2. N (Γ) is a pushout in Set,
3. each n ∈ N Ω 3 is in ψ i (N Ω i ) for i = 1 or i = 2,
then Γ is a pushout in Gr.
Point (2) implies that each n ∈ N 3 is the image of at least a node in G 1 or in G 2 , and point (3) adds that, if n is labeled, then it is the image of at least a labeled node in G 1 or in G 2 . Proof. Let us consider a commutative square Γ ′ in Gr of the form:
[square Γ ′ : ϕ 1 : G 0 → G 1 , ϕ 2 : G 0 → G 2 , θ 1 : G 1 → G 4 , θ 2 : G 2 → G 4 ]
Then N (Γ ′ ) is a commutative square in Set, and since N (Γ) is a pushout in Set, there is a unique map θ :
N 3 → N 4 such that θ • ψ i = θ i , for i = 1, 2.
[diagram: the maps ϕ 1 , ϕ 2 , ψ 1 , ψ 2 , θ 1 , θ 2 between N 0 , N 1 , N 2 , N 3 , N 4 , together with θ : N 3 → N 4 ]
Let us now prove that θ actually is a graph homomorphism. According to Definition 2.4, we have to prove that, for each labeled node n of G 3 , its image n ′ = θ(n) is a labeled node of G 4 , and that L 4 (n ′ ) = L 3 (n) and S 4 (n ′ ) = θ * (S 3 (n)).
So, let n ∈ N Ω 3 , and let n ′ = θ(n) ∈ N 4 . From our third assumption, without loss of generality, n = ψ 1 (n 1 ) for some n 1 ∈ N Ω 1 . It follows that θ 1 (n 1 ) = θ(ψ 1 (n 1 )) = θ(n) = n ′ : n = ψ 1 (n 1 ) and n ′ = θ 1 (n 1 ) .
Since n 1 is labeled and θ 1 is a graph homomorphism, the node n ′ is labeled. Since ψ 1 and θ 1 are graph homomorphisms, L 3 (n) = L 1 (n 1 ) and L 4 (n ′ ) = L 1 (n 1 ), thus L 3 (n) = L 4 (n ′ ), as required for labels.
Since ψ 1 and θ 1 are graph homomorphisms, S 3 (n) = ψ * 1 (S 1 (n 1 )) and S 4 (n ′ ) = θ 1 * (S 1 (n 1 )). So, θ * (S 3 (n)) = θ * (ψ * 1 (S 1 (n 1 ))) = θ 1 * (S 1 (n 1 )) = S 4 (n ′ ), as required for successors.
This proves that θ : G 3 → G 4 is a graph homomorphism. Then, from the faithfulness of the functor N (Proposition A.1), for i ∈ {1, 2}, the equality of the underlying maps θ • ψ i = θ i : N i → N 4 is an equality of graph homomorphisms:
θ • ψ i = θ i : G i → G 4 . Now, let θ ′ : G 3 → G 4 be a graph homomorphism such that θ ′ • ψ i = θ i for i ∈ {1, 2}. Since N (Γ)
is a pushout in Set, the underlying maps are equal: θ = θ ′ : N 3 → N 4 . Then, it follows from the faithfulness of the functor N that the graph homomorphisms are equal: θ = θ ′ : G 3 → G 4 .
For each span of graphs Σ:
[span: ϕ 1 : G 0 → G 1 and ϕ 2 : G 0 → G 2 ]
let ∼ denote the equivalence relation on the disjoint union N 1 + N 2 generated by: ϕ 1 (n 0 ) ∼ ϕ 2 (n 0 ) for all n 0 ∈ N 0 , let N 3 be the quotient set N 3 = (N 1 + N 2 )/ ∼, and ψ : N 1 + N 2 → N 3 the quotient map. Two nodes n, n ′ in N 1 + N 2 are called equivalent if n ∼ n ′ . For i ∈ {1, 2}, let ψ i : N i → N 3 be made of the inclusion of N i in N 1 + N 2 followed by ψ. Then, it is well-known that the square of sets:
[square: ϕ 1 : N 0 → N 1 , ϕ 2 : N 0 → N 2 , ψ 1 : N 1 → N 3 , ψ 2 : N 2 → N 3 ]
is a pushout, which can be called canonical.
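As a concrete companion to this construction, the following sketch (our own code; elements of N 1 and N 2 are tagged 1 and 2 to keep the disjoint union explicit) computes the canonical pushout of a span of sets by merging classes along ϕ 1 (n) ∼ ϕ 2 (n):

```python
def canonical_pushout(n0, phi1, phi2, n1, n2):
    """Canonical pushout N3 = (N1 + N2) / ~ of the span N1 <- N0 -> N2, where ~
    is generated by phi1(n) ~ phi2(n) for n in N0.  Returns one representative
    per class together with psi1 : N1 -> N3 and psi2 : N2 -> N3 (as dicts)."""
    classes = {(1, x): {(1, x)} for x in n1}
    classes.update({(2, x): {(2, x)} for x in n2})
    rep = {t: t for t in classes}                 # element of N1 + N2 -> its class
    for n in n0:
        a, b = rep[(1, phi1[n])], rep[(2, phi2[n])]
        if a != b:                                # merge the two classes
            classes[a] |= classes[b]
            for t in classes.pop(b):
                rep[t] = a
    psi1 = {x: rep[(1, x)] for x in n1}
    psi2 = {x: rep[(2, x)] for x in n2}
    return set(classes), psi1, psi2

# Hypothetical example: N0 = {0}, phi1(0) = "a", phi2(0) = "x" glues "a" and "x",
# so the pushout has two classes: {a, x} and {b}.
n3, psi1, psi2 = canonical_pushout({0}, {0: "a"}, {0: "x"}, {"a", "b"}, {"x"})
assert len(n3) == 2 and psi1["a"] == psi2["x"]
```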
where n i−1 is labeled and p i is unlabeled.
It should be reminded that:
• N D(L) = N L +N E and N D(U) = N U +µ(N E ), with D µ (N L ) ⊆ N U and D µ injective on N E (the last point comes from the fact that µ is Ω-injective);
• ρ(N X L ) ⊆ N X R and the restriction of ρ to N X L is injective, since L ← D(L) → R
is a rewrite rule.
Case 1: n i−1 is a node of R. Then n i−1 ∈ N Ω R . Since ρ(N X L ) ⊆ N X R and p i is unlabeled, it follows that p i ∈ N E . Then, since D µ maps N E to µ(N E ), n i ∈ µ(N E ). Then k > i, since the last node in the chain is labeled. Since D µ is injective on N E , and maps N L to N U , it follows that p i+1 = n i . So, p i = p i+1 , which is impossible since the chain is minimal.
[diagram: p i ∈ N E is mapped to n i−1 ∈ N Ω R by ρ and to n i ∈ µ(N E ) by D µ , and p i+1 ∈ N E is also mapped to n i by D µ ]
Case 2: n i−1 is a node of D(U ). Then n i−1 ∈ N Ω U . Since D µ maps N E to µ(N E ) and D µ (N L ) on N U , it follows that p i ∈ N X L . Since ρ maps N X L to N X R , it follows that n i ∈ N X R . Then k > i, since the last node in the chain is labeled. Then p i+1 ∈ N X L + N E . If p i+1 ∈ N E , a contradiction follows as in case 1. Hence, p i+1 ∈ N X L . Since the restriction of ρ to N X L is injective, p i+1 = p i , which is also impossible since the chain is minimal.
[diagram: p i ∈ N X L is mapped to n i−1 ∈ N Ω U by D µ and to n i ∈ N X R by ρ, and p i+1 ∈ N X L + N E is also mapped to n i ]
Finally, it has been proved that all the nodes in this chain are labeled, which concludes the proof.
Proof of Corollary 4.8. We use the proof of theorem 4.7, as well as the notations in this proof. Let n ∈ N Ω V , we have to choose a representative r(n) of n. It should be reminded that N Ω D(U) = N Ω U . (R.) If there is a node n R ∈ N Ω R such that n = ν(n R ), let us prove that it is unique. Let n ′ R ∈ N Ω R be another node such that n = ν(n ′ R ), i.e., such that n R ∼ n ′ R . Let us consider a chain with minimal length k ≥ 1 from n R (= n 0 ) to n ′ R (= n k ); we know that all the nodes in this chain are labeled. Since n 0 and n 1 cannot be both in N R , it follows that n 1 ∈ N Ω U , so that p 0 , p 1 ∈ N Ω L and n 1 = µ(p 0 ) = µ(p 1 ). The Ω-injectivity of µ implies that p 0 = p 1 , but this is impossible. So, we have proved that ν Ω : N Ω R → N Ω V is injective, and we define r(n) = n R .
(U.) If there is no node n R ∈ N Ω R such that n = ν(n R ), then there is a node n U ∈ N Ω U such that n = ρ ′ (n U ). Let us prove that it is unique. Let n ′ U ∈ N Ω U be another node such that n = ρ ′ (n ′ U ), i.e., such that n U ∼ n ′ U . Let us consider a chain with minimal length k ≥ 1 from n U (= n 0 ) to n ′ U (= n k ); we know that all the nodes in this chain are labeled. Since n 0 and n 1 cannot be both in N U , it follows that n 1 ∈ N Ω R , which contradicts our assumption: there is no node n R ∈ N Ω R such that n = ν(n R ). Let N Ω U denote the subset of N Ω U made of the nodes which are not equivalent to any node in N Ω R . So, we have proved that the restriction of ρ ′ Ω : N Ω D(U) → N Ω V to N Ω U is injective, and we define r(n) = n U . (L.) We still have to prove that N Ω U = N Ω U − µ(N Ω L ), i.e., that a node n U ∈ N Ω U is equivalent to a node n R ∈ N Ω R if and only if there is node n L ∈ N Ω L such that n U = µ(n L ). Clearly, if n L ∈ N Ω L and n U = µ(n L ), let n R = ρ(n L ), then n R ∈ N Ω R and n U ∼ n R . Now, let n U ∼ n R for some n U ∈ N Ω U and n R ∈ N Ω R . Let us consider a chain with minimal length k ≥ 1 from n R (= n 0 ) to n U (= n k ); we know that all the nodes in this chain are labeled. If k > 1, then the Ω-injectivity of µ leads to a contradiction, as in part (R) of the proof. Hence k = 1, which means that p 1 ∈ N Ω L is such that n R = ρ(p 1 ) and n U = µ(p 1 ), so that there is node n L = p 1 ∈ N Ω L such that n U = µ(n L ).
This concludes the proof that: N Ω V = (N Ω U − µ(N Ω L )) + N Ω R .
| 5,817 |
cs0503065
|
1678440633
|
We tackle the problem of data-structure rewriting including pointer redirections. We propose two basic rewrite steps: (i) Local Redirection and Replacement steps, the aim of which is to redirect specific pointers determined by means of a pattern, as well as to add new information to an existing data-structure; and (ii) Global Redirection steps, which are aimed at redirecting all pointers targeting a node towards another one. We define these two rewriting steps following the double pushout approach. We first define the category of graphs we consider and then define rewrite rules as pairs of graph homomorphisms of the form "L ← K → R". Unfortunately, inverse pushouts (complement pushouts) are not unique in our setting and pushouts do not always exist. Therefore, we define rewriting steps so that a rewrite rule can always be performed once a matching is found.
|
Recently, the authors of @cite_11 have also been interested in classical data-structures built using pointers. Their work is complementary to ours in the sense that they are rather concerned with data-structure shapes, described by means of so-called graph reduction specifications.
|
{
"abstract": [
"We present a new algorithm for checking the shape-safety of pointer manipulation programs. In our model, an abstract, data-less pointer structure is a graph. A shape is a language of graphs. A pointer manipulation program is modelled abstractly as a set of graph rewrite rules over such graphs where each rule corresponds to a pointer manipulation step. Each rule is annotated with the intended shape of its domain and range and our algorithm checks these annotations."
],
"cite_N": [
"@cite_11"
],
"mid": [
"1820275044"
]
}
|
Data-Structure Rewriting
|