the complexity of reasoning for fragments of default logic
default logic was introduced by reiter in 1980. in 1992, gottlob classified the complexity of the extension existence problem for propositional default logic as $\Sigma_2^p$-complete, and the complexity of the credulous and skeptical reasoning problems as $\Sigma_2^p$-complete, resp. $\Pi_2^p$-complete. additionally, he investigated restrictions on the default rules, namely semi-normal default rules. in 1992, selman made a similar study with disjunction-free and unary default rules. in this paper we systematically restrict the set of allowed propositional connectives. we give a complete complexity classification, for all sets of boolean functions in the sense of post's lattice, for all three common decision problems for propositional default logic. we show that the complexity forms a hexachotomy ($\Sigma_2^p$-, $\Delta_2^p$-, np-, p-, nl-complete, trivial) for the extension existence problem, while for the credulous and skeptical reasoning problems we obtain similar classifications without trivial cases.
spectral sparsification of graphs
we introduce a new notion of graph sparsification based on spectral similarity of graph laplacians: spectral sparsification requires that the laplacian quadratic form of the sparsifier approximate that of the original. this is equivalent to saying that the laplacian of the sparsifier is a good preconditioner for the laplacian of the original. we prove that every graph has a spectral sparsifier of nearly linear size. moreover, we present an algorithm that produces spectral sparsifiers in time $\widetilde{o}(m)$, where $m$ is the number of edges in the original graph. this construction is a key component of a nearly-linear time algorithm for solving linear equations in diagonally-dominant matrices. our sparsification algorithm makes use of a nearly-linear time algorithm for graph partitioning that satisfies a strong guarantee: if the partition it outputs is very unbalanced, then the larger part is contained in a subgraph of high conductance.
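as a toy illustration of the spectral similarity condition behind this notion (a hedged sketch with made-up example graphs, not the paper's construction), the following python snippet builds two small laplacians and tests the quadratic-form inequality $(1-\epsilon) x^t l_g x \le x^t l_h x \le (1+\epsilon) x^t l_g x$ on random vectors.

import numpy as np

def laplacian(n, weighted_edges):
    # graph laplacian l = d - a from (u, v, weight) triples
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def looks_spectrally_similar(L_G, L_H, eps, trials=1000, seed=0):
    # sample random vectors and test the quadratic-form inequalities;
    # this is only a necessary check, a proof would compare the
    # eigenvalues of the matrix pencil (L_H, L_G)
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(L_G.shape[0])
        qG, qH = x @ L_G @ x, x @ L_H @ x
        if not ((1 - eps) * qG <= qH <= (1 + eps) * qG):
            return False
    return True

# toy example: g is a triangle, h keeps two edges with re-weighting
L_G = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
L_H = laplacian(3, [(0, 1, 1.5), (1, 2, 1.5)])
print(looks_spectrally_similar(L_G, L_H, eps=0.9))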
languages recognized with unbounded error by quantum finite automata
this paper has been superseded by arxiv:1007.3624
instruction sequences and non-uniform complexity theory
we develop theory concerning non-uniform complexity in a setting in which the notion of single-pass instruction sequence considered in program algebra is the central notion. we define counterparts of the complexity classes p/poly and np/poly and formulate a counterpart of the complexity theoretic conjecture that np is not included in p/poly. in addition, we define a notion of completeness for the counterpart of np/poly using a non-uniform reducibility relation and formulate complexity hypotheses which concern restrictions on the instruction sequences used for computation. we think that the theory developed opens up an additional way of investigating issues concerning non-uniform complexity.
approximating the volume of unions and intersections of high-dimensional geometric objects
we consider the computation of the volume of the union of high-dimensional geometric objects. while showing that this problem is #p-hard already for very simple bodies (i.e., axis-parallel boxes), we give a fast fpras for all objects where one can: (1) test whether a given point lies inside the object, (2) sample a point uniformly, (3) calculate the volume of the object in polynomial time. all three oracles can be weak, that is, just approximate. this implies that klee's measure problem and the hypervolume indicator can be approximated efficiently even though they are #p-hard and hence cannot be solved exactly in time polynomial in the number of dimensions unless p=np. our algorithm also allows us to efficiently approximate the volume of the union of convex bodies given by weak membership oracles. for the analogous problem of the intersection of high-dimensional geometric objects we prove #p-hardness for boxes and show that there is no multiplicative polynomial-time $2^{d^{1-\epsilon}}$-approximation for certain boxes unless np=bpp, but give a simple additive polynomial-time $\epsilon$-approximation.
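the three oracles above are exactly what the textbook monte carlo estimator for union volumes uses; the sketch below (a simplified illustration with exact oracles for axis-parallel boxes, not the paper's fpras with weak oracles) samples a box with probability proportional to its volume, samples a point uniformly inside it, and divides by the number of boxes containing the point to correct for multiple counting.

import random
from math import prod

class Box:
    def __init__(self, lo, hi):          # lo, hi: tuples of equal dimension d
        self.lo, self.hi = lo, hi
    def volume(self):                    # oracle (3)
        return prod(h - l for l, h in zip(self.lo, self.hi))
    def contains(self, p):               # oracle (1)
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))
    def sample(self):                    # oracle (2)
        return tuple(random.uniform(l, h) for l, h in zip(self.lo, self.hi))

def union_volume(boxes, samples=100000):
    vols = [b.volume() for b in boxes]
    total = sum(vols)
    acc = 0.0
    for _ in range(samples):
        b = random.choices(boxes, weights=vols)[0]
        p = b.sample()
        acc += 1.0 / sum(1 for c in boxes if c.contains(p))
    return total * acc / samples

boxes = [Box((0, 0), (2, 2)), Box((1, 1), (3, 3))]   # true union volume = 7
print(union_volume(boxes))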
the golden ratio encoder
this paper proposes a novel nyquist-rate analog-to-digital (a/d) conversion algorithm which achieves exponential accuracy in the bit-rate despite using imperfect components. the proposed algorithm is based on a robust implementation of a beta-encoder where the value of the base beta is equal to the golden mean. it was previously shown that beta-encoders can be implemented in such a way that their exponential accuracy is robust against threshold offsets in the quantizer element. this paper extends this result by allowing for imperfect analog multipliers with imprecise gain values as well. a formal computational model for algorithmic encoders and a general test bed for evaluating their robustness are also proposed.
stability of maximum likelihood based clustering methods: exploring the backbone of classifications (who is keeping you in that community?)
components of complex systems are often classified according to the way they interact with each other. in graph theory such groups are known as clusters or communities. many different techniques have been recently proposed to detect them, some of which involve inference methods using either bayesian or maximum likelihood approaches. in this article, we study a statistical model designed for detecting clusters based on connection similarity. the basic assumption of the model is that the graph was generated by a certain grouping of the nodes, and an expectation maximization algorithm is employed to infer that grouping. we show that the method admits further development to yield a stability analysis of the groupings that quantifies the extent to which each node influences its neighbors' group membership. our approach naturally allows for the identification of the key elements responsible for the grouping and their resilience to changes in the network. given the generality of the assumptions underlying the statistical model, such nodes are likely to play special roles in the original system. we illustrate this point by analyzing several empirical networks for which further information about the properties of the nodes is available. the search for and identification of stabilizing nodes thus constitutes a novel technique to characterize the relevance of nodes in complex networks.
a computer verified, monadic, functional implementation of the integral
we provide a computer verified exact monadic functional implementation of the riemann integral in type theory. together with previous work by o'connor, this may be seen as the beginning of the realization of bishop's vision to use constructive mathematics as a programming language for exact analysis.
computing with classical real numbers
there are two incompatible coq libraries that have a theory of the real numbers; the coq standard library gives an axiomatic treatment of classical real numbers, while the corn library from nijmegen defines constructively valid real numbers. unfortunately, this means results about one structure cannot easily be used in the other structure. we present a way of interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. this allows us to use o'connor's decision procedure for solving ground inequalities present in corn to solve inequalities about the reals from the coq standard library, and it allows theorems from the coq standard library to apply to problems about the corn reals.
multirate anypath routing in wireless mesh networks
in this paper, we present a new routing paradigm that generalizes opportunistic routing in wireless mesh networks. in multirate anypath routing, each node uses both a set of next hops and a selected transmission rate to reach a destination. using this rate, a packet is broadcast to the nodes in the set and one of them forwards the packet on to the destination. to date, there is no theory capable of jointly optimizing both the set of next hops and the transmission rate used by each node. we bridge this gap by introducing a polynomial-time algorithm for this problem and provide a proof of its optimality. the proposed algorithm has the same running time as regular shortest-path algorithms and is therefore suitable for deployment in link-state routing protocols. we conducted experiments in an 802.11b testbed network, and our results show that multirate anypath routing performs on average 80% and up to 6.4 times better than anypath routing with a fixed rate of 11 mbps. if the rate is fixed at 1 mbps instead, performance improves by up to one order of magnitude.
fairness in combinatorial auctioning systems
the combinatorial auctioning system (cas) is a multi-agent system widely used by government agencies, buyers and sellers in a market economy to attain optimized resource allocation. we study another important aspect of resource allocation in cas, namely fairness. we present two important notions of fairness in cas, extended fairness and basic fairness. we give an algorithm that works by incorporating a metric to ensure fairness in a cas that uses the vickrey-clarke-groves (vcg) mechanism, and uses an algorithm of sandholm to achieve optimality. mathematical formulations are given to represent measures of extended fairness and basic fairness.
largest empty circle centered on a query line
the largest empty circle problem seeks the largest circle centered within the convex hull of a set $p$ of $n$ points in $\mathbb{r}^2$ and devoid of points from $p$. in this paper, we introduce a query version of this well-studied problem. in our query version, we are required to preprocess $p$ so that when given a query line $q$, we can quickly compute the largest empty circle centered at some point on $q$ and within the convex hull of $p$. we present solutions for two special cases and the general case; all our queries run in $o(\log n)$ time. we restrict the query line to be horizontal in the first special case, which we preprocess in $o(n \alpha(n) \log n)$ time and space, where $\alpha(n)$ is the slowly growing inverse of ackermann's function. when the query line is restricted to pass through a fixed point, the second special case, our preprocessing takes $o(n \alpha(n)^{o(\alpha(n))} \log n)$ time and space. we use insights from the two special cases to solve the general version of the problem with preprocessing time and space in $o(n^3 \log n)$ and $o(n^3)$ respectively.
extended asp tableaux and rule redundancy in normal logic programs
we introduce an extended tableau calculus for answer set programming (asp). the proof system is based on the asp tableaux defined in [gebser&schaub, iclp 2006], with an added extension rule. we investigate the power of extended asp tableaux both theoretically and empirically. we study the relationship of extended asp tableaux with the extended resolution proof system defined by tseitin for sets of clauses, and separate extended asp tableaux from asp tableaux by giving a polynomial-length proof for a family of normal logic programs p_n for which asp tableaux has exponential-length minimal proofs with respect to n. additionally, extended asp tableaux provide interesting insight into the effect of program simplification on the lengths of proofs in asp. closely related to extended asp tableaux, we empirically investigate the effect of redundant rules on the efficiency of asp solving. to appear in theory and practice of logic programming (tplp).
fermions and loops on graphs. i. loop calculus for determinant
this paper is the first in the series devoted to evaluation of the partition function in statistical models on graphs with loops in terms of the berezin/fermion integrals. the paper focuses on a representation of the determinant of a square matrix in terms of a finite series, where each term corresponds to a loop on the graph. the representation is based on a fermion version of the loop calculus, previously introduced by the authors for graphical models with finite alphabets. our construction contains two levels. first, we represent the determinant in terms of an integral over anti-commuting grassmann variables, with some reparametrization/gauge freedom hidden in the formulation. second, we show that a special choice of the gauge, called bp (bethe-peierls or belief propagation) gauge, yields the desired loop representation. the set of gauge-fixing bp conditions is equivalent to the gaussian bp equations, discussed in the past as efficient (linear scaling) heuristics for estimating the covariance of a sparse positive matrix.
fermions and loops on graphs. ii. monomer-dimer model as series of determinants
we continue the discussion of the fermion models on graphs that started in the first paper of the series. here we introduce a graphical gauge model (ggm) and show that: (a) it can be stated as an average/sum of a determinant defined on the graph over a $\mathbb{z}_{2}$ (binary) gauge field; (b) it is equivalent to the monomer-dimer (md) model on the graph; (c) the partition function of the model allows an explicit expression in terms of a series over disjoint directed cycles, where each term is a product of local contributions along the cycle and the determinant of a matrix defined on the remainder of the graph (excluding the cycle). we also establish a relation between the md model on the graph and the determinant series discussed in the first paper, considered, however, with a simple non-belief-propagation choice of the gauge. we conclude with a discussion of possible analytic and algorithmic consequences of these results, as well as related questions and challenges.
dynamic tree algorithms
in this paper, a general tree algorithm processing a random flow of arrivals is analyzed. capetanakis--tsybakov--mikhailov's protocol in the context of communication networks with random access is an example of such an algorithm. in computer science, this corresponds to a trie structure with a dynamic input. mathematically, it is related to a stopped branching process with exogenous arrivals (immigration). under quite general assumptions on the distribution of the number of arrivals and on the branching procedure, it is shown that there exists a positive constant $\lambda_c$ so that if the arrival rate is smaller than $\lambda_c$, then the algorithm is stable under the flow of requests, that is, the total size of an associated tree is integrable. at the same time, a gap in the earlier proofs of stability in the literature is fixed. when the arrivals are poisson, an explicit characterization of $\lambda_c$ is given. under the stability condition, the asymptotic behavior of the average size of a tree starting with a large number of individuals is analyzed. the results are obtained with the help of a probabilistic rewriting of the functional equations describing the dynamics of the system. the proofs make extensive use of this stochastic background throughout the paper. in this analysis, two basic limit theorems play a key role: the renewal theorem and the convergence to equilibrium of an auto-regressive process with a moving average.
regularities of the distribution of abstract van der corput sequences
similarly to $\beta$-adic van der corput sequences, abstract van der corput sequences can be defined for abstract numeration systems. under some assumptions, these sequences are low discrepancy sequences. the discrepancy function is computed explicitly, and a characterization of bounded remainder sets of the form $[0,y)$ is provided.
modelling interdependencies between the electricity and information infrastructures
the aim of this paper is to provide qualitative models characterizing interdependency-related failures of two critical infrastructures: the electricity infrastructure and the associated information infrastructure. the interdependencies of these two infrastructures are increasing due to a growing connection of the power grid networks to the global information infrastructure, as a consequence of market deregulation and opening. these interdependencies increase the risk of failures. we focus on cascading, escalating and common-cause failures, which correspond to the main causes of failures due to interdependencies. we address failures in the electricity infrastructure, in combination with accidental failures in the information infrastructure, and then show briefly how malicious attacks in the information infrastructure can be addressed.
m\'a\v{c}ajov\'a and \v{s}koviera conjecture on cubic graphs
a conjecture of m\'a\v{c}ajov\'a and \v{s}koviera asserts that every bridgeless cubic graph has two perfect matchings whose intersection does not contain any odd edge cut. we prove this conjecture for graphs with few vertices and we give a stronger result for traceable graphs.
3d mimo scheme for broadcasting future digital tv in single frequency networks
this letter introduces a 3d space-time-space block code for future digital tv systems. the code is based on a double layer structure for inter-cell and intra-cell transmission mode in single frequency networks. without increasing the complexity of the receiver, the proposed code is very efficient for different transmission scenarios.
a coded bit-loading linear precoded discrete multitone solution for power line communication
the linear precoded discrete multitone modulation (lp-dmt) system has already been proved advantageous with adaptive resource allocation algorithms in a power line communication (plc) context. in this paper, we investigate the bit and energy allocation algorithm of an adaptive lp-dmt system taking into account the channel coding scheme. a coded adaptive lp-dmt system is presented in the plc context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. the performance of a concatenated channel coding scheme, consisting of an inner wei's 4-dimensional 16-state trellis code and an outer reed-solomon code, in combination with the proposed algorithm is analyzed. simulation results are presented for a fixed target bit error rate in a multicarrier scenario under power spectral density constraint. using a multipath model of the plc channel, it is shown that the proposed coded adaptive lp-dmt system performs better than classical coded discrete multitone.
coded adaptive linear precoded discrete multitone over plc channel
discrete multitone modulation (dmt) systems exploit the capabilities of orthogonal subcarriers to cope efficiently with narrowband interference, high frequency attenuations and multipath fadings with the help of simple equalization filters. the adaptive linear precoded discrete multitone (lp-dmt) system is based on classical dmt, combined with a linear precoding component. in this paper, we investigate the bit and energy allocation algorithm of an adaptive lp-dmt system taking into account the channel coding scheme. a coded adaptive lp-dmt system is presented in the power line communication (plc) context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. the performance of a concatenated channel coding scheme, consisting of an inner wei's 4-dimensional 16-state trellis code and an outer reed-solomon code, in combination with the proposed algorithm is analyzed. theoretical coding gains are derived and simulation results are presented for a fixed target bit error rate in a multicarrier scenario under power spectral density constraint. using a multipath model of the plc channel, it is shown that the proposed coded adaptive lp-dmt system performs better than coded dmt and can achieve higher throughput for plc applications.
cognitive radio with partial channel state information at the transmitter
in this paper, we present the cognitive radio system design with partial channel state information known at the transmitter (csit). we replace the dirty paper coding (dpc) used in the cognitive radio with full csit by the linear assignment gel'fand-pinsker coding (la-gpc), which can utilize the limited knowledge of the channel more efficiently. based on the achievable rate derived from the la-gpc, two optimization problems under the fast and slow fading channels are formulated. we derive semi-analytical solutions to find the relaying ratios and precoding coefficients. the critical observation is that the complex rate functions in these problems are closely related to ratios of quadratic forms. simulation results show that the proposed semi-analytical solutions perform close to the optimal solutions found by brute-force search, and outperform the systems based on naive dpc. asymptotic analysis also shows that these solutions converge to the optimal ones solved with full csit when the k-factor of the rician channel approaches infinity. moreover, a new coding scheme is proposed to implement the la-gpc in practice. simulation results show that the proposed practical coding scheme can efficiently reach the theoretical rate performance.
a linear time algorithm for l(2,1)-labeling of trees
an l(2,1)-labeling of a graph $g$ is an assignment $f$ from the vertex set $v(g)$ to the set of nonnegative integers such that $|f(x)-f(y)|\ge 2$ if $x$ and $y$ are adjacent and $|f(x)-f(y)|\ge 1$ if $x$ and $y$ are at distance 2, for all $x$ and $y$ in $v(g)$. a $k$-l(2,1)-labeling is an assignment $f:v(g)\to\{0,..., k\}$, and the l(2,1)-labeling problem asks for the minimum $k$, which we denote by $\lambda(g)$, among all possible assignments. it is known that this problem is np-hard even for graphs of treewidth 2, and trees are one of the very few classes for which the problem is polynomially solvable. the running time of the best known algorithm for trees had been $o(\delta^{4.5} n)$ for more than a decade, where $\delta$ is the maximum degree of $t$ and $n=|v(t)|$; recently, an $o(n^{1.75})$-time algorithm that substantially improves on it has been proposed. in this paper, we finally establish a linear time algorithm for l(2,1)-labeling of trees.
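for concreteness, the distance conditions above are easy to check directly; the small utility below (a sanity-check helper, not the paper's linear-time labeling algorithm) verifies whether a given assignment is an l(2,1)-labeling.

from itertools import combinations

def is_l21_labeling(adj, f):
    # adj: dict vertex -> set of neighbours; f: dict vertex -> nonnegative int
    for x, y in combinations(adj, 2):
        if y in adj[x]:                  # adjacent: labels must differ by >= 2
            if abs(f[x] - f[y]) < 2:
                return False
        elif adj[x] & adj[y]:            # distance 2: labels must differ by >= 1
            if abs(f[x] - f[y]) < 1:
                return False
    return True

# a path on 4 vertices admits a 3-l(2,1)-labeling, e.g. 1, 3, 0, 2
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_l21_labeling(path, {0: 1, 1: 3, 2: 0, 3: 2}))   # -> True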
bi-directional half-duplex protocols with multiple relays
in a bi-directional relay channel, two nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. recent work has considered information theoretic limits of the bi-directional relay channel with a single relay. in this work we consider bi-directional relaying with multiple relays. we derive achievable rate regions and outer bounds for half-duplex protocols with multiple decode and forward relays and compare these to the same protocols with amplify and forward relays in an additive white gaussian noise channel. we consider three novel classes of half-duplex protocols: the (m,2) 2 phase protocol with m relays, the (m,3) 3 phase protocol with m relays, and general (m,t) multiple hops and multiple relays (mhmr) protocols, where m is the total number of relays and 3 < t < m+3 is the number of temporal phases in the protocol. the (m,2) and (m,3) protocols extend previous bi-directional relaying protocols for a single m=1 relay, while the new (m,t) protocol efficiently combines multi-hop routing with message-level network coding. finally, we provide a comprehensive treatment of the mhmr protocols with decode and forward relaying and amplify and forward relaying in gaussian noise, obtaining their respective achievable rate regions, outer bounds and relative performance under different snrs and relay geometries, including an analytical comparison of the protocols at low and high snr.
an analytical model of information dissemination for a gossip-based protocol
we develop an analytical model of information dissemination for a gossiping protocol that combines both pull and push approaches. with this model we analyse how fast an item is replicated through the network, how fast it spreads, and how fast it covers the network. we also determine the optimal size of the exchange buffer to obtain fast replication. our results are confirmed by large-scale simulation experiments.
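a minimal round-based simulation of a combined push-pull gossip exchange, in the spirit of the protocol analysed above (illustrative only and with an assumed round structure; the paper's contribution is the analytical model, and the exchange buffer is not modelled here):

import random

def rounds_to_full_coverage(n, seed=0):
    random.seed(seed)
    informed = {0}                        # a single initial holder of the item
    rounds = 0
    while len(informed) < n:
        rounds += 1
        new = set(informed)
        for node in range(n):
            peer = random.randrange(n)    # contact a uniformly random peer
            if node in informed:          # push: node forwards the item
                new.add(peer)
            if peer in informed:          # pull: node obtains the item
                new.add(node)
        informed = new
    return rounds

print([rounds_to_full_coverage(1000, seed=s) for s in range(5)])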
on finite functions with non-trivial arity gap
given an $n$-ary $k$-valued function $f$, $gap(f)$ denotes the minimal number of essential variables in $f$ which become fictive when identifying any two distinct essential variables in $f$. we particularly solve a problem concerning the explicit determination of $n$-ary $k$-valued functions $f$ with $2\leq gap(f)\leq n\leq k$. our methods yield new combinatorial results about the number of such functions.
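the definition can be made concrete by brute force: compute the essential variables of $f$, identify two of them, and record how many essential variables are lost; the minimum over all such identifications is $gap(f)$. the sketch below (exhaustive and only feasible for tiny $n$ and $k$) follows this description.

from itertools import product, combinations

def essential_vars(f, n, k):
    # indices of variables on which f : [k]^n -> [k] actually depends
    ess = set()
    for i in range(n):
        for point in product(range(k), repeat=n):
            if any(f(point[:i] + (a,) + point[i + 1:]) != f(point) for a in range(k)):
                ess.add(i)
                break
    return ess

def gap(f, n, k):
    ess = essential_vars(f, n, k)
    drops = []
    for i, j in combinations(sorted(ess), 2):
        # identify variable j with variable i
        g = lambda p, i=i, j=j: f(tuple(p[i] if t == j else p[t] for t in range(n)))
        drops.append(len(ess) - len(essential_vars(g, n, k)))
    return min(drops)

# example over k = 2: f(x, y) = x xor y has gap 2, since x xor x is constant
print(gap(lambda p: p[0] ^ p[1], n=2, k=2))   # -> 2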
a mordell inequality for lattices over maximal orders
in this paper we prove an analogue of mordell's inequality for lattices in finite-dimensional complex or quaternionic hermitian space that are modules over a maximal order in an imaginary quadratic number field or a totally definite rational quaternion algebra. this inequality implies that the 16-dimensional barnes-wall lattice has optimal density among all 16-dimensional lattices with hurwitz structures.
faster and better: a machine learning approach to corner detection
the repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application. the repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3d locations [schmid et al 2000]. the efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. three advances are described in this paper. first, we present a new heuristic for feature detection, and using machine learning we derive a feature detector from this which can fully process live pal video using less than 5% of the available processing time. by comparison, most other detectors cannot even operate at frame rate (harris detector 115%, sift 195%). second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3d scenes. we show that despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors. finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality.
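for background, detectors of this family are built around a segment test on a circle of pixels; the sketch below shows a generic segment-test corner heuristic of this kind (the radius-3 circle offsets are standard, but the threshold and arc length are illustrative choices, and this is not the learned detector derived in the paper).

import numpy as np

# the 16 pixel offsets of a radius-3 bresenham circle around the candidate pixel
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, y, x, t=20, arc=9):
    # corner if at least `arc` contiguous circle pixels are all brighter than
    # img[y, x] + t or all darker than img[y, x] - t
    centre = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    ring = ring + ring                    # duplicate to handle wrap-around runs
    for sign in (+1, -1):
        run = 0
        for v in ring:
            run = run + 1 if sign * (v - centre) > t else 0
            if run >= arc:
                return True
    return False

# toy image: the corner of a bright square on a dark background
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 255
print(is_corner(img, 8, 8))   # -> True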
quantum robot: structure, algorithms and applications
this paper has been withdrawn.
a minimum relative entropy principle for learning and acting
this paper proposes a method to construct an adaptive agent that is universal with respect to a given class of experts, where each expert is an agent that has been designed specifically for a particular environment. this adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. if the agent is a passive observer, then the optimal solution is the well-known bayesian predictor. however, if the agent is active, then its past actions need to be treated as causal interventions on the i/o stream rather than normal probability conditions. here it is shown that the solution to this new variational problem is given by a stochastic controller called the bayesian control rule, which implements adaptive behavior as a mixture of experts. furthermore, it is shown that under mild assumptions, the bayesian control rule converges to the control law of the most suitable expert.
optimal codes in deletion and insertion metric
we improve the upper bound of levenshtein for the cardinality of a code of length 4 capable of correcting single deletions over an alphabet of even size. we also illustrate that the new upper bound is sharp. furthermore, we construct an optimal perfect code capable of correcting single deletions for the same parameters.
directed transmission method, a fully asynchronous approach to solve sparse linear systems in parallel
in this paper, we propose a new distributed algorithm, called the directed transmission method (dtm). dtm is a fully asynchronous and continuous-time iterative algorithm to solve spd sparse linear systems. as an architecture-aware algorithm, dtm can run freely on all kinds of heterogeneous parallel computers. we prove that dtm is convergent by making use of the final-value theorem of the laplace transform. numerical experiments show that dtm is stable and efficient.
best-effort group service in dynamic networks
we propose a group membership service for dynamic ad hoc networks. it maintains the existing groups as long as possible and ensures that each group diameter is always smaller than a constant, fixed according to the application using the groups. the proposed protocol is self-stabilizing and works in dynamic distributed systems. moreover, it ensures a kind of continuity in the service offered to the application while the system is converging, unless the topology changes are too drastic. such a best-effort behavior allows applications to rely on the groups before stabilization has been reached, which is very useful in dynamic ad hoc networks.
sums of residues on algebraic surfaces and application to coding theory
in this paper, we study residues of differential 2-forms on a smooth algebraic surface over an arbitrary field and give several statements about sums of residues. afterwards, using these results we construct algebraic-geometric codes which are an extension to surfaces of the well-known differential codes on curves. we also study some properties of these codes and extend to them some known properties for codes on curves.
bicycle cycles and mobility patterns - exploring and characterizing data from a community bicycle program
this paper provides an analysis of human mobility data in an urban area using the number of available bikes in the stations of the community bicycle program bicing in barcelona. the data was obtained by periodic mining of a kml file accessible through the bicing website. although in principle very noisy, after some preprocessing and filtering steps the data allows us to detect temporal patterns in mobility as well as identify residential, university, business and leisure areas of the city. the results lead to a proposal for an improvement of the bicing website, including a prediction of the number of available bikes in a certain station within the next minutes/hours. furthermore, a model for identifying the most probable routes between stations is briefly sketched.
camera distortion self-calibration using the plumb-line constraint and minimal hough entropy
in this paper we present a simple and robust method for self-correction of camera distortion using single images of scenes which contain straight lines. since the most common distortion can be modelled as radial distortion, we illustrate the method using the harris radial distortion model, but the method is applicable to any distortion model. the method is based on transforming the edgels of the distorted image to a 1-d angular hough space, and optimizing the distortion correction parameters which minimize the entropy of the corresponding normalized histogram. properly corrected imagery will have fewer curved lines, and therefore less spread in hough space. since the method does not rely on any image structure beyond the existence of edgels sharing some common orientations and does not use edge fitting, it is applicable to a wide variety of image types. for instance, it can be applied equally well to images of texture with weak but dominant orientations, or images with strong vanishing points. finally, the method is evaluated on both synthetic and real data, revealing that it is particularly robust to noise.
an axiomatic characterization of a two-parameter extended relative entropy
the uniqueness theorem for a two-parameter extended relative entropy is proven. this result extends our previous one, the uniqueness theorem for a one-parameter extended relative entropy, to a two-parameter case. in addition, the properties of a two-parameter extended relative entropy are studied.
relating web pages to enable information-gathering tasks
we argue that relationships between web pages are functions of the user's intent. we identify a class of web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is currently viewing. we define three kinds of intentional relationships that correspond to whether the user is a) seeking sources of information, b) reading pages which provide information, or c) surfing through pages as part of an extended information-gathering process. we show that these three relationships can be productively mined using a combination of textual and link information and provide three scoring mechanisms that correspond to them: {\em seekrel}, {\em factrel} and {\em surfrel}. these scoring mechanisms incorporate both textual and link information. we build a set of capacitated subnetworks - each corresponding to a particular keyword - that mirror the interconnection structure of the world wide web. the scores are obtained by computing flows on these subnetworks. the capacities of the links are derived from the {\em hub} and {\em authority} values of the nodes they connect, following the work of kleinberg (1998) on assigning authority to pages in hyperlinked environments. we evaluated our scoring mechanism by running experiments on four data sets taken from the web. we present user evaluations of the relevance of the top results returned by our scoring mechanisms and compare those to the top results returned by google's similar pages feature, and the {\em companion} algorithm proposed by dean and henzinger (1999).
model checking memoryful linear-time logics over one-counter automata
we study the complexity of the model-checking problems for ltl with registers (also known as freeze ltl) and for first-order logic with data equality tests over one-counter automata. we consider several classes of one-counter automata (mainly deterministic vs. nondeterministic) and several logical fragments (restrictions on the number of registers or variables and on the use of propositional variables for control locations). the logics have the ability to store a counter value and to test it later against the current counter value. we show that model checking over deterministic one-counter automata is pspace-complete with infinite and finite accepting runs. by contrast, we prove that model checking freeze ltl in which the until operator is restricted to the eventually operator over nondeterministic one-counter automata is undecidable even if only one register is used and with no propositional variable. as a corollary of our proof, this also holds for first-order logic with data equality tests restricted to two variables. this contrasts with the facts that several verification problems for one-counter automata are known to be decidable with relatively low complexity, and that finitary satisfiability for the two logics is decidable. our results pave the way for model-checking memoryful (linear-time) logics over other classes of operational models, such as reversal-bounded counter machines.
effective complexity and its relation to logical depth
effective complexity measures the information content of the regularities of an object. it has been introduced by m. gell-mann and s. lloyd to avoid some of the disadvantages of kolmogorov complexity, also known as algorithmic information content. in this paper, we give a precise formal definition of effective complexity and rigorous proofs of its basic properties. in particular, we show that incompressible binary strings are effectively simple, and we prove the existence of strings that have effective complexity close to their lengths. furthermore, we show that effective complexity is related to bennett's logical depth: if the effective complexity of a string $x$ exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small.
interpolation of shifted-lacunary polynomials
given a "black box" function to evaluate an unknown rational polynomial f in q[x] at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. that is, we determine the sparsity t, the shift s (a rational), the exponents 0 <= e1 < e2 < ... < et, and the coefficients c1,...,ct in q\{0} such that f(x) = c1(x-s)^e1+c2(x-s)^e2+...+ct(x-s)^et. the computed sparsity t is absolutely minimal over any shifted power basis. the novelty of our algorithm is that the complexity is polynomial in the (sparse) representation size, and in particular is logarithmic in deg(f). our method combines previous celebrated results on sparse interpolation and computing sparsest shifts, and provides a way to handle polynomials with extremely high degree which are, in some sense, sparse in information.
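the output representation is just the data $(t, s, e_1, \dots, e_t, c_1, \dots, c_t)$, which can be evaluated directly; the snippet below (evaluation only, not the interpolation algorithm) illustrates why the representation stays tiny even when deg(f) is huge.

from fractions import Fraction

def eval_shifted_sparse(coeffs, exps, s, x):
    # evaluate sum_i coeffs[i] * (x - s)**exps[i] exactly over the rationals
    return sum(c * (x - s) ** e for c, e in zip(coeffs, exps))

# f(x) = 3 (x - 1/2)^0 + 2 (x - 1/2)^100: sparsity t = 2, shift s = 1/2,
# degree 100, yet only a handful of machine words are needed to store it
s = Fraction(1, 2)
coeffs = [Fraction(3), Fraction(2)]
exps = [0, 100]
print(eval_shifted_sparse(coeffs, exps, s, Fraction(3, 2)))   # -> 5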
a complexity dichotomy for hypergraph partition functions
we consider the complexity of counting homomorphisms from an $r$-uniform hypergraph $g$ to a symmetric $r$-ary relation $h$. we give a dichotomy theorem for $r>2$, showing for which $h$ this problem is in fp and for which $h$ it is #p-complete. this generalises a theorem of dyer and greenhill (2000) for the case $r=2$, which corresponds to counting graph homomorphisms. our dichotomy theorem extends to the case in which the relation $h$ is weighted, and the goal is to compute the \emph{partition function}, which is the sum of weights of the homomorphisms. this problem is motivated by statistical physics, where it arises as computing the partition function for particle models in which certain combinations of $r$ sites interact symmetrically. in the weighted case, our dichotomy theorem generalises a result of bulatov and grohe (2005) for graphs, where $r=2$. when $r=2$, the polynomial time cases of the dichotomy correspond simply to rank-1 weights. surprisingly, for all $r>2$ the polynomial time cases of the dichotomy have rather more structure. it turns out that the weights must be superimposed on a combinatorial structure defined by solutions of an equation over an abelian group. our result also gives a dichotomy for a closely related constraint satisfaction problem.
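as a reference point for the objects in the theorem, the partition function can be written down (and, for tiny instances, computed by brute force) as a sum, over all maps from the vertices of $g$ into the domain of $h$, of the product of the weights assigned by $h$ to the images of the hyperedges; the exponential-time sketch below makes this explicit for an assumed toy not-all-equal weight function with $r=3$ and $q=2$.

from itertools import product

def partition_function(vertices, hyperedges, H, q):
    # vertices: list; hyperedges: list of r-tuples of vertices;
    # H: dict mapping sorted r-tuples over range(q) to weights (symmetric relation)
    total = 0
    for values in product(range(q), repeat=len(vertices)):
        sigma = dict(zip(vertices, values))
        w = 1
        for e in hyperedges:
            w *= H[tuple(sorted(sigma[v] for v in e))]
        total += w
    return total

# weight 1 iff the three images are not all equal, so the partition function
# counts the "not-all-equal" 2-colourings of a 3-uniform hypergraph
H = {(0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 1): 1, (1, 1, 1): 0}
verts = ['a', 'b', 'c', 'd']
edges = [('a', 'b', 'c'), ('b', 'c', 'd')]
print(partition_function(verts, edges, H, q=2))   # -> 10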
embedding non-ground logic programs into autoepistemic logic for knowledge base combination
in the context of the semantic web, several approaches to combining ontologies, given in terms of theories of classical first-order logic, and rule bases have been proposed. they either cast rules into classical logic or limit the interaction between rules and ontologies. autoepistemic logic (ael) is an attractive formalism which allows us to overcome these limitations, by serving as a uniform host language into which ontologies and nonmonotonic logic programs can be embedded. for the latter, so far only the propositional setting has been considered. in this paper, we present three embeddings of normal and three embeddings of disjunctive non-ground logic programs under the stable model semantics into first-order ael. while the embeddings all correspond with respect to objective ground atoms, differences arise when considering non-atomic formulas and combinations with first-order theories. we compare the embeddings with respect to stable expansions and autoepistemic consequences, considering the embeddings by themselves, as well as combinations with classical theories. our results reveal differences and correspondences of the embeddings and provide useful guidance in the choice of a particular embedding for knowledge combination.
on the dynamics of social balance on general networks (with an application to xor-sat)
we study nondeterministic and probabilistic versions of a discrete dynamical system (due to t. antal, p. l. krapivsky, and s. redner) inspired by heider's social balance theory. we investigate the convergence time of this dynamics on several classes of graphs. our contributions include: 1. we point out the connection between the triad dynamics and a generalization of annihilating walks to hypergraphs. in particular, this connection allows us to completely characterize the recurrent states in graphs where each edge belongs to at most two triangles. 2. we also solve the case of hypergraphs that do not contain edges consisting of one or two vertices. 3. we show that on the so-called "triadic cycle" graph, the convergence time is linear. 4. we obtain a cubic upper bound on the convergence time on 2-regular triadic simplexes g. this bound can be further improved to a quantity that depends on the cheeger constant of g. in particular this provides some rigorous counterparts to previous experimental observations. we also point out an application to the analysis of the random walk algorithm on certain instances of the 3-xor-sat problem.
non-classical role of potential energy in adiabatic quantum annealing
adiabatic quantum annealing is a paradigm of analog quantum computation, where a given computational job is converted to the task of finding the global minimum of some classical potential energy function and the search for the global potential minimum is performed by employing external kinetic quantum fluctuations and their subsequent slow reduction (annealing). in this method, the entire potential energy landscape (pel) may be accessed simultaneously through a delocalized wave-function, in contrast to a classical search, where the searcher has to visit different points in the landscape (i.e., individual classical configurations) sequentially. thus in such searches, the role of the potential energy might be significantly different in the two cases. here we discuss this in the context of searching for a single isolated hole (potential minimum) in a golf-course-type, gradient-free pel. we show that the quantum particle is able to locate the hole faster if the hole is deeper, while the classical particle of course has no scope to exploit the depth of the hole. we also discuss the effect of the underlying quantum phase transition on the adiabatic dynamics.
the complexity of propositional implication
the question whether a set of formulae g implies a formula f is fundamental. the present paper studies the complexity of the above implication problem for propositional formulae that are built from a systematically restricted set of boolean connectives. we give a complete complexity classification for all sets of boolean functions in the sense of post's lattice and show that the implication problem is efficiently solvable only if the connectives are definable using the constants {false,true} and only one of {and,or,xor}. the problem remains conp-complete in all other cases. we also consider the restriction of g to singletons.
modeling social annotation: a bayesian approach
collaborative tagging systems, such as delicious, citeulike, and others, allow users to annotate resources, e.g., web pages or scientific papers, with descriptive labels called tags. the social annotations contributed by thousands of users can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. traditional text inference methods do not make the best use of social annotation, since they do not take into account variations in individual users' perspectives and vocabulary. in previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated resources. unfortunately, that approach had one major shortcoming: the number of topics and interests must be specified a priori. to address this drawback, we extend the model to a fully bayesian framework, which offers a way to automatically estimate these numbers. in particular, the model allows the number of interests and topics to change as suggested by the structure of the data. we evaluate the proposed model in detail on synthetic and real-world data by comparing its performance to latent dirichlet allocation on the topic extraction task. for the latter evaluation, we apply the model to infer topics of web resources from social annotations obtained from delicious in order to discover new resources similar to a specified one. our empirical results demonstrate that the proposed model is a promising method for exploiting social knowledge contained in user-generated annotations.
on the long time behavior of the tcp window size process
the tcp window size process appears in the modeling of the famous transmission control protocol used for data transmission over the internet. this continuous time markov process takes its values in $[0,\infty)$, is ergodic and irreversible. it belongs to the additive increase multiplicative decrease class of processes. the sample paths are piecewise linear deterministic and the whole randomness of the dynamics comes from the jump mechanism. several aspects of this process have already been investigated in the literature. in the present paper, we mainly get quantitative estimates for the convergence to equilibrium, in terms of the $w_1$ wasserstein coupling distance, for the process and also for its embedded chain.
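a minimal simulation of one sample path of this additive increase multiplicative decrease dynamics (an illustrative discretisation with an assumed jump intensity equal to the current window size, not the paper's analysis):

import random

def simulate_aimd(x0=1.0, horizon=100.0, dt=0.001, seed=1):
    random.seed(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < horizon:
        t += dt
        x += dt                       # additive increase: linear growth at rate 1
        if random.random() < x * dt:  # loss event with instantaneous rate x(t)
            x /= 2.0                  # multiplicative decrease: halve the window
        path.append((t, x))
    return path

path = simulate_aimd()
print("mean window over the run:", sum(x for _, x in path) / len(path))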
computing voting power in easy weighted voting games
weighted voting games are ubiquitous mathematical models which are used in economics, political science, neuroscience, threshold logic, reliability theory and distributed systems. they model situations where agents with variable voting weight vote in favour of or against a decision. a coalition of agents is winning if and only if the sum of weights of the coalition exceeds or equals a specified quota. the banzhaf index is a measure of voting power of an agent in a weighted voting game. it depends on the number of coalitions in which the agent makes the difference between the coalition winning and losing. it is well known that computing banzhaf indices in a weighted voting game is np-hard. we give a comprehensive classification of weighted voting games which can be solved in polynomial time. among other results, we provide a polynomial-time ($o(k{(\frac{n}{k})}^k)$) algorithm to compute the banzhaf indices in weighted voting games in which the number of weight values is bounded by $k$.
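following the definition above, the (raw and normalised) banzhaf indices of a small game can be computed by brute force over all coalitions; the sketch below is this exponential-time reference computation, not the paper's polynomial algorithm for games with few distinct weight values.

from itertools import combinations

def banzhaf(weights, quota):
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in combinations(others, r):
                s = sum(weights[j] for j in coalition)
                if s < quota <= s + weights[i]:   # agent i is decisive here
                    swings[i] += 1
    total = sum(swings)
    return [sw / total for sw in swings] if total else swings

# weights (3, 2, 1) with quota 4: the last two voters have equal power
# despite unequal weights
print(banzhaf([3, 2, 1], 4))   # -> [0.6, 0.2, 0.2]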
an efficient algorithm for partial order production
we consider the problem of partial order production: arrange the elements of an unknown totally ordered set t into a target partially ordered set s, by comparing a minimum number of pairs in t. special cases include sorting by comparisons, selection, multiple selection, and heap construction. we give an algorithm performing itlb + o(itlb) + o(n) comparisons in the worst case. here, n denotes the size of the ground set, and itlb denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information theoretic lower bound too much. we then solve the problem by applying an efficient multiple selection algorithm. the overall complexity of our algorithm is polynomial. this answers a question of yao (siam j. comput. 18, 1989). we base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.
ag codes from polyhedral divisors
a description of complete normal varieties with lower dimensional torus action has been given by altmann, hausen, and suess, generalizing the theory of toric varieties. considering the case where the acting torus t has codimension one, we describe t-invariant weil and cartier divisors and provide formulae for calculating global sections, intersection numbers, and euler characteristics. as an application, we use divisors on these so-called t-varieties to define new evaluation codes called t-codes. we find estimates on their minimum distance using intersection theory. this generalizes the theory of toric codes and combines it with ag codes on curves. as the simplest application of our general techniques we look at codes on ruled surfaces coming from decomposable vector bundles. already this construction gives codes that are better than the related product code. further examples show that we can improve these codes by constructing more sophisticated t-varieties. these results suggest looking further for good codes on t-varieties.
geometric properties of satisfying assignments of random $\epsilon$-1-in-k sat
we study the geometric structure of the set of solutions of random $\epsilon$-1-in-k sat problem. for $l\geq 1$, two satisfying assignments $a$ and $b$ are $l$-connected if there exists a sequence of satisfying assignments connecting them by changing at most $l$ bits at a time. we first prove that w.h.p. two assignments of a random $\epsilon$-1-in-$k$ sat instance are $o(\log n)$-connected, conditional on being satisfying assignments. also, there exists $\epsilon_{0}\in (0,\frac{1}{k-2})$ such that w.h.p. no two satisfying assignments at distance at least $\epsilon_{0}\cdot n$ form a "hole" in the set of assignments. we believe that this is true for all $\epsilon >0$, and thus satisfying assignments of a random 1-in-$k$ sat instance form a single cluster.
packing and covering properties of subspace codes for error control in random linear network coding
codes in the projective space and codes in the grassmannian over a finite field - referred to as subspace codes and constant-dimension codes (cdcs), respectively - have been proposed for error control in random linear network coding. for subspace codes and cdcs, a subspace metric was introduced to correct both errors and erasures, and an injection metric was proposed to correct adversarial errors. in this paper, we investigate the packing and covering properties of subspace codes with both metrics. we first determine some fundamental geometric properties of the projective space with both metrics. using these properties, we then derive bounds on the cardinalities of packing and covering subspace codes, and determine the asymptotic rates of optimal packing and optimal covering subspace codes with both metrics. our results not only provide guiding principles for the code design for error control in random linear network coding, but also illustrate the difference between the two metrics from a geometric perspective. in particular, our results show that optimal packing cdcs are optimal packing subspace codes up to a scalar for both metrics if and only if their dimension is half of their length (up to rounding). in this case, cdcs suffer from only limited rate loss as opposed to subspace codes with the same minimum distance. we also show that optimal covering cdcs can be used to construct asymptotically optimal covering subspace codes with the injection metric only.
high resolution dynamical mapping of social interactions with active rfid
in this paper we present an experimental framework to gather data on face-to-face social interactions between individuals, with a high spatial and temporal resolution. we use active radio frequency identification (rfid) devices that assess contacts with one another by exchanging low-power radio packets. when individuals wear the beacons as a badge, a persistent radio contact between the rfid devices can be used as a proxy for a social interaction between individuals. we present the results of a pilot study recently performed during a conference, and a subsequent preliminary data analysis, that provides an assessment of our method and highlights its versatility and applicability in many areas concerned with human dynamics.
entanglement-assisted communication of classical and quantum information
we consider the problem of transmitting classical and quantum information reliably over an entanglement-assisted quantum channel. our main result is a capacity theorem that gives a three-dimensional achievable rate region. points in the region are rate triples, consisting of the classical communication rate, the quantum communication rate, and the entanglement consumption rate of a particular coding scheme. the crucial protocol in achieving the boundary points of the capacity region is a protocol that we name the classically-enhanced father protocol. the classically-enhanced father protocol is more general than other protocols in the family tree of quantum shannon theoretic protocols, in the sense that several previously known quantum protocols are now child protocols of it. the classically-enhanced father protocol also shows an improvement over a time-sharing strategy for the case of a qubit dephasing channel--this result justifies the need for simultaneous coding of classical and quantum information over an entanglement-assisted quantum channel. our capacity theorem is of a multi-letter nature (requiring a limit over many uses of the channel), but it reduces to a single-letter characterization for at least three channels: the completely depolarizing channel, the quantum erasure channel, and the qubit dephasing channel.
hybrid: a definitional two-level approach to reasoning with higher-order abstract syntax
combining higher-order abstract syntax and (co)induction in a logical framework is well known to be problematic. previous work described the implementation of a tool called hybrid, within isabelle hol, which aims to address many of these difficulties. it allows object logics to be represented using higher-order abstract syntax, and reasoned about using tactical theorem proving and principles of (co)induction. in this paper we describe how to use it in a multi-level reasoning fashion, similar in spirit to other meta-logics such as twelf. by explicitly referencing provability in a middle layer called a specification logic, we solve the problem of reasoning by (co)induction in the presence of non-stratifiable hypothetical judgments, which allow very elegant and succinct specifications of object logic inference rules.
mapping images with the coherence length diagrams
statistical pattern recognition methods based on the coherence length diagram (cld) have been proposed for medical image analyses, such as quantitative characterisation of human skin textures, and for polarized light microscopy of liquid crystal textures. further investigations are made on image maps originating from such diagrams, and some examples related to irregularity of microstructures are shown.
omnidirectional relay in wireless networks
for wireless networks with multiple sources, an omnidirectional relay scheme is developed, where each node can simultaneously relay different messages in different directions. this is accomplished by the decode-and-forward relay strategy, with each relay binning the multiple messages to be transmitted, in the same spirit as network coding. specifically, for the all-source all-cast problem, where each node is an independent source to be transmitted to all the other nodes, this scheme completely eliminates interference in the whole network, and the signal transmitted by any node can be used by any other node. for networks with some kind of symmetry, assuming no beamforming is to be performed, this omnidirectional relay scheme is capable of achieving the maximum achievable rate.
dilation, smoothed distance, and minimization diagrams of convex functions
we study voronoi diagrams for distance functions that add together two convex functions, each taking as its argument the difference between cartesian coordinates of two planar points. when the functions do not grow too quickly, then the voronoi diagram has linear complexity and can be constructed in near-linear randomized expected time. additionally, the level sets of the distances from the sites form a family of pseudocircles in the plane, all cells in the voronoi diagram are connected, and the set of bisectors separating any one cell in the diagram from each of the others forms an arrangement of pseudolines in the plane. we apply these results to the smoothed distance or biotope transform metric, a geometric analogue of the jaccard distance whose voronoi diagrams can be used to determine the dilation of a star network with a given hub. for sufficiently closely spaced points in the plane, the voronoi diagram of smoothed distance has linear complexity and can be computed efficiently. we also experiment with a variant of lloyd's algorithm, adapted to smoothed distance, to find uniformly spaced point samples with exponentially decreasing density around a given point.
hierarchy and equivalence of multi-letter quantum finite automata
multi-letter {\it quantum finite automata} (qfas) are a one-way qfa model proposed recently by belovs, rosmanis, and smotrovs (lncs, vol. 4588, springer, berlin, 2007, pp. 60-71), who showed that multi-letter qfas can accept with no error some regular languages (e.g., $(a+b)^{*}b$) that are unacceptable by the previous one-way qfas. in this paper, we continue the study of multi-letter qfas. we mainly focus on two issues: (1) we show that $(k+1)$-letter qfas are computationally more powerful than $k$-letter qfas, that is, $(k+1)$-letter qfas can accept some regular languages that are unacceptable by any $k$-letter qfa; a comparison with the one-way qfas is made by some examples. (2) we prove that a $k_{1}$-letter qfa ${\cal a}_1$ and a $k_{2}$-letter qfa ${\cal a}_2$ are equivalent if and only if they are $((n_{1}+n_{2})^{4}+k-1)$-equivalent, and the time complexity of determining the equivalence of two multi-letter qfas using this method is $o(n^{12}+k^{2}n^{4}+kn^{8})$, where $n_{1}$ and $n_{2}$ are the numbers of states of ${\cal a}_{1}$ and ${\cal a}_{2}$, respectively, and $k=\max(k_{1},k_{2})$. some other issues are addressed for further consideration.
linear-time algorithms for geometric graphs with sublinearly many edge crossings
we provide linear-time algorithms for geometric graphs with sublinearly many crossings. that is, we provide algorithms running in o(n) time on connected geometric graphs having n vertices and k crossings, where k is smaller than n by an iterated logarithmic factor. specific problems we study include voronoi diagrams and single-source shortest paths. our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. instead, our algorithms are based on a planarization method that "zeroes in" on edge crossings, together with methods for extending planar separator decompositions to geometric graphs with sublinearly many crossings. incidentally, our planarization algorithm also solves an open computational geometry problem of chazelle for triangulating a self-intersecting polygonal chain having n segments and k crossings in linear time, for the case when k is sublinear in n by an iterated logarithmic factor.
ecotrade - a multi player network game of a tradable permit market for biodiversity credits
ecotrade is a multi player network game of a virtual biodiversity credit market. each player controls the land use of a certain amount of parcels on a virtual landscape. the biodiversity credits of a particular parcel depend on neighboring parcels, which may be owned by other players. the game can be used to study the strategies of players in experiments or classroom games and also as a communication tool for stakeholders participating in credit markets that include spatially interdependent credits.
adaptive uncertainty resolution in bayesian combinatorial optimization problems
in several applications such as databases, planning, and sensor networks, parameters such as selectivity, load, or sensed values are known only with some associated uncertainty. the performance of such a system (as captured by some objective function over the parameters) is significantly improved if some of these parameters can be probed or observed. in a resource-constrained situation, deciding which parameters to observe in order to optimize system performance itself becomes an interesting and important optimization problem. this general problem is the focus of this paper. one of the most important considerations in this framework is whether adaptivity is required for the observations. adaptive observations introduce blocking or sequential operations in the system, whereas non-adaptive observations can be performed in parallel. one of the important questions in this regard is to characterize the benefit of adaptivity of probes and observations. we present general techniques for designing constant-factor approximations to the optimal observation schemes for several widely used scheduling and metric objective functions. we show a unifying technique that relates this optimization problem to the outlier version of the corresponding deterministic optimization. by making this connection, our technique shows constant-factor upper bounds for the benefit of adaptivity of the observation schemes. we show that while probing yields significant improvement in the objective function, being adaptive about the probing is not beneficial beyond constant factors.
decidability of the equivalence of multi-letter quantum finite automata
multi-letter {\it quantum finite automata} (qfas) are a quantum variant of classical {\it one-way multi-head finite automata} (j. hromkovi\v{c}, acta informatica 19 (1983) 377-384), and it has been shown that these new one-way qfas (multi-letter qfas) can accept with no error some regular languages (e.g., $(a+b)^{*}b$) that are unacceptable by the previous one-way qfas. in this paper, we study the decidability of the equivalence of multi-letter qfas, and the main technical contributions are as follows: (1) we show that any two automata, a $k_{1}$-letter qfa ${\cal a}_1$ and a $k_{2}$-letter qfa ${\cal a}_2$, over the same input alphabet $\sigma$ are equivalent if and only if they are $(n^2m^{k-1}-m^{k-1}+k)$-equivalent, where $m=|\sigma|$ is the cardinality of $\sigma$, $k=\max(k_{1},k_{2})$, and $n=n_{1}+n_{2}$, with $n_{1}$ and $n_{2}$ being the numbers of states of ${\cal a}_{1}$ and ${\cal a}_{2}$, respectively. when $k=1$, we obtain the decidability of equivalence of measure-once qfas known from the literature. it is worth mentioning that our technical method is essentially different from that used for the case of a single-letter input alphabet (i.e., $m=1$). (2) however, if we determine the equivalence of multi-letter qfas by checking all strings of length at most $n^2m^{k-1}-m^{k-1}+k$, the worst-case time complexity is exponential, i.e., $o(n^6m^{n^2m^{k-1}-m^{k-1}+2k-1})$. therefore, we design a polynomial-time $o(m^{2k-1}n^{8}+km^kn^{6})$ algorithm for determining the equivalence of any two multi-letter qfas. here, the time complexity is measured in the number of states of the multi-letter qfas, with $k$ regarded as a constant.
cooperative hybrid arq protocols: unified frameworks for protocol analysis
cooperative hybrid-arq (harq) protocols, which can exploit spatial and temporal diversity, have been widely studied. the efficiency of cooperative harq protocols is higher than that of plain cooperative protocols, because retransmissions are performed only when necessary. we classify cooperative harq protocols into three decode-and-forward based harq (df-harq) protocols and two amplify-and-forward based (af-harq) protocols. to compare these protocols and obtain their optimum parameters, two unified frameworks are developed for protocol analysis. using the frameworks, we can evaluate and compare the maximum throughput and outage probabilities of the protocols as functions of the snr, the relay location, and the delay constraint.
on the decoder error probability of rank metric codes and constant-dimension codes
rank metric codes and constant-dimension codes (cdcs) have been considered for error control in random network coding. since decoder errors are more detrimental to system performance than decoder failures, in this paper we investigate the decoder error probability (dep) of bounded distance decoders (bdds) for rank metric codes and cdcs. for rank metric codes, we consider a channel motivated by network coding, where errors with the same row space are equiprobable. over such channels, we establish upper bounds on the deps of bdds, determine the exact dep of bdds for maximum rank distance (mrd) codes, and show that mrd codes have the greatest deps up to a scalar. to evaluate the deps of bdds for cdcs, we first establish some fundamental geometric properties of the projective space. using these geometric properties, we then consider bdds in both subspace and injection metrics and derive analytical expressions of their deps for cdcs, over a symmetric operator channel, as functions of their distance distributions. finally, we focus on cdcs obtained by lifting rank metric codes and establish two important results: first, we derive asymptotically tight upper bounds on the deps of bdds in both metrics; second, we show that the deps for kk codes are the greatest up to a scalar among all cdcs obtained by lifting rank metric codes.
a note on regular ramsey graphs
we prove that there is an absolute constant $c>0$ such that for every natural number $n$ there exists a triangle-free \emph{regular} graph on $n$ vertices with no independent set of size at least $c\sqrt{n\log n}$.
the violation heap: a relaxed fibonacci-like heap
we give a priority queue that achieves the same amortized bounds as fibonacci heaps. namely, find-min requires o(1) worst-case time, insert, meld and decrease-key require o(1) amortized time, and delete-min requires $o(\log n)$ amortized time. our structure is simple and promises an efficient practical behavior when compared to other known fibonacci-like heaps. the main idea behind our construction is to propagate rank updates instead of performing cascaded cuts following a decrease-key operation, allowing for a relaxed structure.
linearly parameterized bandits
we consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an $r$-dimensional random vector $\mathbf{z} \in \mathbb{r}^r$, where $r \geq 2$. the objective is to minimize the cumulative regret and bayes risk. when the set of arms corresponds to the unit sphere, we prove that the regret and bayes risk are of order $\theta(r \sqrt{t})$, by establishing a lower bound for an arbitrary policy, and showing that a matching upper bound is obtained through a policy that alternates between exploration and exploitation phases. the phase-based policy is also shown to be effective if the set of arms satisfies a strong convexity condition. for the case of a general set of arms, we describe a near-optimal policy whose regret and bayes risk admit upper bounds of the form $o(r \sqrt{t} \log^{3/2} t)$.
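a toy numerical sketch of alternating exploration and exploitation phases for a linearly parameterized bandit on the unit sphere; the coordinate-direction exploration and the phase-length schedule here are illustrative assumptions, not the policy whose regret is bounded above.

```python
import numpy as np

def phase_based_bandit(z_true, T=2000, noise=0.1, seed=0):
    """Toy explore/exploit policy for a linear bandit whose arms are unit vectors.

    Exploration plays the r coordinate directions and refits a least-squares
    estimate of z; exploitation plays the greedy arm z_hat / ||z_hat||.
    (Phase lengths are an arbitrary growing schedule, chosen for illustration.)"""
    rng = np.random.default_rng(seed)
    r = len(z_true)
    best_reward = np.linalg.norm(z_true)          # the best unit-norm arm earns ||z||
    X, y = [], []                                  # exploration history
    regret, t, phase = 0.0, 0, 1
    while t < T:
        for i in range(r):                         # exploration phase
            if t >= T:
                break
            arm = np.eye(r)[i]
            X.append(arm); y.append(arm @ z_true + noise * rng.normal())
            regret += best_reward - arm @ z_true
            t += 1
        z_hat, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        greedy = z_hat / max(np.linalg.norm(z_hat), 1e-12)
        for _ in range(min(T - t, phase * r)):     # exploitation phase (growing length)
            regret += best_reward - greedy @ z_true
            t += 1
        phase += 1
    return regret

if __name__ == "__main__":
    z = np.array([0.6, -0.8, 0.0])
    print("cumulative regret:", phase_based_bandit(z))
```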
indoor channel measurements and communications system design at 60 ghz
this paper presents a brief overview of several studies on indoor wireless communications at 60 ghz performed by the ietr. the characterization and modeling of the radio propagation channel are based on several measurement campaigns carried out with the channel sounder developed at the ietr. some typical residential environments were also simulated by ray tracing and gaussian beam tracking, and the results show good agreement with the corresponding experimental results. the ietr is currently developing a high data rate wireless communication system operating at 60 ghz; the single-carrier architecture of this system is also presented.
a pseudopolynomial algorithm for alexandrov's theorem
alexandrov's theorem states that every metric with the global topology and local geometry required of a convex polyhedron is in fact the intrinsic metric of a unique convex polyhedron. recent work by bobenko and izmestiev describes a differential equation whose solution leads to the polyhedron corresponding to a given metric. we describe an algorithm based on this differential equation to compute the polyhedron to arbitrary precision given the metric, and prove a pseudopolynomial bound on its running time. along the way, we develop pseudopolynomial algorithms for computing shortest paths and weighted delaunay triangulations on a polyhedral surface, even when the surface edges are not shortest paths.
a novel clustering algorithm based upon games on evolving network
this paper introduces a model based on games played on an evolving network, and develops three clustering algorithms from it. in the clustering algorithms, the data points to be clustered are regarded as players who can make decisions in games. on the network describing relationships among data points, an edge-removing-and-rewiring (err) function is employed to explore the neighborhood of a data point: it removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. as such, the connections among data points vary over time. during the evolution of the network, strategies spread through the network. as a consequence, clusters form automatically: data points sharing the same evolutionarily stable strategy are collected into one cluster, so the number of evolutionarily stable strategies indicates the number of clusters. moreover, the experimental results demonstrate that data points in the datasets are clustered reasonably and efficiently, and a comparison with other algorithms also indicates the effectiveness of the proposed algorithms.
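a schematic sketch of the err step described above, using an assumed payoff equal to minus the squared distance between points; the actual game payoffs and strategy dynamics of the proposed algorithms are not modeled here.

```python
import numpy as np

def err_step(points, neighbors):
    """One edge-removing-and-rewiring (ERR) pass: each point drops its
    lowest-payoff neighbor and rewires to the best non-neighbor, where the
    payoff of j for i is taken to be minus the squared distance (an assumed
    stand-in for the game payoffs used in the paper)."""
    n = len(points)
    for i in range(n):
        payoff = lambda j: -float(np.sum((points[i] - points[j]) ** 2))
        worst = min(neighbors[i], key=payoff)
        candidates = [j for j in range(n) if j != i and j not in neighbors[i]]
        if not candidates:
            continue
        best = max(candidates, key=payoff)
        if payoff(best) > payoff(worst):
            neighbors[i].remove(worst)
            neighbors[i].add(best)
    return neighbors

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # two gaussian blobs; neighbor sets start out random and tighten under ERR
    pts = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
    nbrs = {i: set(rng.choice(len(pts), size=5, replace=False).tolist()) - {i}
            for i in range(len(pts))}
    mean_sq = lambda nb: np.mean([np.sum((pts[i] - pts[j]) ** 2)
                                  for i in nb for j in nb[i]])
    before = mean_sq(nbrs)
    for _ in range(20):
        nbrs = err_step(pts, nbrs)
    print("mean squared neighbor distance:", round(before, 2), "->", round(mean_sq(nbrs), 2))
```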
maximum entropy on compact groups
on a compact group the haar probability measure plays the role of uniform distribution. the entropy and rate distortion theory for this uniform distribution is studied. new results and simplified proofs on convergence of convolutions on compact groups are presented and they can be formulated as entropy increases to its maximum. information theoretic techniques and markov chains play a crucial role. the convergence results are also formulated via rate distortion functions. the rate of convergence is shown to be exponential.
towards the characterization of individual users through web analytics
we perform an analysis of the way individual users navigate the web. we focus primarily on the temporal patterns of their returns to a given page. the return probability as a function of time, as well as the distribution of time intervals between consecutive visits, are measured and found to be independent of the level of activity of single users. the results indicate a rich variety of individual behaviors and seem to preclude the possibility of defining a characteristic frequency for each user in his/her visits to a single site.
on the optimal convergence probability of univariate estimation of distribution algorithms
in this paper, we obtain bounds on the probability of convergence to the optimal solution for the compact genetic algorithm (cga) and population-based incremental learning (pbil). we also give a sufficient condition for convergence of these algorithms to the optimal solution and compute a range of possible values of the parameters of these algorithms for which they converge to the optimal solution with a given confidence level.
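for concreteness, a short sketch of the cga update run on the onemax function; the virtual population size and string length are illustrative parameter choices. with too small a population size the probability vector can fixate on a suboptimal string, which is the kind of behavior the convergence bounds above quantify.

```python
import numpy as np

def compact_ga(n=20, pop_size=50, max_iters=20000, seed=0):
    """Compact genetic algorithm (cGA) on OneMax: a probability vector p is
    nudged toward the better of two sampled individuals by 1/pop_size."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)
    for _ in range(max_iters):
        a = rng.random(n) < p
        b = rng.random(n) < p
        winner, loser = (a, b) if a.sum() >= b.sum() else (b, a)
        p += (winner.astype(float) - loser.astype(float)) / pop_size
        p = np.clip(p, 0.0, 1.0)
        if np.all((p == 0.0) | (p == 1.0)):    # every bit has fixated
            break
    return p

if __name__ == "__main__":
    p = compact_ga()
    print("converged to the all-ones optimum:", bool(np.all(p == 1.0)))
```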
multicasting correlated multi-source to multi-sink over a network
the problem of network coding with multicast of a single source to multiple sinks was first studied by ahlswede, cai, li and yeung in 2000, who established the celebrated max-flow min-cut theorem on non-physical information flow over a network of independent channels. on the other hand, in 1980, han studied the case with correlated multiple sources and a single sink from the viewpoint of polymatroidal functions, in which a necessary and sufficient condition was demonstrated for reliable transmission over the network. this paper presents an attempt to unify both cases, leading to a necessary and sufficient condition for reliable transmission over a network multicasting correlated multisource to multisink. the problem of separation of source coding and channel coding is also discussed.
distributed power allocation in multi-user multi-channel relay networks
this paper has been withdrawn by the authors as they feel it inappropriate to publish this paper for the time being.
simple channel coding bounds
new channel coding converse and achievability bounds are derived for a single use of an arbitrary channel. both bounds are expressed using a quantity called the "smooth 0-divergence", which is a generalization of renyi's divergence of order 0. the bounds are also studied in the limit of large block-lengths. in particular, they combine to give a general capacity formula which is equivalent to the one derived by verdu and han.
weighted well-covered graphs without cycles of length 4, 5, 6 and 7
a graph is well-covered if every maximal independent set has the same cardinality. the recognition problem of well-covered graphs is known to be co-np-complete. let w be a weight function defined on the vertices of g. then g is w-well-covered if all maximal independent sets of g are of the same weight. the set of weight functions w for which a graph is w-well-covered is a vector space. we prove that finding the vector space of weight functions under which an input graph is w-well-covered can be done in polynomial time, if the input graph does not contain cycles of length 4, 5, 6 and 7.
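as a small illustration of the linear-algebra formulation above (the valid weight functions are exactly the nullspace of the constraints equating the weights of maximal independent sets), the following brute-force sketch computes that space for a tiny graph; it is exponential in the number of vertices and is not the polynomial-time algorithm of the paper.

```python
import itertools
import numpy as np

def maximal_independent_sets(n, edges):
    """Enumerate all maximal independent sets of a small graph by brute force."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    indep = [set(s) for r in range(n + 1) for s in itertools.combinations(range(n), r)
             if all(v not in adj[u] for u, v in itertools.combinations(s, 2))]
    return [s for s in indep if not any(s < t for t in indep)]

def well_covered_weight_space(n, edges):
    """Basis of the space of weight functions w such that w(S) is the same
    for every maximal independent set S."""
    mis = maximal_independent_sets(n, edges)
    rows = []
    base = mis[0]
    for s in mis[1:]:
        row = np.zeros(n)
        for v in base: row[v] += 1
        for v in s:    row[v] -= 1
        rows.append(row)
    if not rows:
        return np.eye(n)                       # a single maximal independent set: every w works
    _, sing, vt = np.linalg.svd(np.array(rows))
    rank = int(np.sum(sing > 1e-9))
    return vt[rank:]                           # rows spanning the nullspace

if __name__ == "__main__":
    # path on 4 vertices 0-1-2-3: the space is {w : w0 = w1, w2 = w3}, dimension 2
    basis = well_covered_weight_space(4, [(0, 1), (1, 2), (2, 3)])
    print("dimension of the weight space:", len(basis))
```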
when do nonlinear filters achieve maximal accuracy?
the nonlinear filter for an ergodic signal observed in white noise is said to achieve maximal accuracy if the stationary filtering error vanishes as the signal to noise ratio diverges. we give a general characterization of the maximal accuracy property in terms of various systems theoretic notions. when the signal state space is a finite set explicit necessary and sufficient conditions are obtained, while the linear gaussian case reduces to a classic result of kwakernaak and sivan (1972).
novel architectures and algorithms for delay reduction in back-pressure scheduling and routing
the back-pressure algorithm is a well-known throughput-optimal algorithm. however, its delay performance may be quite poor even when the traffic load is not close to network capacity due to the following two reasons. first, each node has to maintain a separate queue for each commodity in the network, and only one queue is served at a time. second, the back-pressure routing algorithm may route some packets along very long routes. in this paper, we present solutions to address both of the above issues, and hence, improve the delay performance of the back-pressure algorithm. one of the suggested solutions also decreases the complexity of the queueing data structures to be maintained at each node.
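a minimal sketch of the baseline max-weight back-pressure decision that the discussion above refers to (single-link activation, one queue per commodity at every node); the delay-reducing architectures and routing modifications proposed in the paper are not shown here.

```python
def backpressure_schedule(queues, links):
    """One max-weight back-pressure decision: for each directed link (u, v),
    the weight is the differential backlog per commodity; activate the link
    and commodity with the largest positive weight.  Real networks would pick
    a whole non-interfering set of links, not a single one."""
    best = None
    for (u, v) in links:
        for c in queues[u]:
            w = queues[u][c] - queues[v].get(c, 0)
            if w > 0 and (best is None or w > best[0]):
                best = (w, u, v, c)
    return best  # (weight, from, to, commodity) or None

if __name__ == "__main__":
    # per-node, per-commodity backlogs (illustrative numbers)
    queues = {"a": {"x": 9, "y": 2}, "b": {"x": 3, "y": 5}, "c": {"x": 0, "y": 0}}
    links = [("a", "b"), ("b", "c"), ("a", "c")]
    print(backpressure_schedule(queues, links))   # serves commodity x over link a -> c
```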
multishot codes for network coding: bounds and a multilevel construction
the subspace channel was introduced by koetter and kschischang as an adequate model for the communication channel from the source node to a sink node of a multicast network that performs random linear network coding. so far, attention has been given to one-shot subspace codes, that is, codes that use the subspace channel only once. in contrast, this paper explores the idea of using the subspace channel more than once and investigates the so-called multishot subspace codes. we present definitions for the problem, a motivating example, lower and upper bounds for the size of codes, and a multilevel construction of codes based on block-coded modulation.
pilot contamination and precoding in multi-cell tdd systems
this paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. channel state information (csi) at the base stations is essential for such precoding. a popular technique for obtaining this csi in time division duplex (tdd) systems is uplink training by utilizing the reciprocity of the wireless medium. this paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. when non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. this paper analyzes this fundamental problem of pilot contamination in multi-cell systems. furthermore, it develops a new multi-cell mmse-based precoding method that mitigates this problem. in addition to being a linear precoding method, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. numerical results show significant performance gains compared to certain popular single-cell precoding methods.
interference avoidance game in the gaussian interference channel: sub-optimal and optimal schemes
this paper considers a distributed interference avoidance problem employing frequency assignment in the gaussian interference channel (ic). we divide the common channel into several subchannels, and each user chooses the subchannel with the least interference from other users as its transmit channel. this mechanism, named interference avoidance in this paper, can be modeled as a competitive game, and a completely autonomous distributed iterative algorithm, called the distributed interference avoidance algorithm (dia), is adopted to achieve the nash equilibrium (ne) of the game. because each user optimizes only its own performance, dia is a sub-optimal algorithm. therefore, by introducing an optimal compensation into the competitive game model, we develop a compensation-based game model to approximate the optimal interference avoidance problem. moreover, an optimal algorithm, called the iterative optimal interference avoidance algorithm (ioia), is proposed to reach the optimality of the interference avoidance scheme. we analyze the implementation complexities of the two algorithms and prove their convergence. performance upper and lower bounds are also derived for the proposed algorithms. the simulation results show that ioia indeed reaches optimality under the interference avoidance mechanism.
concept-oriented model and query language
we describe a new approach to data modeling, called the concept-oriented model (com), and a novel concept-oriented query language (coql). the model is based on three principles: the duality principle postulates that any element is a couple consisting of one identity and one entity; the inclusion principle postulates that any element has a super-element; and the order principle assumes that any element has a number of greater elements within a partially ordered set. the concept-oriented query language is based on a new data modeling construct, called a concept, an inclusion relation between concepts, and a concept partial ordering in which greater concepts are represented by their field types. it is demonstrated how com and coql can be used to solve three general data modeling tasks: logical navigation, multidimensional analysis and inference. logical navigation is based on the two operations of projection and de-projection. multidimensional analysis uses the product operation to produce a cube from level concepts along the chosen dimension paths. inference is defined as a two-step procedure where input constraints are first propagated downwards using de-projection and then the constrained result is propagated upwards using projection.
distributed large scale network utility maximization
recent work by zymnis et al. proposes an efficient primal-dual interior-point method, using a truncated newton method, for solving the network utility maximization (num) problem. this method has shown superior performance relative to the traditional dual-decomposition approach. other recent work by bickson et al. shows how to compute efficiently and distributively the newton step, which is the main computational bottleneck of the newton method, utilizing the gaussian belief propagation algorithm. in the current work, we combine both approaches to create an efficient distributed algorithm for solving the num problem. unlike the work of zymnis, which uses a centralized approach, our new algorithm is easily distributed. using an empirical evaluation we show that our new method outperforms previous approaches, including the truncated newton method and dual-decomposition methods. as an additional contribution, this is the first work that evaluates the performance of the gaussian belief propagation algorithm vs. the preconditioned conjugate gradient method, for a large scale problem.
a hybrid multicast-unicast infrastructure for efficient publish-subscribe in enterprise networks
one of the main challenges in building a large scale publish-subscribe infrastructure in an enterprise network is to provide the subscribers with the required information while minimizing the consumed host and network resources. typically, previous approaches utilize either ip multicast or point-to-point unicast for efficient dissemination of the information. in this work, we propose a novel hybrid framework, which is a combination of both multicast and unicast data dissemination. our hybrid framework allows us to take the advantages of both multicast and unicast, while avoiding their drawbacks. we investigate several algorithms for computing the best mapping of publishers' transmissions into multicast and unicast transport. using extensive simulations, we show that our hybrid framework reduces consumed host and network resources, outperforming traditional solutions. to ensure that the subscribers' interests closely resemble those of real-world settings, our simulations are based on stock market data and on recorded ibm websphere subscriptions.
peer-to-peer secure multi-party numerical computation facing malicious adversaries
we propose an efficient framework for enabling secure multi-party numerical computations in a peer-to-peer network. this problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring and other tasks, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large-scale peer-to-peer networks. in this work, we try to bridge the gap between theoretical algorithms in the security domain and a practical peer-to-peer deployment. we consider two security models. the first is the semi-honest model, where peers correctly follow the protocol but try to reveal private information. we provide three possible schemes for secure multi-party numerical computation for this model and identify a single light-weight scheme which outperforms the others. using extensive simulation results over real internet topologies, we demonstrate that our scheme is scalable to very large networks, with up to millions of nodes. the second model we consider is the malicious peers model, where peers can behave arbitrarily, deliberately trying to affect the results of the computation as well as compromising the privacy of other peers. for this model we provide a fourth scheme to defend the execution of the computation against the malicious peers. the proposed scheme has a higher complexity relative to the semi-honest model. overall, we provide the peer-to-peer network designer a set of tools to choose from, based on the desired level of security.
language recognition by generalized quantum finite automata with unbounded error (abstract & poster)
in this note, we generalize the results of arxiv:0901.2703v1: we show that all one-way quantum finite automaton (qfa) models that are at least as general as kondacs-watrous qfas are equivalent in power to classical probabilistic finite automata in the unbounded-error setting. unlike their probabilistic counterparts, allowing the tape head to stay put for some steps during its traversal of the input does enlarge the class of languages recognized by such qfas with unbounded error. (the proof of theorem 1 in the abstract was presented in the previous version, arxiv:0901.2703v1.)
an extension of the order bound for ag codes
the most successful method to obtain lower bounds for the minimum distance of an algebraic geometric code is the order bound, which generalizes the feng-rao bound. we provide a significant extension of the bound that improves the order bounds by beelen and by duursma and park. we include an exhaustive numerical comparison of the different bounds for 10168 two-point codes on the suzuki curve of genus g=124 over the field of 32 elements. keywords: algebraic geometric code, order bound, suzuki curve.
entropy measures vs. algorithmic information
algorithmic entropy and shannon entropy are two conceptually different information measures, as the former is based on the size of programs and the latter on probability distributions. however, it is known that, for any recursive probability distribution, the expected value of algorithmic entropy equals its shannon entropy, up to a constant that depends only on the distribution. we study whether a similar relationship holds for r\'{e}nyi and tsallis entropies of order $\alpha$, showing that it only holds for r\'{e}nyi and tsallis entropies of order 1 (i.e., for shannon entropy). regarding a time-bounded analogue of this relationship, we show that, for distributions such that the cumulative probability distribution is computable in time $t(n)$, the expected value of time-bounded algorithmic entropy (where the allotted time is $nt(n)\log(nt(n))$) is in the same range as the unbounded version. so, for these distributions, shannon entropy captures the notion of computationally accessible information. we prove that, for the universal time-bounded distribution $\m^t(x)$, the tsallis and r\'{e}nyi entropies converge if and only if $\alpha$ is greater than 1.
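for reference, the order-$\alpha$ entropies compared above are given by the standard definitions below, for a probability distribution $p$ over a finite or countable alphabet; both recover the shannon entropy as $\alpha \to 1$.

```latex
% Renyi and Tsallis entropies of order alpha (alpha > 0, alpha != 1)
\[
  H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log \sum_x p(x)^{\alpha},
  \qquad
  S_\alpha(p) \;=\; \frac{1}{\alpha-1}\Bigl(1 - \sum_x p(x)^{\alpha}\Bigr).
\]
```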
on linear balancing sets
let n be an even positive integer and f be the field \gf(2). a word in f^n is called balanced if its hamming weight is n/2. a subset c \subseteq f^n is called a balancing set if for every word y \in f^n there is a word x \in c such that y + x is balanced. it is shown that most linear subspaces of f^n of dimension slightly larger than (3/2)\log_2(n) are balancing sets. a generalization of this result to linear subspaces that are "almost balancing" is also presented. on the other hand, it is shown that the problem of deciding whether a given set of vectors in f^n spans a balancing set is np-hard. an application of linear balancing sets is presented for designing efficient error-correcting coding schemes in which the codewords are balanced.
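as a small illustration of the definition (not of the probabilistic argument or the hardness proof above), the following brute-force check verifies whether the span of a few basis vectors is a balancing set of f^n for tiny n; the example basis is a hand-picked assumption.

```python
import itertools

def is_balancing_set(basis, n):
    """Brute-force check that the span of `basis` (integers encoding length-n
    words over GF(2)) is a balancing set: every y in F_2^n has some codeword x
    with Hamming weight of y + x equal to n/2.  Exponential; illustration only."""
    assert n % 2 == 0
    span = {0}
    for b in basis:                      # close the set under xor with each basis vector
        span |= {v ^ b for v in span}
    for y in range(2 ** n):
        if not any(bin(y ^ x).count("1") == n // 2 for x in span):
            return False
    return True

if __name__ == "__main__":
    n = 4
    # the dimension-3 subspace of words whose last bit is 0 balances F_2^4
    basis = [0b1000, 0b0100, 0b0010]
    print(is_balancing_set(basis, n))    # True
```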
a low density lattice decoder via non-parametric belief propagation
the recent work of sommer, feder and shalvi presented a new family of codes called low density lattice codes (ldlc) that can be decoded efficiently and approach the capacity of the awgn channel. a linear time iterative decoding scheme based on a message-passing formulation on a factor graph was given. in the current work we report our theoretical findings regarding the relation between the ldlc decoder and belief propagation. we show that the ldlc decoder is an instance of non-parametric belief propagation and further connect it to the gaussian belief propagation algorithm. our new results enable borrowing knowledge from the non-parametric and gaussian belief propagation domains into the ldlc domain. specifically, we give more general conditions for convergence of the ldlc decoder (under the same assumptions as the original ldlc convergence analysis). we discuss how to extend the ldlc decoder from latin square to full rank, non-square matrices. we propose an efficient construction of a sparse generator matrix and its matching decoder. we report preliminary experimental results which show that our decoder has a symbol error rate comparable to the original ldlc decoder.
computing rooted and unrooted maximum consistent supertrees
a chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. we give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time o(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. the algorithms extend to weighted triplets (quartets). we further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. finally, for a set t of m rooted or unrooted trees with maximum degree d and distinctly leaf-labeled by some subset of a set l of n labels, we compute, in o(2^{md} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset x of l such that all trees in t, when restricted to x, are consistent with it.
a boundary approximation algorithm for distributed sensor networks
we present an algorithm for boundary approximation in locally-linked sensor networks that communicate with a remote monitoring station. delaunay triangulations and voronoi diagrams are used to generate a sensor communication network and define boundary segments between sensors, respectively. the proposed algorithm reduces remote station communication by approximating boundaries via a decentralized computation executed within the sensor network. moreover, the algorithm identifies boundaries based on differences between neighboring sensor readings, and not absolute sensor values. an analysis of the bandwidth consumption of the algorithm is presented and compared to two naive approaches. the proposed algorithm reduces the amount of remote communication (compared to the naive approaches) and becomes increasingly useful in networks with more nodes.
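a minimal sketch of the neighbor-difference detection step described above, using a centralized delaunay triangulation (via scipy) in place of the distributed in-network computation; the threshold and the step-shaped sensor field are illustrative assumptions, and the voronoi boundary segments and bandwidth analysis of the paper are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def boundary_edges(positions, readings, threshold):
    """Build a Delaunay triangulation of the sensor positions and report the
    edges whose endpoint readings differ by more than `threshold` (only the
    local detection step; aggregation toward the remote station is omitted)."""
    tri = Delaunay(positions)
    edges = set()
    for simplex in tri.simplices:              # each simplex is a triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add(tuple(sorted((simplex[a], simplex[b]))))
    return [(i, j) for (i, j) in edges
            if abs(readings[i] - readings[j]) > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.random((200, 2))
    # a step field: readings jump across the vertical line x = 0.5
    vals = (pos[:, 0] > 0.5).astype(float) + 0.05 * rng.normal(size=200)
    crossing = boundary_edges(pos, vals, threshold=0.5)
    print(len(crossing), "edges straddle the boundary")
```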
symmetric tensor decomposition
we present an algorithm for decomposing a symmetric tensor, of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of sylvester devised in 1886 for binary forms. we recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (waring's problem), incidence properties on secant varieties of the veronese variety and the representation of linear forms as a linear combination of evaluations at distinct points. then we reformulate sylvester's approach from the dual point of view. exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of hankel (and quasi-hankel) matrices, derived from multivariate polynomials and normal form computations. this leads to the resolution of polynomial equations of small degree in non-generic cases. we propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with these hankel matrices. the impact of this contribution is two-fold. first it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. alternate least squares or gradient descents). second, it gives tools for understanding uniqueness conditions, and for detecting the rank.
remembering what we like: toward an agent-based model of web traffic
analysis of aggregate web traffic has shown that pagerank is a poor model of how people actually navigate the web. using the empirical traffic patterns generated by a thousand users over the course of two months, we characterize the properties of web traffic that cannot be reproduced by markovian models, in which destinations are independent of past decisions. in particular, we show that the diversity of sites visited by individual users is smaller and more broadly distributed than predicted by the pagerank model; that link traffic is more broadly distributed than predicted; and that the time between consecutive visits to the same site by a user is less broadly distributed than predicted. to account for these discrepancies, we introduce a more realistic navigation model in which agents maintain individual lists of bookmarks that are used as teleportation targets. the model can also account for branching, a traffic property caused by browser features such as tabs and the back button. the model reproduces aggregate traffic patterns such as site popularity, while also generating more accurate predictions of diversity, link traffic, and return time distributions. this model for the first time allows us to capture the extreme heterogeneity of aggregate traffic measurements while explaining the more narrowly focused browsing patterns of individual users.
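a toy sketch of the bookmark-based navigation rule described above; the jump probability, the rich-get-richer bookmark choice, and the synthetic link graph are illustrative assumptions, not the fitted model or the empirical data of the paper.

```python
import random

def browse(links, n_steps=10000, p_bookmark=0.15, seed=0):
    """Toy agent navigation with a personal bookmark list: with probability
    p_bookmark jump back to a previously visited page chosen in proportion to
    how often it was visited, otherwise follow a random out-link."""
    rng = random.Random(seed)
    page = rng.choice(list(links))
    visits = {page: 1}
    for _ in range(n_steps):
        if rng.random() < p_bookmark:
            pages = list(visits)
            weights = [visits[q] for q in pages]
            page = rng.choices(pages, weights=weights)[0]   # rich-get-richer bookmarks
        else:
            page = rng.choice(links[page])
        visits[page] = visits.get(page, 0) + 1
    return visits

if __name__ == "__main__":
    # a tiny synthetic web graph: page -> list of out-links
    web = {i: [(i + 1) % 50, (7 * i + 3) % 50, (i * i + 1) % 50] for i in range(50)}
    v = browse(web)
    top = sorted(v.items(), key=lambda kv: -kv[1])[:5]
    print("most revisited pages:", top)
```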
distributed lossy averaging
an information theoretic formulation of the distributed averaging problem previously studied in computer science and control is presented. we assume a network with m nodes each observing a wgn source. the nodes communicate and perform local processing with the goal of computing the average of the sources to within a prescribed mean squared error distortion. the network rate distortion function r^*(d) for a 2-node network with correlated gaussian sources is established. a general cutset lower bound on r^*(d) is established and shown to be achievable to within a factor of 2 via a centralized protocol over a star network. a lower bound on the network rate distortion function for distributed weighted-sum protocols, which is larger in order than the cutset bound by a factor of log m, is established. an upper bound on the network rate distortion function for gossip-based weighted-sum protocols, which is only log log m larger in order than the lower bound for a complete graph network, is established. the results suggest that using distributed protocols results in a factor of log m increase in order relative to centralized protocols.
fixing convergence of gaussian belief propagation
gaussian belief propagation (gabp) is an iterative message-passing algorithm for inference in gaussian graphical models. it is known that when gabp converges it converges to the correct map estimate of the gaussian random vector and simple sufficient conditions for its convergence have been established. in this paper we develop a double-loop algorithm for forcing convergence of gabp. our method computes the correct map estimate even in cases where standard gabp would not have converged. we further extend this construction to compute least-squares solutions of over-constrained linear systems. we believe that our construction has numerous applications, since the gabp algorithm is linked to solution of linear systems of equations, which is a fundamental problem in computer science and engineering. as a case study, we discuss the linear detection problem. we show that using our new construction, we are able to force convergence of montanari's linear detection algorithm, in cases where it would originally fail. as a consequence, we are able to increase significantly the number of users that can transmit concurrently.
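for concreteness, a compact sketch of the standard (single-loop) gabp iteration for solving ax = b on a small, diagonally dominant system where plain gabp already converges; the double-loop construction for forcing convergence described above is not reproduced here.

```python
import numpy as np

def gabp(A, b, iters=50):
    """Plain Gaussian belief propagation for A x = b (A symmetric; here
    diagonally dominant, so the standard iteration converges)."""
    n = len(b)
    P = np.zeros((n, n))   # message precisions P[i, j]: from node i to node j
    mu = np.zeros((n, n))  # message means
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0:
                    continue
                # aggregate the local potential and all incoming messages except j's
                p_i = A[i, i] + P[:, i].sum() - P[j, i]
                m_i = (b[i] + (P[:, i] * mu[:, i]).sum() - P[j, i] * mu[j, i]) / p_i
                P[i, j] = -A[i, j] ** 2 / p_i
                mu[i, j] = p_i * m_i / A[i, j]
    # node marginals: precision and mean (the mean is the solution estimate)
    prec = A.diagonal() + P.sum(axis=0)
    mean = (b + (P * mu).sum(axis=0)) / prec
    return mean

if __name__ == "__main__":
    A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(gabp(A, b), "vs", np.linalg.solve(A, b))
```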