on the approximability of the maximum agreement subtree and maximum compatible tree problems
this paper has been withdrawn by the corresponding author because the newest version is now published in discrete applied mathematics.
wavelet and curvelet moments for image classification: application to aggregate mixture grading
we show the potential for classifying images of aggregate mixtures, themselves composed of varying but well-defined sizes and shapes, as a far more effective approach than classifying individual sizes and shapes. while a dominant (additive, stationary) gaussian noise component in image data will ensure that wavelet coefficients are of gaussian distribution, long-tailed distributions (symptomatic, for example, of extreme values) may well hold in practice for wavelet coefficients. energy (the 2nd order moment) has often been used to characterize images for content-based retrieval, and higher order moments may also be important, not least for capturing long-tailed distributional behavior. in this work, we assess 2nd, 3rd and 4th order moments of multiresolution transform coefficients -- wavelet and curvelet -- as features. as analysis methodology, taking account of image types, multiresolution transforms, and moments of coefficients in the scales or bands, we use correspondence analysis as well as k-nearest neighbors supervised classification.
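a minimal sketch of the feature pipeline described above, assuming the pywavelets and scikit-learn packages and synthetic stand-in images; the paper's curvelet transform and correspondence analysis are not reproduced here.

```python
# Sketch: 2nd/3rd/4th-order moments of wavelet subband coefficients as features,
# classified with k-nearest neighbours. Assumes PyWavelets and scikit-learn;
# the paper's exact transforms (incl. curvelets) and data are not reproduced here.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.neighbors import KNeighborsClassifier

def wavelet_moment_features(image, wavelet="db2", levels=3):
    """Return per-subband 2nd, 3rd and 4th order moments as a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # coeffs[0] is the approximation; the rest are (horizontal, vertical, diagonal) tuples
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    features = []
    for band in subbands:
        c = band.ravel()
        features.extend([np.var(c),      # 2nd moment ("energy" about the mean)
                         skew(c),        # 3rd moment: asymmetry
                         kurtosis(c)])   # 4th moment: tail heaviness
    return np.array(features)

# Toy usage with random "images"; replace with real aggregate-mixture images.
rng = np.random.default_rng(0)
X = np.array([wavelet_moment_features(rng.normal(size=(64, 64))) for _ in range(20)])
y = np.array([0, 1] * 10)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:2]))
```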
are complex systems hard to evolve?
evolutionary complexity is here measured by the number of trials/evaluations needed for evolving a logical gate in a non-linear medium. behavioural complexity of the gates evolved is characterised in terms of cellular automata behaviour. we speculate that hierarchies of behavioural and evolutionary complexities are isomorphic up to some degree, subject to substrate specificity of evolution and the spectrum of evolution parameters.
rich, sturmian, and trapezoidal words
in this paper we explore various interconnections between rich words, sturmian words, and trapezoidal words. rich words, first introduced in arxiv:0801.1656 by the second and third authors together with j. justin and s. widmer, constitute a new class of finite and infinite words characterized by having the maximal number of palindromic factors. every finite sturmian word is rich, but not conversely. trapezoidal words were first introduced by the first author in studying the behavior of the subword complexity of finite sturmian words; unfortunately, the trapezoidal property does not characterize finite sturmian words. in this note we show that the only trapezoidal palindromes are sturmian. more generally we show that sturmian palindromes can be characterized either in terms of their subword complexity (the trapezoidal property) or in terms of their palindromic complexity. we also obtain a similar characterization of rich palindromes in terms of a relation between palindromic complexity and subword complexity.
directive words of episturmian words: equivalences and normalization
episturmian morphisms constitute a powerful tool to study episturmian words. indeed, any episturmian word can be infinitely decomposed over the set of pure episturmian morphisms. thus, an episturmian word can be defined by one of its morphic decompositions or, equivalently, by a certain directive word. here we characterize pairs of words directing a common episturmian word. we also propose a way to uniquely define any episturmian word through a normalization of its directive words. as a consequence of these results, we characterize episturmian words having a unique directive word.
sensing danger: innate immunology for intrusion detection
the immune system provides an ideal metaphor for anomaly detection in general and computer security in particular. based on this idea, artificial immune systems have been used for a number of years for intrusion detection, unfortunately so far with little success. however, these previous systems were largely based on immunological theory from the 1970s and 1980s and over the last decade our understanding of immunological processes has vastly improved. in this paper we present two new immune inspired algorithms based on the latest immunological discoveries, such as the behaviour of dendritic cells. the resultant algorithms are applied to real world intrusion problems and show encouraging results. overall, we believe there is a bright future for these next generation artificial immune algorithms.
safety alternating automata on data words
a data word is a sequence of pairs of a letter from a finite alphabet and an element from an infinite set, where the latter can only be compared for equality. safety one-way alternating automata with one register on infinite data words are considered, their nonemptiness is shown expspace-complete, and their inclusion decidable but not primitive recursive. the same complexity bounds are obtained for satisfiability and refinement, respectively, for the safety fragment of linear temporal logic with freeze quantification. dropping the safety restriction, adding past temporal operators, or adding one more register, each causes undecidability.
on disjoint matchings in cubic graphs
for $i=2,3$ and a cubic graph $g$, let $\nu_{i}(g)$ denote the maximum number of edges that can be covered by $i$ matchings. we show that $\nu_{2}(g)\geq \frac{4}{5}|v(g)|$ and $\nu_{3}(g)\geq \frac{7}{6}|v(g)|$. moreover, it turns out that $\nu_{2}(g)\leq \frac{|v(g)|+2\nu_{3}(g)}{4}$.
polynomial time algorithms for bi-criteria, multi-objective and ratio problems in clustering and imaging. part i: normalized cut and ratio regions
partitioning and grouping of similar objects plays a fundamental role in image segmentation and in clustering problems. in such problems a typical goal is to group together similar objects, or pixels in the case of image processing. at the same time another goal is to have each group distinctly dissimilar from the rest and possibly to have the group size fairly large. these goals are often combined as a ratio optimization problem. one example of such a problem is the normalized cut problem, another is the ratio regions problem. we devise here the first polynomial time algorithms solving these problems optimally. the algorithms are efficient and combinatorial. this contrasts with the heuristic approaches used in the image segmentation literature that formulate those problems as nonlinear optimization problems, which are then relaxed and solved with spectral techniques in real numbers. these approaches not only fail to deliver an optimal solution, but they are also computationally expensive. the algorithms presented here use as a subroutine a minimum $s,t$-cut procedure on a related graph which is of polynomial size. the output consists of the optimal solution to the respective ratio problem, as well as a sequence of nested solutions with respect to any relative weighting of the objectives of the numerator and denominator. an extension of the results here to bi-criteria and multi-criteria objective functions is presented in part ii.
stream sampling for variance-optimal estimation of subset sums
from a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. this is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. we present an efficient reservoir sampling scheme, $\mathrm{varopt}_k$, that dominates all previous schemes in terms of estimation quality. $\mathrm{varopt}_k$ provides {\em variance optimal unbiased estimation of subset sums}. more precisely, if we have seen $n$ items of the stream, then for {\em any} subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. in fact, the optimality is against any off-line scheme with $k$ samples tailored for the concrete set of items seen. in addition to optimal average variance, our scheme provides tighter worst-case bounds on the variance of {\em particular} subsets than previously possible. it is efficient, handling each new item of the stream in $O(\log k)$ time. finally, it is particularly well suited for combining samples from different streams in a distributed setting.
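the varopt_k algorithm itself is not reproduced here; as a hedged illustration of the same setting (a size-k weighted reservoir supporting unbiased subset-sum estimates), the sketch below implements priority sampling, an earlier and simpler scheme from this line of work.

```python
# Sketch of priority sampling (Duffield-Lund-Thorup), a simpler relative of the
# VarOpt_k scheme described above: keep the k items with the largest random
# priorities q_i = w_i / u_i, and estimate subset sums with max(w_i, threshold).
import heapq, random

def priority_sample(stream, k):
    """stream: iterable of (item, weight); returns (sample, threshold)."""
    heap = []        # min-heap of (priority, item, weight) holding the current top-k
    threshold = 0.0  # largest priority seen that is NOT in the top-k
    for item, w in stream:
        q = w / random.random()  # random priority
        if len(heap) < k:
            heapq.heappush(heap, (q, item, w))
        elif q > heap[0][0]:
            threshold = max(threshold, heapq.heappushpop(heap, (q, item, w))[0])
        else:
            threshold = max(threshold, q)
    return list(heap), threshold

def estimate_subset_sum(sample, threshold, predicate):
    """Unbiased estimate of the total weight of items satisfying `predicate`."""
    return sum(max(w, threshold) for _, item, w in sample if predicate(item))

random.seed(1)
stream = [(i, 1.0 + (i % 7)) for i in range(10000)]
sample, tau = priority_sample(stream, k=100)
print(estimate_subset_sum(sample, tau, lambda i: i % 2 == 0),
      sum(w for i, w in stream if i % 2 == 0))
```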
serious flaws in korf et al.'s analysis on time complexity of a*
this paper has been withdrawn.
on inversion formulas and fibonomial coefficients
a research problem for undergraduates and graduates is posed as a cap for the preceding regular discrete mathematics exercises. [here cap is not necessarily cap=competitive access provider, though nevertheless ...] the object of final interest in the cap problem, i.e. the array of fibonomial coefficients and the issue of its combinatorial meaning, is to be found in a.k.kwa\'sniewski's source papers. the cap problem number seven - still open for students - has been placed on the mathemagics page of the first author [http://ii.uwb.edu.pl/akk/dydaktyka/dyskr/dyskretna.htm]. the indicatory references point at a part of the vast domain of the foundations of computer science noted in the arxiv affiliation as co.cs.dm. the presentation has been verified in a tutor system of communication with a couple of intelligent students. the result is top secret, temporarily. [contact: wikipedia; theory of cognitive development].
dempster-shafer for anomaly detection
in this paper, we implement an anomaly detection system using the dempster-shafer method. using two standard benchmark problems we show that by combining multiple signals it is possible to achieve better results than by using a single signal. we further show that by applying this approach to a real-world email dataset the algorithm works for email worm detection. dempster-shafer can be a promising method for anomaly detection problems with multiple features (data sources), and two or more classes.
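a minimal sketch of the core combination step, dempster's rule, on a two-class frame {normal, anomalous}; the mapping from raw signals to mass functions is illustrative and not taken from the paper.

```python
# Sketch of Dempster's rule of combination for fusing evidence from two signals.
# The basic probability assignments below are illustrative stand-ins.
from itertools import product

FRAME = frozenset({"normal", "anomalous"})

def combine(m1, m2):
    """Dempster's rule: m(C) is proportional to the sum of m1(A)*m2(B) over A & B == C."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are contradictory")
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

# Two signals, each expressed as a mass function over subsets of FRAME.
signal_a = {frozenset({"anomalous"}): 0.6, FRAME: 0.4}                 # somewhat suspicious
signal_b = {frozenset({"anomalous"}): 0.3, frozenset({"normal"}): 0.2, FRAME: 0.5}
fused = combine(signal_a, signal_b)
belief_anomalous = sum(v for s, v in fused.items() if s <= frozenset({"anomalous"}))
print(fused, belief_anomalous)
```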
simulation optimization of the crossdock door assignment problem
the purpose of this report is to present the crossdock door assignment problem, which involves assigning destinations to outbound dock doors of crossdock centres such that the travel distance of material handling equipment is minimized. we propose a two-fold solution: simulation, and optimization of the simulation model (simulation optimization). the novel aspect of our solution approach is that we intend to use simulation to derive a more realistic objective function and use memetic algorithms to find an optimal solution. the main advantage of using memetic algorithms is that they combine a local search with genetic algorithms. the crossdock door assignment problem is a new application domain for memetic algorithms and it is yet unknown how they will perform.
using intelligent agents to understand organisational behaviour
this paper introduces two ongoing research projects which seek to apply computer modelling techniques in order to simulate human behaviour within organisations. previous research in other disciplines has suggested that complex social behaviours are governed by relatively simple rules which, when identified, can be used to accurately model such processes using computer technology. the broad objective of our research is to develop a similar capability within organisational psychology.
using intelligent agents to understand management practices and retail productivity
intelligent agents offer a new and exciting way of understanding the world of work. in this paper we apply agent-based modeling and simulation to investigate a set of problems in a retail context. specifically, we are working to understand the relationship between human resource management practices and retail productivity. despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities in the future. the project is still at an early stage. so far we have conducted a case study in a uk department store to collect data and capture impressions about operations and actors within departments. furthermore, based on our case study we have built and tested our first version of a retail branch simulator which we will present in this paper.
an agent-based simulation of in-store customer experiences
agent-based modelling and simulation offers a new and exciting way of understanding the world of work. in this paper we describe the development of an agent-based simulation model, designed to help to understand the relationship between human resource management practices and retail productivity. we report on the current development of our simulation model which includes new features concerning the evolution of customers over time. to test some of these features we have conducted a series of experiments dealing with customer pool sizes, standard and noise reduction modes, and the spread of word of mouth. our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. despite the fact that we are working within a relatively novel and complex domain, it is clear that intelligent agents offer potential for fostering sustainable organisational capabilities in the future.
genetic-algorithm seeding of idiotypic networks for mobile-robot navigation
robot-control designers have begun to exploit the properties of the human immune system in order to produce dynamic systems that can adapt to complex, varying, real-world tasks. jerne's idiotypic-network theory has proved the most popular artificial-immune-system (ais) method for incorporation into behaviour-based robotics, since idiotypic selection produces highly adaptive responses. however, previous efforts have mostly focused on evolving the network connections and have often worked with a single, pre-engineered set of behaviours, limiting variability. this paper describes a method for encoding behaviours as a variable set of attributes, and shows that when the encoding is used with a genetic algorithm (ga), multiple sets of diverse behaviours can develop naturally and rapidly, providing much greater scope for flexible behaviour-selection. the algorithm is tested extensively with a simulated e-puck robot that navigates around a maze by tracking colour. results show that highly successful behaviour sets can be generated within about 25 minutes, and that much greater diversity can be obtained when multiple autonomous populations are used, rather than a single one.
investigating a hybrid metaheuristic for job shop rescheduling
previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. the main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. the building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (phase i). each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). we further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (phase ii). we use ten fixed solutions as our target and measure how well we cover these specific scenarios.
an investigation of the sequential sampling method for crossdocking simulation output variance reduction
this paper investigates the reduction of variance associated with a simulation output performance measure, using the sequential sampling method while applying minimum simulation replications, for a class of jit (just in time) warehousing system called crossdocking. we initially used the sequential sampling method to attain a desired 95% confidence interval half-width of ±0.5 for our chosen performance measure (total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). from our results, we achieved a 95% confidence interval half-width of ±2.8 for our chosen performance measure (total usage cost, with an average mean value of 115,000 pounds). however, the sequential sampling method requires a huge number of simulation replications to reduce the variance of our simulation output value to the target level. arena (version 11) simulation software was used to conduct this study.
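a minimal sketch of the sequential-sampling stopping rule described above, assuming scipy for the student-t quantile; the toy replication function stands in for an arena model run and its numbers are illustrative.

```python
# Sketch of the sequential-sampling stopping rule: keep adding independent
# replications until the 95% confidence-interval half-width of the mean output
# falls below a target. run_replication is a stand-in for one simulation run.
import math, random
from statistics import mean, stdev
from scipy.stats import t

def sequential_sample(run_replication, target_half_width, min_reps=10, max_reps=100000):
    outputs = [run_replication() for _ in range(min_reps)]
    while True:
        n = len(outputs)
        half_width = t.ppf(0.975, n - 1) * stdev(outputs) / math.sqrt(n)
        if half_width <= target_half_width or n >= max_reps:
            return mean(outputs), half_width, n
        outputs.append(run_replication())

# Toy replication: a noisy "total usage cost" around 115,000; replace with a real model run.
random.seed(0)
est, hw, n = sequential_sample(lambda: random.gauss(115000, 3000), target_half_width=500)
print(f"mean={est:.0f}  95% half-width={hw:.1f}  replications={n}")
```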
improved squeaky wheel optimisation for driver scheduling
this paper presents a technique called improved squeaky wheel optimisation (iswo) for driver scheduling problems. it improves the original squeaky wheel optimisation's effectiveness and execution speed by incorporating two additional steps, selection and mutation, which implement evolution within a single solution. in the iswo, a cycle of analysis-selection-mutation-prioritization-construction continues until stopping conditions are reached. the analysis step first computes the fitness of a current solution to identify troublesome components. the selection step then discards these troublesome components probabilistically by using the fitness measure, and the mutation step follows to further discard a small number of components at random. after the above steps, an input solution becomes partial and thus the resulting partial solution needs to be repaired. the repair is carried out by using the prioritization step to first produce priorities that determine an order by which the following construction step then schedules the remaining components. therefore, the optimisation in the iswo is achieved by solution disruption, iterative improvement and an iterative constructive repair process. encouraging experimental results are reported.
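a toy sketch of the analysis-selection-mutation-prioritization-construction cycle on a small capacitated assignment problem; the representation, operators and data are stand-ins, not the paper's driver-scheduling model.

```python
# Toy sketch of the ISWO-style cycle: analyse component fitness, probabilistically
# drop bad components (selection), drop a few at random (mutation), then repair
# the partial solution by prioritized greedy construction.
import random

random.seed(0)
N, M, CAP = 20, 5, 4   # 20 "drivers", 5 "duties", each duty takes at most 4 drivers
COST = [[random.randint(1, 100) for _ in range(M)] for _ in range(N)]

def construct(assign):
    """Repair step: fill unassigned drivers, worst-off first (prioritization),
    into the cheapest duty that still has capacity (construction)."""
    load = [0] * M
    for d, j in enumerate(assign):
        if j is not None:
            load[j] += 1
    order = sorted((d for d in range(N) if assign[d] is None),
                   key=lambda d: -min(COST[d]))
    for d in order:
        j = min((j for j in range(M) if load[j] < CAP), key=lambda j: COST[d][j])
        assign[d], load[j] = j, load[j] + 1
    return assign

def total(assign):
    return sum(COST[d][assign[d]] for d in range(N))

solution = construct([None] * N)
best = total(solution)
for _ in range(500):
    fitness = [COST[d][solution[d]] for d in range(N)]   # analysis
    worst = max(fitness)
    partial = list(solution)
    for d in range(N):
        if random.random() < 0.5 * fitness[d] / worst:   # selection: drop costly parts
            partial[d] = None
        elif random.random() < 0.05:                     # mutation: drop a few at random
            partial[d] = None
    candidate = construct(partial)                       # prioritization + construction
    if total(candidate) <= best:
        solution, best = candidate, total(candidate)
print("best total cost:", best)
```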
the application of bayesian optimization and classifier systems in nurse scheduling
two ideas taken from bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. unlike our previous work of using genetic algorithms, whose learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. to achieve this target, the bayesian optimization algorithm builds a bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. it is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
danger theory: the link between ais and ids?
we present ideas about creating a next generation intrusion detection system based on the latest immunological theories. the central challenge with computer security is determining the difference between normal and potentially harmful activity. for half a century, developers have protected their systems by coding rules that identify and block specific events. however, the nature of current and future threats in conjunction with ever larger it systems urgently requires the development of automated and adaptive defensive tools. a promising solution is emerging in the form of artificial immune systems. the human immune system can detect and defend against harmful and previously unseen invaders, so why can we not build a similar intrusion detection system for our computers?
constant-rank codes and their connection to constant-dimension codes
constant-dimension codes have recently received attention due to their significance to error control in noncoherent random linear network coding. determining the maximal cardinality of a constant-dimension code with finite dimension and minimum distance, and constructing optimal constant-dimension codes that achieve this maximal cardinality, both remain open research problems. in this paper, we introduce a new approach to solving these two problems. we first establish a connection between constant-rank codes and constant-dimension codes. via this connection, we show that optimal constant-dimension codes correspond to optimal constant-rank codes over matrices with sufficiently many rows. as such, the two aforementioned problems are equivalent to determining the maximum cardinality of constant-rank codes and to constructing optimal constant-rank codes, respectively. to this end, we then derive bounds on the maximum cardinality of a constant-rank code with a given minimum rank distance, propose explicit constructions of optimal or asymptotically optimal constant-rank codes, and establish asymptotic bounds on the maximum rate of a constant-rank code.
enhanced direct and indirect genetic algorithm approaches for a mall layout and tenant selection problem
during our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. previously this balance was achieved manually. here we extend this by presenting an automated approach where the genetic algorithm itself, simultaneously to solving the problem, sets weights to balance the components out. subsequently we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.
an indirect genetic algorithm for set covering problems
this paper presents a new type of genetic algorithm for the set covering problem. it differs from previous evolutionary approaches first because it is an indirect algorithm, i.e. the actual solutions are found by an external decoder function. the genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. second, it will be shown that results can be further improved by adding another indirect optimisation layer. the decoder will not directly seek out low cost solutions but instead aims for good exploitable solutions. these are then post optimised by another hill-climbing algorithm. although seemingly more complicated, we will show that this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches to the same data instances.
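a minimal sketch of the indirect idea: a decoder that turns a permutation of the sets into a feasible cover by greedy scanning. the toy instance, costs and the random search standing in for the genetic algorithm are all illustrative.

```python
# Sketch of an indirect decoder for set covering: scan the sets in the order given
# by a permutation and keep each set that covers something new. A GA would evolve
# the permutation; here a random search over permutations stands in for it.
import random

UNIVERSE = set(range(10))
SETS = [{0, 1, 2, 3}, {2, 3, 4}, {4, 5, 6}, {6, 7}, {7, 8, 9}, {1, 5, 9}, {0, 8}, {3, 6, 9}]
COSTS = [4, 2, 3, 1, 3, 2, 2, 3]   # toy costs, one per set

def decode(permutation):
    """Greedy decoder: keep sets (in permutation order) that add new elements."""
    covered, chosen = set(), []
    for idx in permutation:
        if not UNIVERSE <= covered and SETS[idx] - covered:
            chosen.append(idx)
            covered |= SETS[idx]
    return chosen if UNIVERSE <= covered else None

random.seed(0)
best = None
for _ in range(200):   # stand-in for the GA's search over permutations
    perm = random.sample(range(len(SETS)), len(SETS))
    cover = decode(perm)
    if cover is not None:
        cost = sum(COSTS[i] for i in cover)
        if best is None or cost < best[0]:
            best = (cost, cover)
print("best cover cost and sets:", best)
```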
on the application of hierarchical coevolutionary genetic algorithms: recombination and evaluation partners
this paper examines the use of a hierarchical coevolutionary genetic algorithm under different partnering strategies. cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. hence higher-level sub-populations potentially search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. the effects of different partner selection schemes amongst the sub-populations on solution quality are examined for two constrained optimisation problems. we examine a number of recombination partnering strategies in the construction of higher-level individuals and a number of related schemes for evaluating sub-solutions. it is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub)fitness measurements.
building better nurse scheduling algorithms
the aim of this research is twofold: firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. the comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. this is achieved even when some results are non-numeric or missing due to infeasibility. the final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
an indirect genetic algorithm for a nurse scheduling problem
this paper describes a genetic algorithms approach to a manpower-scheduling problem arising at a major uk hospital. although genetic algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical genetic algorithms paradigm in handling the conflict between objectives and constraints. the approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. the results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published tabu search approach.
a recommender system based on idiotypic artificial immune networks
the immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. this paper presents an artificial immune system (ais) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (cf). natural evolution and in particular the immune system have not been designed for classical optimisation. however, for this problem, we are not interested in finding a single optimum. rather we intend to identify a sub-set of good matches on which recommendations can be based. it is our hypothesis that an ais built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. computational results are presented in support of this conjecture and compared to those found by other cf techniques.
rule generalisation in intrusion detection systems using snort
intrusion detection systems (ids) provide an important layer of security for computer systems and networks, and are becoming more and more necessary as reliance on internet services increases and systems with sensitive data are more commonly open to internet access. the responsibility of an ids is to detect suspicious or unacceptable system and network activity and to alert a systems administrator to this activity. the majority of ids use a set of signatures that define what suspicious traffic is, and snort is one popular and actively developed open-source ids that uses such a set of signatures, known as snort rules. our aim is to identify a way in which snort could be developed further by generalising rules to identify novel attacks. in particular, we attempted to relax and vary the conditions and parameters of current snort rules, using an approach similar to classic rule learning operators such as generalisation and specialisation. we demonstrate the effectiveness of our approach through experiments with standard datasets and show that we are able to detect previously undetected variants of various attacks. we conclude by discussing the general effectiveness and appropriateness of generalisation in snort-based ids rule processing.
an estimation of distribution algorithm for nurse scheduling
schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. this paper presents an estimation of distribution algorithm (eda) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. unlike previous work that used genetic algorithms (ga) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. the eda is applied to implement such explicit learning by building a bayesian network of the joint distribution of solutions. the conditional probability of each variable in the network is computed according to an initial set of promising solutions. subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. if stopping conditions are not met, the conditional probabilities for all nodes in the bayesian network are updated again using the current set of promising rule strings. computational results from 52 real data instances demonstrate the success of this approach. it is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
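a simplified sketch of the rule-string idea: the paper learns a bayesian network over rule choices, whereas the stand-in below uses a univariate (umda-style) probability model and a toy fitness, so it only illustrates the sample-select-re-estimate loop.

```python
# Simplified EDA sketch: each of N nurses gets one of R scheduling rules; a
# probability model over rule choices is sampled to create rule strings, and the
# best strings re-estimate the model. The paper uses a Bayesian network; this
# univariate model and the toy fitness are stand-ins.
import random

N_NURSES, N_RULES, POP, ELITE, GENERATIONS = 15, 4, 60, 15, 40
random.seed(0)
TARGET = [random.randrange(N_RULES) for _ in range(N_NURSES)]   # toy "ideal" rule string

def fitness(rules):
    # Toy objective: how many positions agree with the hidden target string.
    return sum(r == t for r, t in zip(rules, TARGET))

probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_NURSES)]    # P(rule j at position i)
for g in range(GENERATIONS):
    population = [[random.choices(range(N_RULES), weights=probs[i])[0]
                   for i in range(N_NURSES)] for _ in range(POP)]
    population.sort(key=fitness, reverse=True)
    elites = population[:ELITE]
    for i in range(N_NURSES):                                   # re-estimate the model
        counts = [sum(ind[i] == j for ind in elites) for j in range(N_RULES)]
        probs[i] = [(c + 1) / (ELITE + N_RULES) for c in counts]  # Laplace smoothing
print("best rule string fitness:", fitness(population[0]), "of", N_NURSES)
```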
idiotypic immune networks in mobile robot control
jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. the theory has proved the most popular artificial immune system (ais) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. this paper aims to address these issues. a method for integrating an idiotypic ais network with a reinforcement learning based control system (rl) is described and the mechanisms underlying antibody stimulation and suppression are explained in detail. some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity: the basic rl, a simplified hybrid ais-rl that implements idiotypic selection independently of derived concentration levels, and a full hybrid ais-rl scheme. the test bed takes the form of a simulated pioneer robot that is required to navigate through maze worlds detecting and tracking door markers.
robustness and regularization of support vector machines
we consider regularized support vector machines (svms) and show that they are precisely equivalent to a new robust optimization formulation. we show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. in terms of algorithms, the equivalence suggests more general svm-like algorithms for classification that explicitly build in protection against noise and at the same time control overfitting. on the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized svms. we use this new robustness interpretation of svms to give a new proof of consistency of (kernelized) svms, thus establishing robustness as the reason regularized svms generalize well.
a component based heuristic search method with adaptive perturbations for hospital personnel scheduling
nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. this paper presents a new component-based approach with adaptive perturbations, for a nurse scheduling problem arising at a major uk hospital. the main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then mimic a natural evolutionary process on these components to iteratively deliver better schedules. the worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. this demonstration employs a dynamic evaluation function which evaluates how well each component contributes towards the final objective. two perturbation steps are then applied: the first perturbation eliminates a number of components that are deemed not worthy to stay in the current schedule; the second perturbation may also throw out, with a low level of probability, some worthy components. the eliminated components are replenished with new ones using a set of constructive heuristics using local optimality criteria. computational results using 52 data instances demonstrate the applicability of the proposed approach in solving real-world problems.
artificial immune systems tutorial
the biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. it is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. it does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. there are two major branches of the immune system. the innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. this remarkable information processing biological system has caught the attention of computer science in recent years. a novel computational intelligence technique, inspired by immunology, has emerged, called artificial immune systems. several concepts from the immune system have been extracted and applied to the solution of real-world science and engineering problems. in this tutorial, we briefly describe the immune system metaphors that are relevant to existing artificial immune systems methods. we will then show illustrative real-world problems suitable for artificial immune systems and give a step-by-step algorithm walkthrough for one such problem. a comparison of the artificial immune systems to other well-known algorithms, areas for future work, tips & tricks and a list of resources will round this tutorial off. it should be noted that as artificial immune systems is still a young and evolving field, there is not yet a fixed algorithm template and hence actual implementations might differ somewhat over time and from the examples given here.
reflective visualization and verbalization of unconscious preference
a new method is presented that can help a person become aware of his or her unconscious preferences and convey them to others in the form of verbal explanation. the method combines the concepts of reflection, visualization, and verbalization. the method was tested in an experiment where the unconscious preferences of the subjects for various artworks were investigated. two lessons were learned from the experiment. the first is that verbalizing weak preferences, as compared with strong preferences, through discussion over preference diagrams helps the subjects become aware of their unconscious preferences. the second is that it is effective to introduce an adjustable factor into the visualization to adapt to differences between subjects and to foster their mutual understanding.
optimization of enzymatic biochemical logic for noise reduction and scalability: how many biocomputing gates can be interconnected in a circuit?
we report an experimental evaluation of the "input-output surface" for a biochemical AND gate. the obtained data are modeled within the rate-equation approach, with the aim to map out the gate function and cast it in the language of logic variables appropriate for analysis of boolean logic for scalability. we consider a theoretical approach for determining an optimal set of process parameters that minimizes "analog" noise amplification under gate concatenation. we establish that under optimized conditions, presently studied biochemical gates can be concatenated for up to order 10 processing steps. beyond that, new paradigms for avoiding noise build-up will have to be developed. we offer a general discussion of the ideas and possible future challenges for both experimental and theoretical research for advancing scalable biochemical computing.
cryptanalysis of two mceliece cryptosystems based on quasi-cyclic codes
we cryptanalyse here two variants of the mceliece cryptosystem based on quasi-cyclic codes. both aim at reducing the key size by restricting the public and secret generator matrices to be in quasi-cyclic form. the first variant considers subcodes of a primitive bch code. we prove that this variant is not secure by finding and solving a linear system satisfied by the entries of the secret permutation matrix. the other variant uses quasi-cyclic low density parity-check codes. this scheme was devised to be immune against general attacks working for mceliece type cryptosystems based on low density parity-check codes by choosing in the mceliece scheme more general one-to-one mappings than permutation matrices. we suggest here a structural attack exploiting the quasi-cyclic structure of the code and a certain weakness in the choice of the linear transformations that hide the generator matrix of the code. our analysis shows that with high probability a parity-check matrix of a punctured version of the secret code can be recovered in cubic time complexity in its length. the complete reconstruction of the secret parity-check matrix of the quasi-cyclic low density parity-check codes requires the search of codewords of low weight which can be done with about $2^{37}$ operations for the specific parameters proposed.
bayesian optimisation algorithm for nurse scheduling
our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. this chapter presents a bayesian optimization algorithm (boa) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. based on the idea of using probabilistic models, the boa builds a bayesian network for the set of promising solutions and samples these networks to generate new candidate solutions. computational results from 52 real data instances demonstrate the success of this approach. it is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
an artificial immune system as a recommender system for web sites
artificial immune systems have been used successfully to build recommender systems for film databases. in this research, an attempt is made to extend this idea to web site recommendation. a collection of more than 1000 individuals' web profiles (alternatively called preference / favourites / bookmarks files) will be used. urls will be classified using the dmoz (directory mozilla) database of the open directory project as our ontology. this will then be used as the data for the artificial immune systems rather than the actual addresses. the first attempt will involve using a simple classification code number coupled with the number of pages within that classification code. however, this implementation does not make use of the hierarchical tree-like structure of dmoz. consideration will then be given to the construction of a similarity measure for web profiles that makes use of this hierarchical information to build a better-informed artificial immune system.
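a sketch of one plausible hierarchy-aware similarity between two profiles of dmoz-style category paths (shared path-prefix length); this is an illustration of the idea, not necessarily the measure the authors construct.

```python
# Sketch of a hierarchy-aware similarity between two web profiles, each a list of
# DMOZ-style category paths: categories sharing a longer path prefix count as more
# similar. Category names below are hypothetical examples.
def prefix_similarity(cat_a, cat_b):
    """Shared-prefix length normalised by the longer path length."""
    a, b = cat_a.split("/"), cat_b.split("/")
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return shared / max(len(a), len(b))

def profile_similarity(profile_a, profile_b):
    """Average best-match similarity from each category in A to B (asymmetric)."""
    return sum(max(prefix_similarity(c, d) for d in profile_b)
               for c in profile_a) / len(profile_a)

alice = ["Computers/AI/Machine_Learning", "Science/Math/Combinatorics"]
bob = ["Computers/AI/Neural_Networks", "Recreation/Travel"]
print(profile_similarity(alice, bob))
```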
explicit learning: an effort towards human scheduling algorithms
scheduling problems are generally np-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. however, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. mimicking the natural evolutionary process of the survival of the fittest, genetic algorithms (gas) have attracted much attention in solving difficult scheduling problems in recent years. some obstacles exist when using gas: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. to overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where gas are used by mapping the solution space, and separate decoding routines then build solutions to the original problem.
a memetic algorithm for the generalized traveling salesman problem
the generalized traveling salesman problem (gtsp) is an extension of the well-known traveling salesman problem. in gtsp, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. the recent studies on this subject consider different variations of a memetic algorithm approach to the gtsp. the aim of this paper is to present a new memetic algorithm for gtsp with a powerful local search procedure. the experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. while the other memetic algorithms were designed only for the symmetric gtsp, our algorithm can solve both symmetric and asymmetric instances.
generalized traveling salesman problem reduction algorithms
the generalized traveling salesman problem (gtsp) is an extension of the well-known traveling salesman problem. in gtsp, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. the aim of this paper is to present a problem reduction algorithm that deletes redundant vertices and edges, preserving the optimal solution. the algorithm's running time is $O(n^3)$ in the worst case, but it is significantly faster in practice. the algorithm has reduced the problem size by 15-20% on average in our experiments and this has decreased the solution time by 10-60% for each of the considered solvers.
immune system approaches to intrusion detection - a review
the use of artificial immune systems in intrusion detection is an appealing concept for two reasons. firstly, the human immune system provides the human body with a high level of protection from invading pathogens, in a robust, self-organised and distributed manner. secondly, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. it is hoped that biologically inspired approaches in this area, including the use of immune-based systems will be able to meet this challenge. here we review the algorithms used, the development of the systems and the outcome of their implementation. we provide an introduction and analysis of the key developments within this field, in addition to making suggestions for future research.
data reduction in intrusion alert correlation
network intrusion detection sensors are usually built around low level models of network traffic. this means that their output is of a similarly low level and as a consequence, is difficult to analyze. intrusion alert correlation is the task of automating some of this analysis by grouping related alerts together. attack graphs provide an intuitive model for such analysis. unfortunately alert flooding attacks can still cause a loss of service on sensors, and when performing attack graph correlation, there can be a large number of extraneous alerts included in the output graph. this obscures the fine structure of genuine attacks and makes them more difficult for human operators to discern. this paper explores modified correlation algorithms which attempt to minimize the impact of this attack.
mechanizing the metatheory of lf
lf is a dependent type theory in which many other formal systems can be conveniently embedded. however, correct use of lf relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for lf's judgments. although detailed informal proofs of these properties have been published, they have not been formally verified in a theorem prover. we have formalized these properties within isabelle/hol using the nominal datatype package, closely following a recent article by harper and pfenning. in the process, we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others. we also formally derive a version of the type checking algorithm from which isabelle/hol can generate executable code. besides its intrinsic interest, our formalization provides a foundation for studying the adequacy of lf encodings, the correctness of twelf-style metatheoretic reasoning, and the metatheory of extensions to lf.
on affine usages in signal-based communication
we describe a type system for a synchronous pi-calculus formalising the notion of affine usage in signal-based communication. in particular, we identify a limited number of usages that preserve affinity and that can be composed. as a main application of the resulting system, we show that typable programs are deterministic.
necessary and sufficient conditions on sparsity pattern recovery
the problem of detecting the sparsity pattern of a k-sparse vector in r^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. this paper addresses the scaling of the number of measurements m, with the signal dimension n and the sparsity level (number of nonzeros) k, for asymptotically-reliable detection. we show that a necessary condition for perfect recovery at any given snr, for all algorithms regardless of complexity, is $m = \Omega(k \log(n-k))$ measurements. conversely, it is shown that this scaling of $\Omega(k \log(n-k))$ measurements is sufficient for a remarkably simple ``maximum correlation'' estimator. hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. the constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the snr. the necessary condition improves upon previous results for maximum likelihood estimation. for lasso, it also provides a necessary condition at any snr and for low snr improves upon previous work. the sufficient condition provides the first asymptotically-reliable detection guarantee at finite snr.
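a minimal sketch of the ``maximum correlation'' estimator described above: pick the k columns of the measurement matrix most correlated with the observation. problem sizes and the snr below are illustrative only.

```python
# Sketch of the "maximum correlation" estimator for sparsity-pattern recovery:
# from y = A x + noise, declare the support to be the k columns of A most
# correlated with y. Sizes and SNR are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, snr = 200, 120, 5, 10.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 1.0
noise = rng.standard_normal(m) * np.linalg.norm(A @ x) / (np.sqrt(snr) * np.sqrt(m))
y = A @ x + noise

correlations = np.abs(A.T @ y)                     # correlate y with every column
estimated_support = np.argsort(correlations)[-k:]  # keep the k largest correlations
print("true:", sorted(support), "estimated:", sorted(estimated_support))
```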
towards physarum robots: computing and manipulating on water surface
plasmodium of physarum polycephalum is an ideal biological substrate for implementing concurrent and parallel computation, including combinatorial geometry and optimization on graphs. we report results of scoping experiments on physarum computing in conditions of minimal friction, on the water surface. we show that the plasmodium of physarum is capable of computing basic spanning trees and of manipulating light-weight objects. we speculate that our results pave the way towards the design and implementation of amorphous biological robots.
interlace polynomials: enumeration, unimodality, and connections to codes
the interlace polynomial q was introduced by arratia, bollobas, and sorkin. it encodes many properties of the orbit of a graph under edge local complementation (elc). the interlace polynomial Q, introduced by aigner and van der holst, similarly contains information about the orbit of a graph under local complementation (lc). we have previously classified lc and elc orbits, and now give an enumeration of the corresponding interlace polynomials of all graphs of order up to 12. an enumeration of all circle graphs of order up to 12 is also given. we show that there exist graphs of all orders greater than 9 with interlace polynomials q whose coefficient sequences are non-unimodal, thereby disproving a conjecture by arratia et al. we have verified that for graphs of order up to 12, all polynomials Q have unimodal coefficients. it has been shown that lc and elc orbits of graphs correspond to equivalence classes of certain error-correcting codes and quantum states. we show that the properties of these codes and quantum states are related to properties of the associated interlace polynomials.
towards a stable definition of kolmogorov-chaitin complexity
although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of k(s), the kolmogorov-chaitin complexity of a string s. some attempts have been made to arrive at a framework stable enough for a concrete definition of k, independent of any constant under a programming language, by appealing to the "naturalness" of the language in question. the aim of this paper is to present an approach to overcome the problem by looking at a set of models of computation converging in output probability distribution such that the "naturalness" can be inferred, thereby providing a framework for a stable definition of k under the set of convergent models of computation.
a non-distillability criterion for secret correlations
within entanglement theory there are criteria which certify that some quantum states cannot be distilled into pure entanglement. an example is the positive partial transposition criterion. here we present, for the first time, an analogous criterion for secret correlations. we introduce a computable criterion which certifies that a probability distribution between two honest parties and an eavesdropper cannot be (asymptotically) distilled into a secret key. the existence of non-distillable correlations with positive secrecy cost, also known as bound information, is an open question. this criterion may be the key for finding bound information. however, if it turns out that this criterion does not detect bound information, then a very interesting consequence follows: any distribution with positive secrecy cost can increase the secrecy content of another distribution. in other words, all correlations with positive secrecy cost constitute a useful resource.
alternating automata on data trees and xpath satisfiability
a data tree is an unranked ordered tree whose every node is labelled by a letter from a finite alphabet and an element ("datum") from an infinite set, where the latter can only be compared for equality. the article considers alternating automata on data trees that can move downward and rightward, and have one register for storing data. the main results are that nonemptiness over finite data trees is decidable but not primitive recursive, and that nonemptiness of safety automata is decidable but not elementary. the proofs use nondeterministic tree automata with faulty counters. allowing upward moves, leftward moves, or two registers, each causes undecidability. as corollaries, decidability is obtained for two data-sensitive fragments of the xpath query language.
decoding generalized concatenated codes using interleaved reed-solomon codes
generalized concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. as outer codes, we assume the most frequently used reed-solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. decoding up to half the minimum distance of generalized concatenated codes is classically achieved by the blokh-zyablov-dumer algorithm, which iteratively decodes by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. in this paper, a modified version of the blokh-zyablov-dumer algorithm is proposed, which exploits the fact that a number of outer reed-solomon codes with average minimum distance d can be grouped into one single interleaved reed-solomon code which can be decoded beyond d/2. this allows us, on the one hand, to skip a number of decoding iterations and, on the other, to significantly reduce the complexity of each decoding iteration while maintaining the decoding performance.
upper bounds for alpha-domination parameters
in this paper, we provide a new upper bound for the alpha-domination number. this result generalises the well-known caro-roditty bound for the domination number of a graph. the same probabilistic construction is used to generalise another well-known upper bound for the classical domination in graphs. we also prove similar upper bounds for the alpha-rate domination number, which combines the concepts of alpha-domination and k-tuple domination.
order to disorder transitions in hybrid intelligent systems: a hatch to the interactions of nations-governments
in this study, under the general frame of many connected intelligent particles systems (macips), we reproduce two new simple subsets of such intelligent complex networks, namely hybrid intelligent systems, involving a few prominent intelligent computing and approximate reasoning methods: self-organizing feature maps (som), neuro-fuzzy inference systems and rough set theory (rst). building on this, we show how our algorithms can be construed as a linkage of government-society interaction, where the government adopts various fashions of behavior: solid (absolute) or flexible. thus, the transition of such a society from order to disorder, driven by changes of the connectivity parameters (noise), is inferred. in addition, one may find an indirect mapping between financial systems and eventual market fluctuations with macips.
quasiperiodic and lyndon episturmian words
recently the second two authors characterized quasiperiodic sturmian words, proving that a sturmian word is non-quasiperiodic if and only if it is an infinite lyndon word. here we extend this study to episturmian words (a natural generalization of sturmian words) by describing all the quasiperiods of an episturmian word, which yields a characterization of quasiperiodic episturmian words in terms of their "directive words". even further, we establish a complete characterization of all episturmian words that are lyndon words. our main results show that, unlike the sturmian case, there is a much wider class of episturmian words that are non-quasiperiodic, besides those that are infinite lyndon words. our key tools are morphisms and directive words, in particular "normalized" directive words, which we introduced in an earlier paper. also of importance is the use of "return words" to characterize quasiperiodic episturmian words, since such a method could be useful in other contexts.
submodular approximation: sampling-based algorithms and lower bounds
we introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. the new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. we establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. the approximation guarantees for most of our algorithms are of the order of sqrt(n/ln n). we show that this is the inherent difficulty of the problems by proving matching lower bounds. we also give an improved lower bound for the problem of approximately learning a monotone submodular function. in addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. this demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.
greedy forwarding in dynamic scale-free networks embedded in hyperbolic metric spaces
we show that complex (scale-free) network topologies naturally emerge from hyperbolic metric spaces. hyperbolic geometry facilitates maximally efficient greedy forwarding in these networks. greedy forwarding is topology-oblivious. nevertheless, greedy packets find their destinations with 100% probability following almost optimal shortest paths. this remarkable efficiency is sustained even in highly dynamic networks. our findings suggest that forwarding information through complex networks, such as the internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.
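a minimal sketch of greedy forwarding between nodes embedded in the hyperbolic plane, using the standard hyperbolic distance in polar coordinates; the tiny random graph is only for illustration, not a scale-free topology.

```python
# Sketch of greedy forwarding on nodes embedded in the hyperbolic plane: each hop
# goes to the neighbour hyperbolically closest to the destination. The small
# random graph below is illustrative; the paper studies scale-free topologies.
import math, random

def hyperbolic_distance(p, q):
    """Distance between points p=(r1, theta1), q=(r2, theta2) in the hyperbolic plane."""
    (r1, t1), (r2, t2) = p, q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))   # angular difference wrapped to [0, pi]
    arg = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(arg, 1.0))                 # guard against rounding below 1

def greedy_route(coords, neighbours, src, dst, max_hops=50):
    path = [src]
    while path[-1] != dst and len(path) <= max_hops:
        here = path[-1]
        nxt = min(neighbours[here], key=lambda v: hyperbolic_distance(coords[v], coords[dst]))
        if hyperbolic_distance(coords[nxt], coords[dst]) >= hyperbolic_distance(coords[here], coords[dst]):
            return None                              # stuck in a local minimum: delivery fails
        path.append(nxt)
    return path if path[-1] == dst else None

random.seed(0)
n = 30
coords = [(random.uniform(0.1, 3.0), random.uniform(0, 2 * math.pi)) for _ in range(n)]
neighbours = {v: set() for v in range(n)}
for v in range(n):   # connect each node to its 4 hyperbolically nearest neighbours
    for u in sorted(range(n), key=lambda u: hyperbolic_distance(coords[v], coords[u]))[1:5]:
        neighbours[v].add(u)
        neighbours[u].add(v)
print(greedy_route(coords, neighbours, src=0, dst=n - 1))
```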
physically-relativized church-turing hypotheses
we turn `the' church-turing hypothesis from an ambiguous source of sensational speculations into a (collection of) sound and well-defined scientific problem(s): examining recent controversies, and causes for misunderstanding, concerning the state of the church-turing hypothesis (cth) suggests studying the cth relative to an arbitrary but specific physical theory, rather than vaguely referring to ``nature'' in general. to this end we combine (and compare) physical structuralism with (models of computation in) complexity theory. the benefit of this formal framework is illustrated by reporting on some previous example results, and giving one new one, on computability and complexity in computational physics.
grammatical evolution with restarts for fast fractal generation
in a previous work, the authors proposed a grammatical evolution algorithm to automatically generate lindenmayer systems which represent fractal curves with a pre-determined fractal dimension. this paper gives strong statistical evidence that the probability distribution of the execution time of that algorithm exhibits a heavy tail with hyperbolic probability decay for long executions, which explains the erratic performance of different executions of the algorithm. three different restart strategies have been incorporated into the algorithm to mitigate the problems associated with heavy-tailed distributions: the first assumes full knowledge of the execution time probability distribution, the second and third assume no knowledge. these strategies exploit the fact that the probability of finding a solution in short executions is non-negligible and yield a severe reduction both in the expected execution time (up to one order of magnitude) and in its variance, which is reduced from an infinite to a finite value.
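the following python sketch shows the generic shape of a restart strategy of the kind compared above: run the randomized search up to a cutoff number of evaluations and restart with a fresh seed on failure. the fixed cutoff and the interface of `search` are illustrative assumptions, not the paper's exact strategies.

```python
# a generic restart wrapper: run the randomized search with a cutoff on the
# number of evaluations and restart with a fresh random seed on failure.
# `search(seed, cutoff)` is an assumed interface returning a solution or None.
import random

def run_with_restarts(search, cutoff, max_restarts=100):
    for attempt in range(max_restarts):
        sol = search(seed=random.randrange(2**32), cutoff=cutoff)
        if sol is not None:
            return sol, attempt        # solution and number of restarts used
    return None, max_restarts
```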
certified exact transcendental real number computation in coq
reasoning about real number expressions in a proof assistant is challenging. several problems in theorem proving can be solved by using exact real number computation. i have implemented a library for reasoning and computing with complete metric spaces in the coq proof assistant and used this library to build a constructive real number implementation including elementary real number functions and proofs of correctness. using this library, i have created a tactic that automatically proves strict inequalities over closed elementary real number expressions by computation.
on the entropy and log-concavity of compound poisson measures
motivated, in part, by the desire to develop an information-theoretic foundation for compound poisson approximation limit theorems (analogous to the corresponding developments for the central limit theorem and for simple poisson approximation), this work examines sufficient conditions under which the compound poisson distribution has maximal entropy within a natural class of probability measures on the nonnegative integers. we show that the natural analog of the poisson maximum entropy property remains valid if the measures under consideration are log-concave, but that it fails in general. a parallel maximum entropy result is established for the family of compound binomial measures. the proofs are largely based on ideas related to the semigroup approach introduced in recent work by johnson for the poisson family. sufficient conditions are given for compound distributions to be log-concave, and specific examples are presented illustrating all the above results.
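for reference, a standard definition of the compound poisson law discussed above (a poisson-random sum of i.i.d. summands), stated here as commonly found in the literature rather than quoted from the paper:

```latex
\[
  \mathrm{CP}(\lambda, Q) \;=\; \mathcal{L}\!\Big(\sum_{i=1}^{N} X_i\Big),
  \qquad N \sim \mathrm{Poisson}(\lambda), \qquad X_1, X_2, \dots \overset{\text{i.i.d.}}{\sim} Q .
\]
```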
subresultants in recursive polynomial remainder sequence
we introduce the concepts of "recursive polynomial remainder sequence (prs)" and "recursive subresultant," and investigate their properties. in calculating a prs, if the initial polynomials have a non-constant gcd (greatest common divisor), we calculate a new prs "recursively" for the gcd and its derivative, until a constant is derived. we call such a prs a recursive prs. we define recursive subresultants to be determinants representing the coefficients in the recursive prs in terms of the coefficients of the initial polynomials. finally, we discuss the usage of recursive subresultants in approximate algebraic computation, which motivates the present work.
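a minimal python/sympy sketch of the recursive prs construction described above: compute a euclidean prs, and whenever the gcd it ends in is non-constant, start a new prs for the gcd and its derivative. the euclidean (rather than subresultant) remainder rule and the helper names are assumptions for illustration; the determinant (subresultant) machinery is not reproduced.

```python
# recursive prs sketch with sympy: a euclidean prs is computed, and if the
# gcd it ends in is non-constant, a new prs is started for the gcd and its
# derivative, until a constant is reached. function names are assumptions.
from sympy import symbols, rem, diff, degree

x = symbols('x')

def euclidean_prs(f, g):
    """euclidean polynomial remainder sequence of f and g (final zero remainder dropped)."""
    seq = [f, g]
    while seq[-1] != 0:
        seq.append(rem(seq[-2], seq[-1], x))
    return seq[:-1]

def recursive_prs(f, g):
    """list of prs levels: recurse on the gcd and its derivative until a constant."""
    levels = []
    while True:
        seq = euclidean_prs(f, g)
        levels.append(seq)
        last = seq[-1]                       # gcd of f and g up to a constant factor
        if degree(last, x) <= 0:             # constant reached: recursion stops
            return levels
        f, g = last, diff(last, x)           # new prs for the gcd and its derivative
```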
recursive polynomial remainder sequence and the nested subresultants
we give two new expressions of subresultants, the nested subresultant and the reduced nested subresultant, for the recursive polynomial remainder sequence (prs) introduced by the author. the reduced nested subresultant drastically reduces the size of the subresultant matrix compared with the recursive subresultant previously proposed by the authors, hence it is much more useful for investigating the recursive prs. finally, we discuss the usage of the reduced nested subresultant in approximate algebraic computation, which motivates the present work.
recursive polynomial remainder sequence and its subresultants
we introduce the concepts of "recursive polynomial remainder sequence (prs)" and "recursive subresultant," and investigate their properties. a recursive prs is defined as a sequence of prss calculated "recursively": whenever the initial polynomials have a non-constant gcd (greatest common divisor), a new prs is calculated for the gcd and its derivative, until a constant is derived. recursive subresultants are defined as determinants representing the coefficients in the recursive prs as functions of the coefficients of the initial polynomials. we give three different constructions of subresultant matrices for recursive subresultants; while the first is built up just from previously defined matrices, so that the size of the matrix increases quickly as the recursion deepens, the last one reduces the size of the matrix drastically by gaussian elimination on the second, which has a "nested" expression, i.e. a sylvester matrix whose elements are themselves determinants.
drawing (complete) binary tanglegrams: hardness, approximation, fixed-parameter tractability
a \emph{binary tanglegram} is a drawing of a pair of rooted binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. for applications, for example, in phylogenetics, it is essential that both trees are drawn without edge crossings and that the inter-tree edges have as few crossings as possible. it is known that finding a tanglegram with the minimum number of crossings is np-hard and that the problem is fixed-parameter tractable with respect to that number. we prove that under the unique games conjecture there is no constant-factor approximation for binary trees. we show that the problem is np-hard even if both trees are complete binary trees. for this case we give an $o(n^3)$-time 2-approximation and a new, simple fixed-parameter algorithm. we show that the maximization version of the dual problem for binary trees can be reduced to a version of maxcut for which the algorithm of goemans and williamson yields a 0.878-approximation.
on convergence-sensitive bisimulation and the embedding of ccs in timed ccs
we propose a notion of convergence-sensitive bisimulation that is built just over the notions of (internal) reduction and of (static) context. in the framework of timed ccs, we characterise this notion of `contextual' bisimulation via the usual labelled transition system. we also remark that it provides a suitable semantic framework for a fully abstract embedding of untimed processes into timed ones. finally, we show that the notion can be refined to include sensitivity to divergence.
unsatisfiable cnf formulas need many conflicts
a pair of clauses in a cnf formula constitutes a conflict if there is a variable that occurs positively in one clause and negatively in the other. a cnf formula without any conflicts is satisfiable. the lovasz local lemma implies that a k-cnf formula is satisfiable if each clause conflicts with at most 2^k/e-1 clauses. it does not, however, give any good bound on how many conflicts an unsatisfiable formula has globally. we show here that every unsatisfiable k-cnf formula requires 2.69^k conflicts and there exist unsatisfiable k-cnf formulas with 3.51^k conflicts.
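the notion of conflict used above is easy to make concrete; the following python helper counts conflicting clause pairs for clauses given in dimacs style (signed integers for literals). this is illustrative only and plays no role in the paper's bounds.

```python
# clauses as sets of signed integers (dimacs style); two clauses conflict if
# some variable occurs positively in one and negatively in the other.
from itertools import combinations

def conflicts(c1, c2):
    return any(-lit in c2 for lit in c1)

def total_conflicts(cnf):
    """number of conflicting clause pairs in the formula."""
    return sum(conflicts(a, b) for a, b in combinations(cnf, 2))

# (x1 or x2), (not x1 or x3), (x2 or x3): exactly one conflicting pair
print(total_conflicts([{1, 2}, {-1, 3}, {2, 3}]))   # -> 1
```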
performance of ldpc codes under faulty iterative decoding
departing from traditional communication theory where decoding algorithms are assumed to perform without error, a system where noise perturbs both computational devices and communication channels is considered here. this paper studies limits in processing noisy signals with noisy circuits by investigating the effect of noise on standard iterative decoders for low-density parity-check codes. concentration of decoding performance around its average is shown to hold when noise is introduced into message-passing and local computation. density evolution equations for simple faulty iterative decoders are derived. in one model, computing nonlinear estimation thresholds shows that performance degrades smoothly as decoder noise increases, but arbitrarily small probability of error is not achievable. probability of error may be driven to zero in another system model; the decoding threshold again decreases smoothly with decoder noise. as an application of the methods developed, an achievability result for reliable memory systems constructed from unreliable components is provided.
improving classical authentication with quantum communication
we propose a quantum-enhanced protocol to authenticate classical messages, with improved security with respect to the classical scheme introduced by brassard in 1983. in that protocol, the shared key is the seed of a pseudo-random generator (prg) and a hash function is used to create the authentication tag of a public message. we show that a quantum encoding of secret bits offers more security than the classical xor function introduced by brassard. furthermore, we establish the relationship between the bias of a prg and the amount of information about the key that the attacker can retrieve from a block of authenticated messages. finally, we prove that quantum resources can improve both the secrecy of the key generated by the prg and the secrecy of the tag obtained with a hidden hash function.
a random search framework for convergence analysis of distributed beamforming with feedback
the focus of this work is on the analysis of transmit beamforming schemes with a low-rate feedback link in wireless sensor/relay networks, where nodes in the network need to implement beamforming in a distributed manner. specifically, the problem of distributed phase alignment is considered, where neither the transmitters nor the receiver has perfect channel state information, but there is a low-rate feedback link from the receiver to the transmitters. in this setting, a framework is proposed for systematically analyzing the performance of distributed beamforming schemes. to illustrate the advantage of this framework, a simple adaptive distributed beamforming scheme that was recently proposed by mudambai et al. is studied. two important properties for the received signal magnitude function are derived. using these properties and the systematic framework, it is shown that the adaptive distributed beamforming scheme converges both in probability and in mean. furthermore, it is established that the time required for the adaptive scheme to converge in mean scales linearly with respect to the number of sensor/relay nodes.
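the adaptive scheme analysed above can be simulated in a few lines: each node perturbs its phase slightly at random, and the receiver feeds back one bit indicating whether the received signal strength improved. the perturbation distribution, channel model and step size below are assumptions for illustration, not the paper's exact setup.

```python
# one-bit-feedback phase alignment: perturb all phases slightly, keep the
# perturbation only if the received signal strength improved. channel model,
# perturbation width and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                          # number of sensor/relay nodes
h = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # unknown unit-gain channel phases
theta = np.zeros(n)                             # transmit phases, adapted over time

def rss(phases):
    """received signal magnitude for the given transmit phase vector."""
    return np.abs(np.sum(h * np.exp(1j * phases)))

best = rss(theta)
for _ in range(5000):
    trial = theta + rng.uniform(-0.1, 0.1, n)   # small random perturbation
    if rss(trial) > best:                       # receiver feeds back a single bit
        theta, best = trial, rss(trial)
print(best / n)                                 # approaches 1 as the phases align
```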
a computer verified theory of compact sets
compact sets in constructive mathematics capture our intuition of what computable subsets of the plane (or any other complete metric space) ought to be. a good representation of compact sets provides an efficient means of creating and displaying images with a computer. in this paper, i build upon existing work about complete metric spaces to define compact sets as the completion of the space of finite sets under the hausdorff metric. this definition allowed me to quickly develop a computer verified theory of compact sets. i applied this theory to compute provably correct plots of uniformly continuous functions.
stabilizing tiny interaction protocols
in this paper we present a self-stabilizing implementation of a class of token-based algorithms. in the current work we only consider interactions between weak nodes: they are uniform, they do not have unique identifiers, they are static, and their interactions are restricted to a subset of nodes called neighbours. while interacting, a pair of neighbouring nodes may create mobile agents (which in the current work materialize the token abstraction) that perform traversals of the network and accelerate the system stabilization. in this work we only explore the power of oblivious stateless agents. our work shows that the agent paradigm is an elegant distributed tool for achieving self-stabilization in tiny interaction protocols (tip). nevertheless, in order to reach the full power of classical self-stabilizing algorithms, more complex classes of agents have to be considered (e.g. agents with memory, identifiers or communication skills). interestingly, our work proposes for the first time a model that unifies the recent studies on mobile robots (agents) that evolve in a discrete space and the already established population protocols paradigm.
"minesweeper" and spectrum of discrete laplacians
the paper is devoted to a problem inspired by the "minesweeper" computer game. it is shown that certain configurations of open cells guarantee the existence and the uniqueness of solution. mathematically the problem is reduced to some spectral properties of discrete differential operators. it is shown how the uniqueness can be used to create a new game which preserves the spirit of "minesweeper" but does not require a computer.
computational approaches to measuring the similarity of short contexts : a review of applications and methods
measuring the similarity of short written contexts is a fundamental problem in natural language processing. this article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. the goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. the axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). the unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
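as a concrete (and deliberately minimal) example of first-order similarity, the sketch below compares two contexts by the cosine similarity of their bag-of-words vectors; second-order methods would instead compare the co-occurrence profiles of the words in each context. the whitespace tokenization and example sentences are illustrative assumptions.

```python
# first-order similarity: direct word overlap via cosine similarity of
# bag-of-words vectors; whitespace tokenization is an illustrative assumption.
from collections import Counter
from math import sqrt

def cosine(context1, context2):
    a, b = Counter(context1.split()), Counter(context2.split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine("the bank raised interest rates", "interest rates fell at the bank"))
```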
the heap lambda machine
this paper introduces a new machine architecture for evaluating lambda expressions using the normal-order reduction, which guarantees that every lambda expression will be evaluated if the expression has its normal form and the system has enough memory. the architecture considered here operates using heap memory only. lambda expressions are represented as graphs, and all algorithms used in the processing unit of this machine are non-recursive.
concept-oriented programming
object-oriented programming (oop) is aimed at describing the structure and behaviour of objects by hiding the mechanism of their representation and access in primitive references. in this article we describe an approach, called concept-oriented programming (cop), which focuses on modelling references, assuming that they also possess application-specific structure and behaviour accounting for a great deal or even most of the overall program complexity. references in cop are completely legalized and get the same status as objects, while functions are distributed among both objects and references. in order to support this design we introduce a new programming construct, called a concept, which generalizes conventional classes, and a concept inclusion relation, which generalizes class inheritance. the main advantage of cop is that it allows programmers to describe two sides of any program: the explicitly used functions of objects and the intermediate functionality of references, which has a cross-cutting nature and is executed implicitly behind the scenes during object access.
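a loose python analogy for the motivation above is a proxy "reference" that carries its own state and intercepts every access, adding cross-cutting behaviour such as logging; this is only an illustration of the idea that references can have application-specific behaviour, not an implementation of the cop concept construct.

```python
# a proxy "reference" with its own state and behaviour on every access;
# an analogy for references carrying cross-cutting functionality, not cop itself.
class Ref:
    def __init__(self, target, label=""):
        self._target, self._label = target, label         # reference-level state
    def __getattr__(self, name):
        print(f"[ref {self._label}] access to {name!r}")  # implicit behind-the-scenes behaviour
        return getattr(self._target, name)

class Account:
    def __init__(self, balance):
        self.balance = balance

acc = Ref(Account(100), label="audited")
print(acc.balance)     # the access is routed through the reference's behaviour
```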
ams without 4-wise independence on product domains
in their seminal work, alon, matias, and szegedy introduced several sketching techniques, including showing that 4-wise independence is sufficient to obtain good approximations of the second frequency moment. in this work, we show that their sketching technique can be extended to product domains $[n]^k$ by using the product of 4-wise independent functions on $[n]$. our work extends that of indyk and mcgregor, who showed the result for $k = 2$. their primary motivation was the problem of identifying correlations in data streams. in their model, a stream of pairs $(i,j) \in [n]^2$ arrive, giving a joint distribution $(x,y)$, and they find approximation algorithms for how close the joint distribution is to the product of the marginal distributions under various metrics, which naturally corresponds to how close $x$ and $y$ are to being independent. by using our technique, we obtain a new result for the problem of approximating the $\ell_2$ distance between the joint distribution and the product of the marginal distributions for $k$-ary vectors, instead of just pairs, in a single pass. our analysis gives a randomized algorithm that is a $(1 \pm \epsilon)$ approximation (with probability $1-\delta$) that requires space logarithmic in $n$ and $m$ and proportional to $3^k$.
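a hedged sketch of the estimator described above: two independent (approximately) 4-wise independent +/-1 hashes on [n] are combined by taking their product on a pair (i, j). the polynomial-parity hash and the single counter (no median-of-means amplification) are simplifying assumptions for illustration; exact sign constructions would work over a field of characteristic 2.

```python
# ams-style second-moment sketch for pairs (i, j) in [n]^2: the sign of a pair
# is the product of two independent, approximately 4-wise independent signs.
import random

P = 2**31 - 1                      # a prime larger than the domain size

def fourwise_sign():
    a, b, c, d = (random.randrange(P) for _ in range(4))
    def h(x):
        v = (((a * x + b) * x + c) * x + d) % P   # degree-3 polynomial mod p
        return 1 if v % 2 == 0 else -1            # fold to an (approximate) +/-1 value
    return h

class PairSketch:
    def __init__(self):
        self.hx, self.hy, self.z = fourwise_sign(), fourwise_sign(), 0
    def update(self, i, j, count=1):
        # the sign of the pair (i, j) is the product of the two 1-d signs
        self.z += self.hx(i) * self.hy(j) * count
    def estimate(self):
        return self.z ** 2         # estimates the second frequency moment
```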
myopic coding in multiterminal networks
this paper investigates the interplay between cooperation and achievable rates in multi-terminal networks. cooperation refers to the process of nodes working together to relay data toward the destination. there is an inherent tradeoff between achievable information transmission rates and the level of cooperation, which is determined by how many nodes are involved and how the nodes encode/decode the data. we illustrate this trade-off by studying information-theoretic decode-forward based coding strategies for data transmission in multi-terminal networks. decode-forward strategies are usually discussed in the context of omniscient coding, in which all nodes in the network fully cooperate with each other, both in encoding and decoding. in this paper, we investigate myopic coding, in which each node cooperates with only a few neighboring nodes. we show that achievable rates of myopic decode-forward can be as large as that of omniscient decode-forward in the low snr regime. we also show that when each node has only a few cooperating neighbors, adding one node into the cooperation increases the transmission rate significantly. furthermore, we show that myopic decode-forward can achieve non-zero rates as the network size grows without bound.
a novel mathematical model for the unique shortest path routing problem
link weights are the principal parameters of shortest path routing protocols, the most commonly used protocols for ip networks. the problem of optimally setting link weights for unique shortest path routing is addressed. due to the complexity of the constraints involved, it is challenging to formulate the problem properly so that a solution algorithm can be developed that proves more efficient than those already in existence. in this paper, a novel complete formulation with a polynomial number of constraints is first introduced and then mathematically proved to be correct. it is further illustrated that the formulation has advantages over a prior one in terms of both constraint structure and model size for a proposed decomposition method to solve the problem.
graph kernels
we present a unified framework to study graph kernels, special cases of which include the random walk graph kernel \citep{gaeflawro03,borongschvisetal05}, marginalized graph kernel \citep{kastsuino03,kastsuino04,mahuedakuperetal04}, and geometric kernel on graphs \citep{gaertner02}. through extensions of linear algebra to reproducing kernel hilbert spaces (rkhs) and reduction to a sylvester equation, we construct an algorithm that improves the time complexity of kernel computation from $o(n^6)$ to $o(n^3)$. when the graphs are sparse, conjugate gradient solvers or fixed-point iterations bring our algorithm into the sub-cubic domain. experiments on graphs from bioinformatics and other application domains show that it is often more than a thousand times faster than previous approaches. we then explore connections between diffusion kernels \citep{konlaf02}, regularization on graphs \citep{smokon03}, and graph kernels, and use these connections to propose new graph kernels. finally, we show that rational kernels \citep{corhafmoh02,corhafmoh03,corhafmoh04} when specialized to graphs reduce to the random walk graph kernel.
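the geometric random walk kernel underlying the framework above can be written down compactly: with w_x the adjacency matrix of the direct (kronecker) product graph, k(g1, g2) = q^T (i - lambda * w_x)^{-1} p. the dense numpy solve below is the naive computation the paper improves upon; the graph sizes, the value of lambda and the uniform start/stop distributions are illustrative assumptions.

```python
# random walk graph kernel via the kronecker product graph, computed naively
# with a dense linear solve; sylvester-equation and sparse solvers (as in the
# paper) reduce this cost substantially.
import numpy as np

def random_walk_kernel(a1, a2, lam=0.01):
    wx = np.kron(a1, a2)                          # adjacency of the product graph
    n = wx.shape[0]
    p = np.full(n, 1.0 / n)                       # uniform starting distribution
    q = np.ones(n)                                # uniform stopping weights
    x = np.linalg.solve(np.eye(n) - lam * wx, p)  # geometric series of walks
    return q @ x

a1 = np.array([[0, 1], [1, 0]], float)            # two tiny example graphs
a2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(random_walk_kernel(a1, a2))
```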
virtual transmission method, a new distributed algorithm to solve sparse linear system
in this paper, we propose a new parallel algorithm which works naturally on parallel computers with an arbitrary number of processors. this algorithm is named the virtual transmission method (vtm). its physical background is the lossless transmission line and the microwave network. the basic idea of vtm is to insert lossless transmission lines into the sparse linear system to achieve distributed computing. vtm is proved to be convergent when solving spd linear systems. a preconditioning method and a performance model are presented. numerical experiments show that vtm is efficient, accurate and stable. alongside vtm, we introduce a new technique to partition the symmetric linear system, named generalized node & branch tearing (gnbt). it is based on kirchhoff's current law from circuit theory. we prove that gnbt can partition any spd linear system.
a new characteristic property of rich words
originally introduced and studied by the third and fourth authors together with j. justin and s. widmer in arxiv:0801.1656, rich words constitute a new class of finite and infinite words characterized by containing the maximal number of distinct palindromes. several characterizations of rich words have already been established. a particularly nice characteristic property is that all 'complete returns' to palindromes are palindromes. in this note, we prove that rich words are also characterized by the property that each factor is uniquely determined by its longest palindromic prefix and its longest palindromic suffix.
polygon exploration with time-discrete vision
with the advent of autonomous robots with two- and three-dimensional scanning capabilities, classical visibility-based exploration methods from computational geometry have gained in practical importance. however, real-life laser scanning of useful accuracy does not allow the robot to scan continuously while in motion; instead, it has to stop each time it surveys its environment. this requirement was studied by fekete, klein and nuechter for the subproblem of looking around a corner, but until now has not been considered in an online setting for whole polygonal regions. we give the first algorithmic results for this problem, which combines stationary art gallery-type aspects with watchman-type issues in an online scenario: we demonstrate that even for orthoconvex polygons, a competitive strategy can be achieved only for limited aspect ratio a (the ratio of the maximum and minimum edge length of the polygon), i.e., for a given lower bound on the size of an edge; we give a matching upper bound by providing an o(log a)-competitive strategy for simple rectilinear polygons, under the assumption that each edge of the polygon has to be fully visible from some scan point.
algorithms for dynamic spectrum access with learning for cognitive radio
we study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed markov decision process (pomdp). a group of cognitive users cooperatively tries to exploit vacancies in primary (licensed) channels whose occupancies follow a markovian evolution. we first consider the scenario where the cognitive users have perfect knowledge of the distribution of the signals they receive from the primary users. for this problem, we obtain a greedy channel selection and access policy that maximizes the instantaneous reward, while satisfying a constraint on the probability of interfering with licensed transmissions. we also derive an analytical universal upper bound on the performance of the optimal policy. through simulation, we show that our scheme achieves good performance relative to the upper bound and improved performance relative to an existing scheme. we then consider the more practical scenario where the exact distribution of the signal from the primary is unknown. we assume a parametric model for the distribution and develop an algorithm that can learn the true distribution, still guaranteeing the constraint on the interference probability. we show that this algorithm outperforms the naive design that assumes a worst case value for the parameter. we also provide a proof for the convergence of the learning algorithm.
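a toy sketch of the greedy belief-based selection idea: maintain a belief that each primary channel is vacant, propagate it through the markov transition probabilities, and access the channel with the highest belief only when the interference risk is below a threshold. the transition probabilities, threshold, reward and perfect-sensing step below are illustrative assumptions, not the paper's model.

```python
# greedy belief-based channel selection for a two-state (busy/vacant) markov
# channel model; all numbers and the perfect-sensing step are illustrative.
import numpy as np

n_channels = 4
p01, p10 = 0.2, 0.3                    # busy -> vacant and vacant -> busy probabilities
eps = 0.05                             # maximum tolerated interference probability
belief = np.full(n_channels, 0.5)      # belief that each channel is vacant

def select_and_sense(belief, true_state):
    belief = belief * (1 - p10) + (1 - belief) * p01    # markov prediction step
    k = int(np.argmax(belief))                          # greedy: most-likely-vacant channel
    access = belief[k] >= 1 - eps                       # respect the interference constraint
    reward = 1.0 if (access and true_state[k] == 1) else 0.0
    belief[k] = float(true_state[k])                    # perfect sensing of the chosen channel
    return belief, k, reward
```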
the transport capacity of a wireless network is a subadditive euclidean functional
the transport capacity of a dense ad hoc network with n nodes scales like \sqrt(n). we show that the transport capacity divided by \sqrt(n) approaches a non-random limit with probability one when the nodes are i.i.d. distributed on the unit square. we prove that the transport capacity under the protocol model is a subadditive euclidean functional and use the machinery of subadditive functions in the spirit of steele to show the existence of the limit.
finding paths of length k in o*(2^k) time
we give a randomized algorithm that determines if a given graph has a simple path of length at least k in o(2^k poly(n,k)) time.
an image processing analysis of skin textures
colour and coarseness of skin are visually different. when image processing is involved in skin analysis, it is important to quantitatively evaluate such differences using texture features. in this paper, we discuss texture analysis and measurements based on a statistical approach to pattern recognition. grain size and anisotropy are evaluated with appropriate diagrams. the possibility of determining the presence of pattern defects is also discussed.
stay by thy neighbor? social organization determines the efficiency of biodiversity markets with spatial incentives
market-based conservation instruments, such as payments, auctions or tradable permits, are environmental policies that create financial incentives for landowners to engage in voluntary conservation on their land. but what if ecological processes operate across property boundaries and land use decisions on one property influence ecosystem functions on neighboring sites? this paper examines how to account for such spatial externalities when designing market-based conservation instruments. we use an agent-based model to analyze different spatial metrics and their implications on land use decisions in a dynamic cost environment. the model contains a number of alternative submodels which differ in incentive design and social interactions of agents, the latter including coordinating as well as cooperating behavior of agents. we find that incentive design and social interactions have a strong influence on the spatial allocation and the costs of the conservation market.
mathematical structure of quantum decision theory
one of the most complex systems is the human brain, whose formalized functioning is characterized by decision theory. we present a "quantum decision theory" of decision making, based on the mathematical theory of separable hilbert spaces. this mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to explain a variety of interesting fallacies and anomalies that have been reported to particularize the decision making of real human beings. the theory describes entangled decision making, non-commutativity of subsequent decisions, and intention interference of composite prospects. we demonstrate how the violation of savage's sure-thing principle (disjunction effect) can be explained as a result of the interference of intentions when making decisions under uncertainty. the conjunction fallacy is also explained by the presence of the interference terms. we demonstrate that all known anomalies and paradoxes, documented in the context of classical decision theory, are reducible to just a few mathematical archetypes, all of which find straightforward explanations in the framework of the developed quantum approach.
uniqueness of certain polynomials constant on a line
we study a question with connections to linear algebra, real algebraic geometry, combinatorics, and complex analysis. let $p(x,y)$ be a polynomial of degree $d$ with $n$ positive coefficients and no negative coefficients, such that $p=1$ when $x+y=1$. a sharp estimate $d \leq 2n-3$ is known. in this paper we study the $p$ for which equality holds. we prove some new results about the form of these "sharp" polynomials. using these new results and using two independent computational methods we give a complete classification of these polynomials up to $d=17$. the question is motivated by the problem of classification of cr maps between spheres in different dimensions.
computing the nucleolus of weighted voting games
weighted voting games (wvg) are coalitional games in which an agent's contribution to a coalition is given by his weight, and a coalition wins if its total weight meets or exceeds a given quota. these games model decision-making in political bodies as well as collaboration and surplus division in multiagent domains. the computational complexity of various solution concepts for weighted voting games has received a lot of attention in recent years. in particular, elkind et al. (2007) studied the complexity of stability-related solution concepts in wvgs, namely, of the core, the least core, and the nucleolus. while they completely characterized the algorithmic complexity of the core and the least core, for the nucleolus they only provided an np-hardness result. in this paper, we solve an open problem posed by elkind et al. by showing that the nucleolus of wvgs, and, more generally, of k-vector weighted voting games with fixed k, can be computed in pseudopolynomial time, i.e., there exists an algorithm that correctly computes the nucleolus and runs in time polynomial in the number of players and the maximum weight. in doing so, we propose a general framework for computing the nucleolus, which may be applicable to a wider class of games.
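for concreteness, a weighted voting game is trivial to encode: a coalition wins iff its total weight meets the quota, as in the python snippet below. the weights and quota are made-up examples; the paper's pseudopolynomial nucleolus algorithm is far more involved and is not shown.

```python
# a weighted voting game: a coalition wins iff its total weight meets the quota.
from itertools import combinations

def wvg_value(coalition, weights, quota):
    return 1 if sum(weights[i] for i in coalition) >= quota else 0

weights, quota = [4, 3, 2, 1], 6                        # made-up example game
players = range(len(weights))
winning = [c for r in range(len(weights) + 1)
           for c in combinations(players, r)
           if wvg_value(c, weights, quota)]
print(winning)                                          # all winning coalitions
```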
evolving clustered random networks
we propose a markov chain simulation method to generate simple connected random graphs with a specified degree sequence and level of clustering. the networks generated by our algorithm are random in all other respects and can thus serve as generic models for studying the impacts of degree distributions and clustering on dynamical processes as well as null models for detecting other structural properties in empirical networks.
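a simplified markov-chain sketch in the spirit of the method above: start from a simple graph realizing the target degree sequence and repeatedly propose degree-preserving double edge swaps, accepting a swap only if it moves the average clustering toward the target. the acceptance rule, the networkx helpers used, and the omission of connectivity checks are assumptions for illustration, not the paper's exact algorithm.

```python
# degree-preserving double edge swaps, accepted only when they move the
# average clustering toward the target; connectivity checks are omitted.
import networkx as nx

def clustered_random_graph(degree_sequence, target_clustering, steps=2000):
    g = nx.havel_hakimi_graph(degree_sequence)          # simple graph with the right degrees
    for _ in range(steps):
        trial = g.copy()
        try:
            nx.double_edge_swap(trial, nswap=1, max_tries=50)
        except nx.NetworkXException:
            continue
        if abs(nx.average_clustering(trial) - target_clustering) <= \
           abs(nx.average_clustering(g) - target_clustering):
            g = trial                                   # accept the proposed swap
    return g

g = clustered_random_graph([3] * 20, target_clustering=0.3)
print(nx.average_clustering(g))
```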
resource allocation of mu-ofdm based cognitive radio systems under partial channel state information
this paper has been withdrawn by the author due to some errors.
fitness landscape analysis for dynamic resource allocation in multiuser ofdm based cognitive radio systems
this paper has been withdrawn.
the peculiar phase structure of random graph bisection
the mincut graph bisection problem involves partitioning the n vertices of a graph into two disjoint subsets, each containing exactly n/2 vertices, while minimizing the number of "cut" edges with an endpoint in each subset. when considered over sparse random graphs, the phase structure of the graph bisection problem displays certain familiar properties, but also some surprises. it is known that when the mean degree is below the critical value of 2 log 2, the cutsize is zero with high probability. we study how the minimum cutsize increases with mean degree above this critical threshold, finding a new analytical upper bound that improves considerably upon previous bounds. combined with recent results on expander graphs, our bound suggests the unusual scenario that random graph bisection is replica symmetric up to and beyond the critical threshold, with a replica symmetry breaking transition possibly taking place above the threshold. an intriguing algorithmic consequence is that although the problem is np-hard, we can find near-optimal cutsizes (whose ratio to the optimal value approaches 1 asymptotically) in polynomial time for typical instances near the phase transition.
solving the apparent diversity-accuracy dilemma of recommender systems
recommender systems use data on past user preferences to predict possible future likes and interests. a key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. in this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. by tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.
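a generic illustration (not the paper's specific hybrid) of how an accuracy-focused and a diversity-favouring recommender can be blended with a single tuning parameter: normalize both score vectors and interpolate. the score vectors and the normalization below are made-up assumptions.

```python
# blend an accuracy-focused score vector with a diversity-favouring one;
# lam = 1 recovers the accuracy ranking, lam = 0 the diverse one.
import numpy as np

def _norm(v):
    span = np.ptp(v)
    return (v - v.min()) / span if span > 0 else np.zeros_like(v, dtype=float)

def hybrid_scores(acc_scores, div_scores, lam):
    return lam * _norm(acc_scores) + (1 - lam) * _norm(div_scores)

acc = np.array([0.9, 0.7, 0.2, 0.1])    # e.g. similarity/popularity-driven scores
div = np.array([0.1, 0.3, 0.8, 0.6])    # e.g. scores favouring niche objects
print(np.argsort(-hybrid_scores(acc, div, lam=0.5)))   # ranked item indices
```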
n-ary fuzzy logic and neutrosophic logic operators
we extend knuth's 16 boolean binary logic operators to fuzzy logic and neutrosophic logic binary operators. then we generalize them to n-ary fuzzy logic and neutrosophic logic operators using the smarandache codification of the venn diagram and a defined vector neutrosophic law. in this way, new operators in neutrosophic logic/set/probability are built.