Compressed sensing is a fundamental idea in mathematics which exploits the _a priori_ property that a signal of length $N$ is sparse in some domain; together with an appropriately constructed sensing matrix, this property establishes a unique solution for an otherwise underdetermined system of linear equations: the actual solution is the vector from the solution set which has the minimum $\ell_0$ norm. Since finding this vector is an NP-hard problem, we choose instead the solution which minimizes the $\ell_1$ norm. It is observed that minimizing the $\ell_1$ norm gives an accurate solution provided the sensing matrix satisfies the restricted isometry property (RIP). However, if the sparsity of the input signal is not precisely known, but known to lie within a specified range, traditional compressed sensing as such cannot exploit this fact and would need to use the same number of measurements for all sparsity values in this range. In this case, the compressed sensing algorithm has to work taking into account the worst case, which corresponds to the signal being least sparse. For example, if the input signal is a discrete-time digital signal of length 1000 and can have sparsity anywhere between 1 and 25 in the frequency domain, then for compressed sensing to work, one has to design the sensing matrix keeping in mind the worst-case sparsity of 25. For this case, there are up to 25 frequencies in the signal, corresponding to up to 50 complex coefficients (depending upon the locations of those 25 frequency coefficients), and it was experimentally observed to take about 175 measurements for an accurate reconstruction by minimizing the $\ell_1$ norm. Thus, if the input signal had a much smaller sparsity, conventional compressed sensing would still take 175 measurements (since it has been designed for sparsity 25), whereas only 40 measurements would have sufficed; we have unnecessarily used 135 more measurements than needed in this case. In this paper, we propose a novel method called compressed shattering to address this particular issue. The central idea of compressed shattering is to adapt compressed sensing to the specified sparsity range by creating shattered signals which have fixed sparsity, using a filter bank. Our primary aim is to reduce the number of measurements.

The problem is stated as follows. The input signal is a discrete-time digital signal of length $N$ which needs to be sensed. It is sparse within a range $a_1 < k \leq a_2$, and the $b$-th filter of the bank covers the frequency band
$$\frac{(b-1)\,N}{2T} \;\leq\; k \;<\; \frac{b\,N}{2T}.$$
[...] The spectrum of the output of the $j$-th significant filter is reconstructed as
$$\hat{X}_j(k) \;=\; \left\{\begin{array}{ll} Y_j, & k = \alpha_j,\\ Y_j^{\ast}, & k = N - \alpha_j,\ \alpha_j \neq 0,\\ 0, & \text{elsewhere,}\end{array}\right.$$
where $Y_j^{\ast}$ is the complex conjugate of $Y_j$ and $\hat{X}_j$ is the reconstructed spectrum of the output of the $j$-th filter. Summing up all the respective reconstructed spectra of the significant filters gives the reconstructed version of the original signal spectrum, $\hat{X}$ (refer to Fig. [reconstruction_compress_filtering]). By taking the inverse DFT of $\hat{X}$ we get $\hat{x}$, which represents the reconstructed version of the original time-domain signal. To summarize, compressed shattering has four steps in the following order: the input signal is 1) permuted, 2) passed through a filter bank, 3) de-permuted, and 4) finally sensed by a sensing matrix. There will be $T$ such paths corresponding to the $T$ filters; however, only a few will be significant (refer to Fig. [forward_compresss_filtering]).
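To make these four steps concrete, here is a minimal Python sketch on a complex-valued test signal with a clustered spectrum. The multiplicative congruential index map (a special case of an LCG), the ideal band masks, and the two-sample sensing rule for a 1-sparse spectrum are simplified stand-ins for the constructions in the paper, not the paper's exact design.

```python
import numpy as np

N, T = 256, 8
a = 173                               # multiplier coprime to N (stand-in LCG)
a_inv = pow(a, -1, N)
n = np.arange(N)

# clustered spectrum: three adjacent frequencies
true = {10: 1.0, 11: 0.8, 12: 0.5}
x = sum(c * np.exp(2j * np.pi * f * n / N) for f, c in true.items())

# 1) permute time indices: frequency f moves to (a * f) mod N (de-clustering)
p = x[(a * n) % N]
P_spec = np.fft.fft(p)

recovered = {}
for j in range(T):
    # 2) filter bank: ideal mask of width N // T on the permuted spectrum
    mask = np.zeros(N)
    mask[j * N // T:(j + 1) * N // T] = 1.0
    s = np.fft.ifft(P_spec * mask)     # shattered signal, <= 1-sparse spectrum

    # 3) de-permute: the single frequency moves back to its original bin
    d = s[(a_inv * n) % N]

    # 4) deterministic sensing of a 1-sparse spectrum: two samples suffice
    y0, y1 = d[0], d[1]
    if abs(y0) > 1e-8:                 # significant path only
        f = int(round(np.angle(y1 / y0) * N / (2 * np.pi))) % N
        recovered[f] = y0              # amplitude at recovered frequency

print(sorted(recovered))               # -> [10, 11, 12]
```

Only the paths whose band captures a scattered frequency yield non-zero measurements, which is exactly the thresholding step described above.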
Since every block is a linear transformation (up to the thresholding block), we can reduce the entire compressed shattering procedure to one single matrix (for each of the $T$ paths). The filtering stage can be expressed as $P^{-1} C_j P$, where $P$ is the permutation matrix, $P^{-1}$ is the inverse permutation matrix, and $C_j$ is the circular convolution matrix corresponding to the $j$-th filter. These matrices can be multiplied to form a single matrix $\Phi$ of size $2T \times N$, with complex entries, that takes the input and transforms it into the measurements corresponding to the filters:
$$\Phi_{2T\times N}\; x_{N \times 1} \;=\; \left[\begin{array}{c} y_1\\ y_2\\ \vdots\\ y_{2T}\end{array}\right]_{2T\times 1}.$$

In this section, we perform numerical simulations to test our proposed algorithm and compare it with conventional compressed sensing. The parameters for comparison are the number of measurements stored and the number of computations. The input signal to the system is a discrete-time real signal of length 1000 and will have sparsity anywhere in the range of 1 to 25 frequencies. We report results for both extreme cases of sparsity. The input signal and its DFT spectrum corresponding to sparsity 25 are shown in Fig. [input_time_f25] and Fig. [input_spectrum_f25], respectively (the other case is omitted owing to space constraints). Table [tab:tableresults] shows the comparison between compressed sensing and compressed shattering in terms of the number of measurements to be stored and the number of additions and multiplications. Although $T = 11$ filters are used in the compressed shattering algorithm, very few shattered signals have significant energy, indicating that most of them are 0-sparse. By choosing a threshold of 0.01 on the energy of the shattered signals, only very few of them are retained as 1-sparse output signals. The measurements for compressed shattering are complex values, whereas compressed sensing yields real measurement values; however, in the table we have indicated the number of real measurements, which means that we have multiplied the number of measurements for compressed shattering by 2. In all cases, we obtained near-perfect reconstruction, since the maximum absolute reconstruction error was negligible. From the table, we can infer that there is a tradeoff between the number of measurements that have to be stored and the computational complexity involved in taking the initial measurements. Only half the number of real values have to be stored in the case of compressed shattering compared to the conventional compressed sensing method, but the computational complexity of the former is a little more than twice that of the latter in terms of both the number of additions and multiplications. This is the price we pay for the reduction in the number of measurements. It should also be noted that the algorithm, as of now, is heavily dependent on the permutation we choose; if we choose the wrong permutation, the algorithm might fail because one of the filters might pick up more than one frequency. A plot of the number of measurements stored versus the sparsity is shown in Fig. [measurment_vs_m_n16384], for $N = 16384$. The flexibility of compressed shattering with respect to the sparsity range is evident when compared to traditional compressed sensing, and thus results in huge gains, especially when the sparsity is small.

We have proposed compressed shattering, a novel way of extending compressed sensing when the sparsity of the input signal is within a specified range. The idea of using a linear congruential generator on the discrete-time indices helps to randomize the frequency components, and thus to de-cluster the spectrum. This is then exploited by creating 1-sparse signals by means of a filter bank. Reconstruction is very fast owing to a simple deterministic sensing matrix that we have proposed for 1-sparse signals. It is conceivable that a more sophisticated PRNG could be used to de-cluster the spectrum even more efficiently. Compressed shattering outperforms traditional compressed sensing in terms of the number of measurements that need to be stored, but at the cost of increased computation. Future research directions include studying compressed shattering in the presence of noise, finding optimal choices for the design parameters, an enhanced PRNG, and a faster algorithm for generating shattered signals.

A. C. Gilbert, S. Muthukrishnan, M. Strauss, "Improved time bounds for near-optimal sparse Fourier representations," in Proc. SPIE Wavelets XI, M. Papadakis, A. F. Laine, and M. A. Unser, Eds., San Diego, CA, 2005.
The central idea of compressed sensing is to exploit the fact that most signals of interest are sparse in some domain and to use this to reduce the number of measurements needed to encode them. However, if the sparsity of the input signal is not precisely known, but known to lie within a specified range, compressed sensing as such cannot exploit this fact and would need to use the same number of measurements even for a very sparse signal. In this paper, we propose a novel method called compressed shattering to adapt compressed sensing to the specified sparsity range, without changing the sensing matrix, by creating shattered signals which have fixed sparsity. This is accomplished by first suitably permuting the input spectrum and then using a filter bank to create fixed-sparsity shattered signals. By ensuring that all the shattered signals are at most 1-sparse, we make use of a simple but efficient deterministic sensing matrix to yield a very low number of measurements. For a discrete-time signal of length 1000 with a sparsity range of 1 to 25, traditional compressed sensing requires 175 measurements, whereas compressed shattering would only need about half as many.
there are several known data structures that answer distance queries in planar graphs .we survey them below .all of these data structures use the following basic idea .they split the graph into _ pieces _ , where each piece is connected to the rest of the graph only through its _boundary vertices_. then , every path can go from one piece to another only through these boundary vertices .the different data structures find different efficient ways to store or compute the distance between two boundary vertices or between a boundary vertex and a non - boundary vertex .frederickson gave the first data structures that can answer distance queries in planar graphs fast .he gave a data structure of linear size with preprocessing time that can find the shortest path tree rooted at any vertex in time , where is the number of vertices in the graph .this leads also to an solution to the all - pairs shortest - paths problem , and implies a distance query data structure of size with query time .feuerstein and marchetti - spaccamela modified the data structure of and showed how to decrease the time of a distance query by increasing the preprocessing time .they do not provide an analysis of their data structure in terms of preprocessing time , storage space , and query time , but they show the total running time of queries , which is , , , for , , , , respectively .this solution actually consists of three different data structures for the three cases , and , where the data structure for the first case is the one of .henzinger , klein , rao and subramanian gave an algorithm for the single - source shortest path problem .this implies a trivial distance query data structure , which uses the algorithm , and takes space and query time .djidjev gave three data structures .we will use the specific section number in 3 , 4 , or 5 , to refer to each one of them .the first one ( * ? ? ?* ( 3 ) ) works for ] and has size , preprocessing time , and query time .the third data structure ( * ? ? ? * ( 5 ) ) works for ] with query time , however the same data structure works for a larger range of , and the running time is actually .[ftnt : d96 ] ] chen and xu presented a data structure with the same time and space bounds ., the bounds here are derived by setting in the bounds that appear below lemma 28 ( page 477 ) of ; the bounds stated in depend on the minimum number of faces required to cover all vertices of the graph ( is called _ face - on - vertex covering _ ) , these bounds are obtained using hammock decomposition , which can be applied to any planar distance data structure.[ftnt : cx00 ] ] fakcharoenphol and rao gave a data structure with space and preprocessing time and query time .klein improved the preprocessing time of the data structure to .cabello presented a data structure that uses space and can be constructed in time for ] , a data structure with space , preprocessing time , and query time .this data structure matches the storage space and query time of ( * ? ? ?* ; * ? ? ?* ( 5 ) ) , which is the best query time for this range of storage space , with a better preprocessing time for . the data structure is obtained by combining a preprocessing algorithm similar to with a data structure similar to ( * ? ? ?* ( 5 ) ). * section [ sec : c ] : for ] .a vertex of is an _ internal vertex _ of .all distance query data structures mentioned in the introduction decompose the planar graph .they take advantage of the fact that a path can go from one piece to another only through boundary vertices . 
a _ recursive decomposition _ is obtained by starting with itself being the only piece in level 0 of the decomposition . at each level, we split each piece with vertices and boundary vertices that has more than one edge into two pieces , each with at most vertices and at most boundary vertices .we require that the boundary vertices of a piece are also boundary vertices of the subpieces of .each piece in the decomposition has boundary vertices .an _ -decomposition _ is a decomposition of the graph into pieces , each of size at most with boundary vertices .fakcharoenphol and rao showed how to find a recursive decomposition of , such that each piece is connected and has at most a constant number of holes .they use these two properties for their distance algorithm .the construction of the decomposition takes time using space , and is done by recursively applying the separator algorithm of miller .frederickson showed how to find an -decomposition in time and space by recursively applying the separator algorithm of lipton and tarjan .thus , an -decomposition is a limited type of recursive decomposition where we stop the recursion earlier ( when we get to pieces of size ) , and do not store all the levels of the recursion ( we store only the leaves ) .cabello combined the two constructions of ( using instead of ) and constructed an -decomposition with the properties that the number of holes per piece is bounded by a constant , and that each piece is connected . in sect .[ sec : a ] we use a combination of recursive decomposition and -decomposition we decompose the graph recursively , but we decompose each piece into pieces instead of 2 . in sect .[ sec : b ] we use -decomposition . in sect .[ sec : c ] we use -decomposition as well , there we take advantage of the fact that the construction of an -decomposition is the same as of a recursive decomposition , which was stopped earlier . fakcharoenphol and rao define the _ dense distance graph _ of a recursive decomposition . 
for each piece in the recursive decomposition they add a piece to the dense distance graph that contains the vertices of and for every an edge from to whose length is .the multiple - source shortest paths algorithm of klein finds distances where the sources of all of them are on the same face in time .therefore , using it takes time to find the part of the dense distance graph that corresponds to a piece ( recall that and has a constant number of holes ) .it thus takes time to construct the dense distance graph over all pieces of the recursive decomposition .every single edge defines a piece in the base of the recursion , so it is clear that the distance from to in the dense distance graph is the same as the distance between these two vertices in the original graph .fakcharoenphol and rao noticed that in order to find the distance from to we do not have to search the entire dense distance graph , but that it suffices to consider only edges that correspond to shortest paths between boundary vertices in a limited number of pieces .the pieces are these containing either or , and their siblings in the recursive decomposition .there are such pieces with a total of boundary vertices .fakcharoenphol and rao gave an implementation of dijkstra s algorithm that runs over a subgraph of the dense distance graph with vertices , defined by a partial set of the pieces in the recursive decomposition , in time .this gives the query time of their data structure .we use dense distance graphs in two of our data structures ( sect .[ sec : a ] and [ sec : c ] ) . in both casesit is on a variant of recursive decomposition , as discussed above .a matrix satisfies the _ monge property _ if for every two rows and two columns , satisfies . we can find the minimum element of by transposing , negating and reversing , and using the smawk algorithm for row - maxima on the resulting totally monotone matrix . if we do not store explicitly , but are able to retrieve each entry in time this takes time . note that if we add a constant to an entire row or to an entire column of a matrix with the monge property , then the property remains .consider two disjoint sets and of consecutive boundary vertices on a boundary walk of some piece .rank the vertices of from to according to their order around the boundary walk , and rank the vertices of from to according to their order in the opposite direction around the boundary walk . for and , the shortest path from to inside and the shortest path from to inside must cross each other .let be a vertex common to both paths .then , ( see fig . [fig : monge ] ) .therefore , the matrix such that has the monge property . the monge property was first used explicitly for distance queries in planar graphs by . a _ partial matrix _ is a matrix that may have some blank entries . in a _ falling staircase matrix _the non - blank entries are consecutive in each row starting not before the first non - blank entry of the previous row and ending at the end of the row ( see fig .[ fig : stair ] ) , _ inverse falling staircase matrix _ is defined similarly by exchanging the positions of the non - blanks and the blanks .aggarwal and klawe find the minimum of an ( inverse ) falling staircase matrix whose non - blank entries satisfy the monge property in time by filling the blanks with large enough values to create a monge matrix . 
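To make the matrix-searching step concrete: instead of the full SMAWK algorithm cited above, the simpler divide-and-conquer routine below computes all row minima of an implicitly given totally monotone matrix, which suffices for matrices with the Monge property. Blanks of a staircase matrix can be modelled by an entry oracle returning $+\infty$, in the spirit of Aggarwal and Klawe's filling trick; the function names and the toy matrix are illustrative, not the paper's notation.

```python
import math

def monge_row_minima(num_rows, num_cols, entry):
    """Column index of the minimum in every row of an implicitly given
    totally monotone matrix (e.g. one satisfying the Monge property).

    `entry(i, j)` returns the (i, j) entry; a blank of a staircase
    matrix can simply return math.inf.  Uses O((num_rows + num_cols)
    * log num_rows) oracle calls: simpler, though slightly slower,
    than linear-time SMAWK.
    """
    argmin = [0] * num_rows

    def solve(top, bottom, left, right):
        if top > bottom:
            return
        mid = (top + bottom) // 2
        best, best_j = math.inf, left
        for j in range(left, right + 1):   # scan allowed columns of row mid
            v = entry(mid, j)
            if v < best:
                best, best_j = v, j
        argmin[mid] = best_j
        # total monotonicity: leftmost minima columns never decrease
        # as the row index grows, so the search ranges shrink
        solve(top, mid - 1, left, best_j)
        solve(mid + 1, bottom, best_j, right)

    solve(0, num_rows - 1, 0, num_cols - 1)
    return argmin

# toy Monge matrix M[i][j] = (i - j)**2
print(monge_row_minima(6, 6, lambda i, j: (i - j) ** 2))  # -> [0, 1, 2, 3, 4, 5]
```

The global minimum of the matrix is then the smallest of the per-row minima, which is how such a routine is used for the distance queries discussed next.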
in sect .: c ] we use this tool for finding the minimum of two staircase matrices whose non - blank entries satisfy the monge property .in this section we present a data structure with linear space , almost linear preprocessing time , and query time faster than any previous data structure of linear space .we generalize the data structure of fakcharoenphol and rao by combining recursive decomposition of the graph with -decomposition .this is similar to the way that mozes and wulff - nilsen improved the shortest path algorithm of klein , mozes and weimann .mozes and sommer have independently obtained a similar result .we find an -decomposition of into pieces , and then we recursively decompose each piece into subpieces , until we get to pieces with a single edge .the depth of the decomposition is where at level we have pieces , each of size and with boundary vertices .constructing this recursive decomposition takes time .an alternative way to describe this decomposition is to perform a recursive decomposition on while storing only levels for of the recursion tree and the leaves of the recursion ( the pieces containing single edges ) .we compute the dense distance graph for the recursive decomposition , in the same way as in the data structure of .that is , we compute the distance between every pair of boundary vertices in each piece . using the algorithm of klein this takes time for each level , and a total of time .the size of dense distance graph over our recursive decomposition is .when a distance query from to arrives , we use the dijkstra implementation of to answer it .we run the algorithm on the subgraph of the dense distance graph that includes all the pieces that contain either or , and the siblings in the recursive decomposition of each such piece .we require the sibling pieces because the shortest path can get out of a piece into a sibling of without getting out of any piece that contains .therefore , the number of boundary vertices involved in each distance query is .hence the query time using the algorithm of is .we conclude that for a planar graph with vertices and any ] and such that is a boundary vertex of some piece contained in .part ( i ) is from the data structure of cabello .the construction of this part requires time and space per piece . part ( ii ) was used both by djidjev ( * ? ? ? * ( 5 ) ) and by cabello .this is the data structure of ( * ? ? ?* ; * ? ? ?* ( 3 ) ) with , its construction takes time and space per piece . part ( iii ) is from the data structure of ( * ? ? ? * ( 5 ) ) , but we construct it more efficiently .we find the distances for this part using the multiple - source shortest paths algorithm of klein for every boundary walk .the required space per piece for part ( iii ) is and the preprocessing time is .since there are pieces , each with a constant number of holes and boundary vertices , constructing the three parts takes time and space .let be a query pair .we use the data structure of this section to find in time .if and are in the same piece then we find the distance from to using parts ( i ) and ( ii ) of the data structure with the query algorithm of in time ( see details in sect .[ sec : cq ] below ) .if and are in different pieces then we find the distance using parts ( i ) and ( iii ) with the query algorithm of ( * ? ? 
?* ( 5 ) ) in time ( see details in appendix [ apx : b ] ) .we conclude that for a planar graph with vertices and any , we can construct in time a data structure of size that computes the distance between any two vertices in time .the sum minimizes at , and for we get : for a planar graph with vertices and ] and ] , we can construct in time a data structure of size that computes the distance between any two vertices in time . 10 aggarwal , a. , klawe , m. m. , moran , s. , shor , p. , wilber , r. : geometric applications of a matrix - searching algorithm .algorithmica 2 , 195 - 208 ( 1987 ) .aggarwal , a. , klawe , m. : applications of generalized matrix searching to geometric algorithms .discrete appl .27 , 3 - 23 ( 1990 ) .arikati , s. r. , chen , d. z. , chew , l. p. , das , g. , smid , m. h. , zaroliagis , c. d. : planar spanners and approximate shortest path queries among obstacles in the plane . in : proceedings of the fourth annual european symposium on algorithms .lecture notes in computer science , vol . 1136 , pp . 514 - 528springer - verlag ( 1996 ) .bodlaender , h. l. : dynamic algorithms for graphs with treewidth 2 . in : proceedings of the 19th international workshop on graph - theoretic concepts in computer science .lecture notes in computer science , vol .790 , pp .112 - 124 .springer - verlag ( 1994 ) .cabello , s. : many distances in planar graphs .algorithmica ( to appear ) .doi : 10.1007/s00453 - 010 - 9459 - 0 .chaudhuri , s. , zaroliagis , c. d. : shortest paths in digraphs of small treewidth .part i : sequential algorithms .algorithmica 27 , 212 - 226 ( 2000 ) .chen , d. z. : on the all - pairs euclidean short path problem . in : proceedings of the sixth annual acm - siam symposium on discrete algorithms , pp . 292 - 301 .siam , philadelphia ( 1995 ) .chen , d. z. , xu , j. : shortest path queries in planar graphs . in : proceedings of the thirty - second annual acm symposium on theory of computing , pp .469 - 478 .acm , new york ( 2000 ) .djidjev , h. n. : efficient algorithms for shortest path queries in planar digraphs . in : graph - theoretic concepts in computer science .lecture notes in computer science , vol . 1197 , pp. 151 - 165 .springer - verlag ( 1997 ) .djidjev , h. n. , pantziou , g. e. , zaroliagis , c. d. : computing shortest paths and distances in planar graphs . in : proceedings of the 18th international colloquium on automata , languages and programming .lecture notes in computer science , vol .510 , pp .327 - 338 .springer - verlag ( 1991 ) .djidjev , h. n. , pantziou , g. e. , zaroliagis , c. d. : on - line and dynamic algorithms for shortest path problems , in : proceedings of 12th stacs .lecture notes in computer science , vol .900 , pp .193 - 204 .springer - verlag ( 1995 ) .dijdjev , h. n. , venkatesan , s. m. : planarization of graphs embedded on surfaces . in : proceedings of the 21st international workshop on graph - theoretic concepts in computer science .lecture notes in computer science , vol . 1017 , pp .springer - verlag ( 1995 ) .eppstein , d. : subgraph isomorphism in planar graphs and related problems . j. graph algorithms appl . 3 , 1 - 27 ( 1999 ) .fakcharoenphol , j. , rao , s. : planar graphs , negative weight edges , shortest paths , and near linear time .j. comput .72 , 868 - 889 ( 2006 ) .feuerstein , e. , marchetti - spaccamela , a. : dynamic algorithms for shortest paths in planar graphs .116 , 359 - 371 ( 1993 ) .frederickson , g. n. : fast algorithms for shortest paths in planar graphs , with applications .siam j. 
comput .16 , 1004 - 1022 ( 1987 ) .frederickson , g. n. : using cellular graph embeddings in solving all pairs shortest paths problems .j. algorithms 19 , 45 - 85 ( 1995 ) .frederickson , g. n. : searching among intervals and compact routing tables .algorithmica 15 , 448 - 466 ( 1996 ) .henzinger , m. r. , klein , p. , rao , s. , subramanian , s. : faster shortest - path algorithms for planar graphs .j. comput .55 , 3 - 23 ( 1997 ) .hutchinson , j. p. , miller , g. l. : deleting vertices to make graphs of positive genus planar . in: discrete algorithms and complexity theory , pp .81 - 98 . academic press , boston ( 1986 ) .klein , p. : preprocessing an undirected planar network to enable fast approximate distance queries . in : proceedings of the thirteenth annual acm - siam symposium on discrete algorithms , pp .820 - 827 .siam , philadelphia ( 2002 ) .klein , p. n. : multiple - source shortest paths in planar graphs . in : proceedings of the sixteenth annual acm - siam symposium on discrete algorithms , pp .145 - 155 .siam , philadelphia ( 2005 ) .klein , p. n. , mozes , s. , weimann , o. : shortest paths in directed planar graphs with negative lengths : a linear - space -time algorithm .acm trans .algorithms 6 , 1 - 18 ( 2010 ) .kowalik , . ,kurowski , m. : oracles for bounded - length shortest paths in planar graphs .acm trans .algorithms 2 , 335 - 363 ( 2006 ) .lipton , r. j. , tarjan , r. e. : a separator theorem for planar graphs .siam j. on appl .36 , 177 - 189 ( 1979 ) .miller , g. l. : finding small simple cycle separators for 2-connected planar graphs .j. comput .32 , 265 - 279 ( 1986 ) .mozes , s. , sommer , c. : exact distance oracles for planar graphs .arxiv:1011.5549 ( 2010 ) .mozes , s. , wulff - nilsen , c. : shortest paths in planar graphs with real lengths in time . in : algorithms - esa 2010 ,18th annual european symposium .lecture notes in computer science , vol . 6347 , pp . 206 - 217springer ( 2010 ) .sleator , d. d. , tarjan , r. e. : a data structure for dynamic trees . j. comput .26 , 362 - 391 ( 1983 ) .thorup , m. : compact oracles for reachability and approximate distances in planar digraphs . j. acm 51 , 993 - 1024 ( 2004 ) .consider the data structure of sect .[ sec : b ] .let be a query pair , such that is in a piece and is in another piece .the query algorithm that we describe here is similar to the one of ( * ? ? ? * ( 5 ) ) , and the complete details are given there .let be the hole of that contains and let the hole of that contains .denote ] .we assume without loss of generality that contains the infinite face .a shortest path from to contains some vertex and some vertex ( it is possible that ) .we may assume that there is no internal vertex of between and in the shortest path ( since otherwise we can replace with another vertex of ) .therefore , .next we show how to find for every in time . for a fixed vertex it is easy to find that minimizes in time ( the same member of may be for different members of ) , by going over all vertices of and using parts ( iii ) and ( i ) of the data structure. let and be the vertices that minimizes for and .there is a shortest path from to that contains , and similarly a shortest path from to that contains .let be a vertex between and in the clockwise order of starting at .there is a vertex that minimizes located between and in the counterclockwise order of vertices of starting at . 
since otherwise, every shortest path from to must cross the shortest path from or from to that contains or , respectively .assume without loss of generality that the shortest path from to crosses the shortest path from to , and let be the vertex in which the two shortest paths meet .then , if we replace the suffix of the shortest path from that begins at with the suffix of the shortest path from we get a shorter path , this is a contradiction ( see fig . [fig : x3y3 ] ) .this gives the following algorithm for finding for every . .for between and , the vertex is between and , since otherwise we get that every path from to ( _ dashed _ ) crosses either the shortest path from to or from to at a vertex . ]let be two arbitrary vertices of .find and for and by going over all vertices of .let be the middle vertex between and in the clockwise order of starting at .find by going over all vertices of between and in the counterclockwise order of vertices of starting at .continue recursively for the vertices of between and and the vertices of between and , and also for the vertices of between and and the vertices of between and .similarly , find for every between and in the counterclockwise order of starting at .we conclude that we can find for every in time . now , we go over all vertices of , and using part ( i ) of the data structure we find in time .the total query time is .in this appendix we define a cyclic order on the edges incident to a specific vertex in the graph , which is a subgraph of the dense distance graph .we use this order in the preprocessing algorithm of sect .[ sec : cp ] , in order to find for a boundary vertex .we define the order of the edges such that the leftmost shortest paths from to in , and in , both end at the same vertex of ( and are defined in sect . [sec : c ] ) .a vertex of is a boundary vertex of more than one piece , however the order between two edges in two different pieces is clear from the embedding of ( the pieces of are pairwise edge disjoint ) .therefore , here we define the left - to - right order of the edges inside each piece . the left - to - right order of the edges , is in fact a left - to - right order of the boundary vertices , because the edges of a piece in the dense distance graph connect a vertex on the boundary of the piece to all other vertices on the boundary .we define the left - to - right order from to the other boundary vertices of the piece according to the left - to - right order of the leftmost shortest paths from to the other vertices .this order allows us to find as required .the order that we define does not depend on the specific graph , so we perform the procedure described here only once for every boundary vertex of every piece .let be a boundary vertex of a piece .when we compute the distances from to the other vertices of for the dense distance graph , we use the algorithm of klein , which maintains a _ dynamic tree _ that contains the rightmost shortest path from to every vertex of . 
since we are interested in leftmost shortest paths we use a symmetric version of , by replacing the roles of left and right .denote this leftmost shortest path tree rooted at by .let and be two vertices of different from .we show how to decide in time which of the two vertices is to the left of the other , with respect to .let be the nearest common ancestor of and in .we can find and the two edges that lead from it to and to in time from the dynamic tree .first assume that .consider the following three edges incident to in the edge that connects to its parent ( if = then we add a dummy edge inside the hole that lies on its boundary for this purpose ) , the edge that leads from to , and the edge that leads from to .the order of these edges around determine the order between and ( see fig . [fig : compare](a ) ) . now assume without loss of generality that .the vertex lies on the boundary of some hole of , denote this hole by .there are two edges incident to on the boundary of .we can find the two edges when we find the piece .consider the edge that connects to its parent in , the edge that leads from to , and the place of among the edges incident to .if in the clockwise order of edges around starting at the edge that connects to it parent , the edge that leads to is before , then is to the left of , otherwise is to the left of ( see fig . [fig : compare](b ) ) .since we compare two vertices in time , we can use comparison sort to sort the vertices of around the vertex from left to right in time .we repeat the process for each vertex of in a total of time . for the pieces of a single layer of the recursive decomposition ,the total time is . andfor all the pieces of the dense distance graph the process takes time .
There are several known data structures that answer distance queries between two arbitrary vertices in a planar graph. The tradeoff is among preprocessing time, storage space, and query time. In this paper we present three data structures that answer such queries, each with its own advantage over previous data structures. The first one improves the query time of data structures of linear space. The second improves the preprocessing time of data structures with a given space bound or higher, while matching the best known query time. The third data structure improves the query time for a similar range of space bounds, at the expense of a longer preprocessing time. The techniques that we use include modifying the parameters of planar graph decompositions, combining the different advantages of existing data structures, and using the Monge property for finding minimum elements of matrices.
_Oblivious transfer_, _OT_ for short, is a functionality of great importance in cryptography or, more precisely, _secure two-party computation_, where two parties who mutually distrust each other want to collaborate with the objective of achieving a common goal, e.g., evaluate a function to which both hold an input, but without revealing unnecessary information about the latter. In (chosen one-out-of-two bit) OT, one of the parties, the _sender_ A, inputs two bits $b_0$ and $b_1$, whereas the other party B has a _choice_ bit $c$. The latter then learns $b_c$, but remains ignorant about the other message bit. The sender, on the other hand, does not learn any information about $c$. Various ways, based on public-key encryption, for instance, have been proposed for realizing OT, where the security for one of the parties, however, is only computational. In fact, oblivious transfer is impossible to achieve in an unconditionally secure way for both parties, even when they are connected by a quantum channel. On the other hand, it has been shown that unconditionally secure OT can be reduced to weak information-theoretic primitives such as simply a noisy communication channel, or so-called _universal OT_. A recent result shows that OT can be stored: given one realization of OT, a sample of distributed random variables (one part known to A and the other to B) can be generated, where the joint distribution is such that the two parts can later be used to realize an instance of OT. We will call the distributed pair of random variables an _oblivious key_, or OK for short; in some sense, as we will see, this is the _local_ (hidden-variable) part of OT (as opposed to non-local systems and behavior, see Section 1.2). Another consequence, observed in the same work, is that since OK is symmetric, OT is, too. This solved a long-standing open problem.

_Entangled_ but possibly distant two-partite quantum systems can show a joint behavior under measurements that cannot be explained by ``locality'' or hidden variables, i.e., distributed classical information; such behavior is called _non-local_. There exists, for instance, a so-called _maximally entangled_ state with the following properties. If the parties A and B controlling the two parts of the system both choose between two fixed possible bases for measuring their part (where this pair of bases is not the same for the two parties), and the measurement outcome can be 0 or 1 in both cases, then the following statistics are observed. (Here, the two possible bases for each party are called 0 and 1, too.)
$$\begin{aligned}
p_{00} &:= \mathrm{Prob}\,[\,\text{outcome}_A = \text{outcome}_B \mid \text{basis}_A = 0,\ \text{basis}_B = 0\,] \;=\; 0\,,\\
p_{01} &:= \mathrm{Prob}\,[\,\text{outcome}_A = \text{outcome}_B \mid \text{basis}_A = 0,\ \text{basis}_B = 1\,] \;=\; 1/4\,,\\
p_{10} &:= \mathrm{Prob}\,[\,\text{outcome}_A = \text{outcome}_B \mid \text{basis}_A = 1,\ \text{basis}_B = 0\,] \;=\; 1/4\,,\\
p_{11} &:= \mathrm{Prob}\,[\,\text{outcome}_A = \text{outcome}_B \mid \text{basis}_A = 1,\ \text{basis}_B = 1\,] \;=\; 3/4\,.
\end{aligned}$$
It has been shown that such statistics are impossible to achieve between two parties who cannot communicate when they share arbitrary _classical_ information only (i.e., agree on a classical strategy beforehand). More precisely, the so-called CHSH _Bell inequality_ is violated, since with the correlations $E_{ij} = 2p_{ij} - 1$ the above statistics give $|E_{00} + E_{01} + E_{10} - E_{11}| = 5/2 > 2$. It is, on the other hand, important to note that this non-local behavior is ``weaker'' than communication between A and B and does not allow for such; fortunately, since such a possibility would be in contradiction with relativity. With the objective of achieving a better understanding of such ``non-local behavior,'' Popescu and Rohrlich defined a ``non-locality primitive'' behaving in a similar way, but where the probabilities are such that the defining relation holds with certainty. In other words, both parties have an input bit, $x$ and $y$ (corresponding to the choice of the basis in the quantum model), and A and B get an output bit $a$ and $b$, respectively, where $a$ and $b$ are random bits satisfying $a \oplus b = x \wedge y$. It is important to note, however, that the behavior of this ``PR primitive'' cannot, although it does not allow for communication either, be obtained from any quantum state: it violates a ``quantum Bell inequality'' that is valid even for the behavior of quantum states. On the other hand, the primitive _does_ allow for perfectly simulating the behavior of a maximally entangled quantum bit pair under _any_ possible measurement. The latter has been shown possible, for instance, also between parties who may communicate _one_ classical bit, but the possibility of achieving the same with the PR primitive is of particular interest since this functionality does not allow for any communication.

The three _information-theoretic primitives_ or two-party functionalities described in the previous sections can be modeled by their mutual input-output behavior, i.e., by a conditional probability distribution of the two parties' outputs given their inputs (see Figure 1).

[Figure 1: the input-output behavior of the two-party primitives OT, PR, and OK.]

In Section 2, we will show simple perfect and single-copy information-theoretic reductions between the three primitives; in some sense, they are, provocatively speaking, all the same. More precisely, a single-copy reduction of one primitive to another means that the former functionality can be realized given one instance of the latter. Hereby, no computational assumptions have to be made. _Perfect_ means that no non-zero failure probability has to be tolerated. Note, however, that the reduction protocol may use communication; of course, because from a ``communication and locality viewpoint,'' the three primitives are very different: OT allows for communication, PR does not but is non-local, whereas OK is simply distributed classical information, i.e., ``local.''
Although we keep an eye on this communication in the reductions (all our reductions minimize the required amount of communication), our interest is _privacy_: when one primitive is obtained from another, then both parties must not obtain more information than specified for it. In other words, our viewpoint is the one of _cryptography_ rather than of communication-complexity theory. Note that our reductions have the property that a party who is misbehaving in the protocol _cannot_ obtain more information than specified (but may possibly violate the privacy of her own inputs).

[lemma:ot2nl] Using one instance of OT, we can simulate PR. B inputs his PR bit $y$ as his choice bit. A chooses a bit $r$ at random and sends $r$ and $r \oplus x$, with $x$ her PR input. B receives $r \oplus (x \wedge y)$ and outputs it as $b$. A outputs $a = r$. We have $a \oplus b = x \wedge y$.

[lemma:nl2ok] Using one instance of PR, we can simulate OK. A and B choose their inputs $x$ and $y$ at random. A outputs $(x, a)$. B outputs $(y, b)$. We have $a \oplus b = x \wedge y$.

[lemma:ot2ok] Using one instance of OT, we can simulate OK. This follows directly from Lemmas [lemma:ot2nl] and [lemma:nl2ok]. We get the following protocol: A and B choose all their inputs at random. A outputs her inputs, and B outputs his input and his output.

[lemma:nl2ot] Using one instance of PR, we can simulate OT using one bit of communication. A inputs $b_0 \oplus b_1$. B inputs his choice bit $c$. A gets $a$ and B gets $b$. A sends $m = b_0 \oplus a$ to B. B outputs $m \oplus b$. We have $m \oplus b = b_0 \oplus ((b_0 \oplus b_1) \wedge c) = b_c$. Since A does not receive any message from B, she gets no information about $c$. B only receives one bit, which is equal to $b_c \oplus b$. In PR, no communication takes place, but we are able to send one bit using OT. Hence, at least one bit of communication is needed to simulate OT by PR.

[lemma:ok2nl] Using one instance of OK, we can simulate PR using two bits of communication. A sends $x \oplus x'$ to B. B sends $y \oplus y'$ to A. A outputs $a = a' \oplus (x' \wedge (y \oplus y'))$. B outputs $b = b' \oplus ((x \oplus x') \wedge y)$. We have $a \oplus b = x \wedge y$. Hence, they both send their inputs ``xored'' with $x'$ and $y'$, respectively. Since the other party has no information about these values, this is a one-time pad, and they receive no information about the other's input. We show that the two bits of communication are optimal in this case. Let us assume that there exists a protocol using only one-way communication from A to B. Since B can calculate his output for both inputs $y = 0$ and $y = 1$, we have $b_{y=0} \oplus b_{y=1} = x$, and, therefore, B would learn A's input.

Using one instance of OK, we can simulate OT using three bits of communication. This follows directly from Lemmas 4 and 5. Alternatively, we can use the BBCS protocol, which requires three bits of communication as well. Writing $r_0 = a'$ and $r_1 = a' \oplus x'$ (so that B knows exactly $r_{y'} = b'$), B sends $e = c \oplus y'$ to A, whereas A sends $f_0 = b_0 \oplus r_e$ and $f_1 = b_1 \oplus r_{1 \oplus e}$ to B. B outputs $f_c \oplus b' = b_c$. B's message does not give any information about $c$ to A, since it is ``one-time padded'' with the value $y'$, about which A has no information. B knows either $r_0$ or $r_1$ but has no information about the other value. So, either $b_0$ or $b_1$ gets ``one-time padded,'' and B obtains no information about that value, even if he is given the other one. Three bits of communication are optimal: first of all, two-way communication is needed. If A would send fewer than two bits, but still in such a way that B would get the bit he wants, then A would have to know which bit B has chosen.

OT is _a priori_ an asymmetric functionality, and the possibility of inverting its orientation has been investigated, for instance, in earlier work, where a protocol was given using $n$ realizations of OT from A to B (called A-to-B OT) in order to obtain one realization from B to A, where a failure probability exponentially small in $n$ has to be tolerated.
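As a sanity check of Lemmas [lemma:ot2nl] and [lemma:nl2ot], the following toy Python simulation runs both reductions against ideal functionalities; the functions `ot` and `pr_box` and all variable names are illustrative stand-ins, not the notation of the protocols above.

```python
import random

def ot(b0, b1, c):
    """Ideal chosen 1-out-of-2 bit OT: the receiver learns b_c only."""
    return b1 if c else b0

def pr_box(u, v):
    """Ideal PR box: random outputs a, b with a XOR b = u AND v."""
    a = random.randrange(2)
    return a, a ^ (u & v)

def pr_from_ot(x, y):
    """Lemma [lemma:ot2nl]: one PR use from a single OT call.
    A sends r and r XOR x; B's choice bit is his PR input y."""
    r = random.randrange(2)
    a = r                        # A's PR output
    b = ot(r, r ^ x, y)          # B learns r XOR (x AND y)
    return a, b

def ot_from_pr(b0, b1, c):
    """Lemma [lemma:nl2ot]: OT from one PR use plus one bit m."""
    a, b = pr_box(b0 ^ b1, c)
    m = b0 ^ a                   # the single communicated bit
    return m ^ b                 # equals b_c

for b0 in (0, 1):
    for b1 in (0, 1):
        for c in (0, 1):
            assert ot_from_pr(b0, b1, c) == (b1 if c else b0)
for x in (0, 1):
    for y in (0, 1):
        a, b = pr_from_ot(x, y)
        assert a ^ b == x & y
print("both reductions verified")
```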
since , however , pr is a symmetric functionality , our reductions imply that ot is as well .more precisely , the reductions of ot to pr and _ vice versa _ can be put together to the following protocol inverting ot .this reduction of ot to to given in , is single - copy , information - theoretic , perfect , and minimizes the required additional communication .using one instance of , we can simulate using one bit of communication . inputs to . chooses a random bit and inputs and to . receives and sends to . outputs .we have . does not get any message from , so she does not get any information about . get one message by , which is either equal to , if the xor of his input values is , and otherwise .if he does not choose at random , might be able to get the value , but there is no advantage for .the protocol is obviously optimal since can communicate one bit with using |which she can not using .finally , we show that an can easily be reversed , without any communication .using one instance of , we can simulate without any communication . gets and from , and gets and . outputs and , and outputs and .we have and .the primitive can also be defined in a symmetric way : it is the distribution that we get when both and input a random bit to .we have shown a close connection between the important cryptographic functionality of oblivious transfer and quantum non - locality , more precisely , the `` non - locality machine '' of popescu and rohrlich : they are , modulo a small amount of ( classical ) communication , the same|one can be reduced to the other . as a by - product, we have obtained the insight that _ ot is symmetric _: one instance of ot from to allows for the same functionality from to in a perfect information - theoretic sense .figure 2 shows the reductions between the different functionalities discussed above .the ( optimal ) numbers of bits to be communicated are indicated .in has been shown in that the behavior of an epr pair can be perfectly simulated without any communication if one realization of the pr primitive is available .however , this reduction , although it yields the correct statistics with respect to the two parts behavior , is not `` cryptographic '' or `` private '' in the sense of our reductions : the parties are tolerated to obtain more information about the other party s outcome than they would when actually measuring an epr pair .we state as an open problem to simulate , in this stronger sense , the behavior of an epr pair using the pr primitive . c. h. bennett , g. brassard , c. crpeau , and h. skubiszewska , practical quantum oblivious transfer , _ advances in cryptology | proc . of eurocrypt91 _ , lncs , vol . 576 , pp .351366 , springer - verlag , 1992 .
_Oblivious transfer_, a central functionality in modern cryptography, allows a party to send two one-bit messages to another, who can choose one of them to read while remaining ignorant about the other, whereas the sender does not learn the receiver's choice. Oblivious transfer whose security is information-theoretic for both parties is known to be impossible to achieve from scratch. The joint behavior of certain bi-partite quantum states is _non-local_, i.e., cannot be explained by shared classical information. In order to better understand such behavior, which is classically _explainable_ only by communication but does not _allow_ for it, Popescu and Rohrlich have described a ``non-locality machine'': two parties both input a bit, and both get a random output bit the XOR of which is the AND of the input bits. We show a close connection, in a cryptographic sense, between OT and this ``PR primitive.'' More specifically, unconditional OT can be achieved from a single realization of PR, and _vice versa_. Our reductions, which are single-copy, information-theoretic, and perfect, also lead to a simple and optimal protocol allowing for inverting the direction of OT.
mechanical ventilation is the most frequently used life - sustaining intervention in the intensive care unit ( icu ) , where approximately 50% of patients receive ventilatory support . at some point in their management , many patients on mechanical ventilation ( mv ) are described as fighting their ventilator " .this jargonistic expression is used to indicate a mismatch between patient respiratory efforts and ventilator breaths " .this form of disharmony between patient and ventilator results in an increased work of breathing and is a major source of discomfort for the patient with some deleterious effects such as dyspnea and anxiety .dyspnea and anxiety are major drivers of post - traumatic stress disorders frequently observed in patients who survive the icu .therefore , it is crucial to detect patient - ventilator disharmony as early as possible .currently , this relies on monitoring of physiological signals generated indirectly ( pressure , air flow ) or directly ( electromyography ) by the respiratory muscles in response to the descending neural drive to breathe .it is generally assumed that ventilatory support should be adapted to the neural drive to breathe .some approaches address this issue by using the diaphragmatic emg ( e.g. the neurally adjusted ventilatory assist or nava ) .nevertheless , these techniques fail to take into account the fact that , under certain circumstances , the automatic respiratory activity of the brain stem is supplemented by respiratory - related cortical circuits .indeed , inspiratory loading in awake humans elicits a cortical response that can be observed in eeg signals .such responses have been found to correlate with respiratory discomfort in healthy subjects fighting a ventilator .these observations give rise to the prospect of an effective brain - ventilator interface ( bvi ) that would target the neural correlates of respiratory discomfort rather than the automatic drive to breathe .current neuroscience research attempts to understand how brain functions result from dynamic interactions in large - scale cortical networks , and to further identify how cognitive tasks or brain diseases contribute to reshape this organization .covariance analyses of brain data are widely used to elucidate the functional interactions between brain regions during different brain states .the relevance of covariance matrices as a feature for bci has been already assessed and they constitute a very appropriate choice given their ability to reflect spatio - temporal dynamics in eeg . in this paper , we use elements of differential geometry to evaluate the ability of eeg covariance matrices to characterize changes in respiratory states in healthy subjects . the use of brain computer interfaces ( bci ) is increasingly common in clinical environments as a technology to improve patient communication and rehabilitation using brain signals . however , the application of bci in the respiratory context has not yet been explored . 
in this work ,we propose a framework which provides the basis of a brain - ventilator interface ( see figure [ fig : plan_bvi ] ) .a first possible implementation consists of an open loop configuration that generates an output signal to trigger an alarm in the case of breathing disharmony .the second implementation is a more advanced version that could generate a continuous output signal in a closed loop to directly adapt ventilator parameters to the patient s needs .the different blocks in the proposed bvi are as follows : 1 .acquisition : set of electrodes , amplifiers and a / d converter providing digitized eeg signals .pre - processing : improves signal - to - noise ratios in eeg signals by applying artifact correction / rejection and/or filters .feature extraction : sample covariance matrices ( cms ) are obtained from segmented , pre - processed signals .4 . classification : cms are labelled according to two possible classes : normal and altered breathing .detection of anomalous respiratory states is achieved by one - class learning , measurement of the distance between a number of reference matrices learned during reference condition and the cm corresponding to a particular signal epoch .since cms do not lie in vector space , appropriate distance metrics must consider their natural geometry .translation : external application that converts the binary signal from the classifier to an alarm or ventilator command .the present paper focuses on the signal processing aspects of the bvi pre - processing , feature extraction and classification blocks as a detector of respiratory - related activities compatible with breathing discomfort .the framework is validated with eeg from healthy subjects under two breathing constraints to emulate patient - ventilator disharmony .the reliability and performances of our method are also compared with those obtained by a common spatial patterns ( csp ) method in combination with linear discriminant analysis ( lda ) . the experimental protocol and dataare detailed in section [ database ] .section [ meth ] describes the different signal processing blocks of the bvi , including our riemann geometry based classifier and other standard classification methods that use eeg and breathing signals .section [ setup ] studies bvi settings for optimal detection of breathing discomfort and section [ res ] provides the experimental results and evaluation of the bvi .finally , we conclude the paper with a discussion in section [ concl ] .the database is composed of nine healthy subjects ( 21 - 29 years ; 5 women ) with no prior experience with respiratory or neurophysiology experiments ( for more details see ) . according to the declaration of helsinki , written informed consentwas obtained from each subject after explanation of the study , which was approved by the local institutional ethics committee ( comit de protection des personnes ile - de - france vi , groupe hospitalire piti - salptrire , paris , france ) .subjects were sitting in a comfortable chair and breathed continuously through a mouthpiece .they were asked to avoid body and head movements .they were distracted from the experimental context by watching a movie during the entire experiment , on a screen placed in front of them . 
to minimize emotional interference ,the movie was a neutral animal documentary .it is worthy to note that , even though many icu patients are supine , clinical practice guidelines recommend that mechanically ventilated patients in the icu should be semi - supine , or semi - seated , to decrease the risk of respiratory infections and increase patient comfort .electroencephalographic activity was recorded via surface electrodes ( acticap , brainproducts gmbh , germany ) using 32 electrodes according to the standard 10 - 20 montage and sampled at 2500 hz .impedance between electrodes and skin were set below 5 k .the mouthpiece was connected to a pneumotachograph ( hans ruldoph inc ., mo , usa ) and a two - way valve to measure air flow and attached , when required , an inspiratory load ( range 18 - 25 cmh20 ) . the experiment was designed to activate cortical regions by altered breathing and consisted of three parts : 1 .normal , spontaneous ventilation ( sv condition ) .breathing is controlled automatically by the autonomous nervous system without cortical contribution .2 . voluntary brisk inhalations or sniffs ( sn condition ) .breathing movements are planned before execution , thus motor and pre - motor cortical regions are solicited .3 . inspiratory loaded breathing ( ld condition ) .ventilatory muscles perform a supplementary effort to overcome an inspiratory threshold load to maintain adequate air flow , a condition known to engage cortical networks .in contrast to the sv condition , where breathing is comfortable , ld is associated with respiratory discomfort . in a clinical context, sv would correspond to patient - ventilator harmony , whereas ld condition would correspond to patient - ventilator disharmony .sn can be considered as a positive control condition where cortical control is expected .for all subjects , 10 minutes of eeg was recorded for each condition , i.e. 10 min of sv , 10 min of sn and 10 min of ln .the experiments were well tolerated by all subjects and no intervention was necessary to modify the amount of load or sniff pattern during the recordings . hence , during 10-minute recordings for each condition , no relevant changes occurred within each of these 10-min blocks regarding experimental conditions or subject behavior .the purpose of this block is to enhance motor cortical activity , whose main rhythms are within and bands .therefore , signals were band - pass filtered by a linear phase fir filter ( see section [ ch_freq ] for more details regarding frequency selection ) .data segments with artifacts due to repetitive eye blinks and ocular movements were visually detected and removed from the original eeg dataset .then , signals were down - sampled to 250 hz and segmented in 5 second sliding , 50% overlapped windows to reduce computational cost in subsequent blocks .this time interval is in concordance with the slow breathing dynamics ( a breath every 2.5 to 5 seconds ) . as mentioned above ,the basis to classify the breathing state in the bvi are sample covariance matrices .the feature extraction block processed eeg data in epochs of samples and channels , as a matrix , and then transformed to a sample covariance matrix .the latter was computed by the unbiased estimator : where the superscript denotes the matrix transpose . 
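A minimal sketch of the pre-processing and feature-extraction blocks follows, assuming the unbiased estimator is the standard $C = X X^{\mathsf T}/(T-1)$ for a centred epoch $X$ of $T$ samples. The 8-30 Hz pass band (covering the mu and beta motor rhythms) and the filter order are illustrative choices; the paper fixes the band as discussed in its frequency-selection section.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 250                     # sampling rate after down-sampling (Hz)
win = 5 * fs                 # 5-second windows
hop = win // 2               # 50% overlap

def bandpass(eeg, lo=8.0, hi=30.0, order=250):
    """Zero-phase band-pass using a linear-phase FIR filter.
    `eeg` has shape (channels, samples)."""
    h = firwin(order + 1, [lo, hi], pass_zero=False, fs=fs)
    return filtfilt(h, [1.0], eeg, axis=1)

def covariance_features(eeg):
    """Unbiased sample covariance per 5 s epoch:
    C = X X^T / (T - 1), with X the centred (channels x T) epoch."""
    cms = []
    for start in range(0, eeg.shape[1] - win + 1, hop):
        X = eeg[:, start:start + win]
        X = X - X.mean(axis=1, keepdims=True)
        cms.append(X @ X.T / (win - 1))
    return np.stack(cms)     # shape: (epochs, channels, channels)

# toy usage: 32 channels, 60 s of synthetic EEG
eeg = np.random.default_rng(1).standard_normal((32, 60 * fs))
cms = covariance_features(bandpass(eeg))
print(cms.shape)             # -> (23, 32, 32)
```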
by construction, sample covariance matrices are symmetric positive-definite (spd) matrices that do not lie in a vector space but on a riemannian manifold. therefore, previous methods defined on a euclidean structure are no longer adequate. the correct manipulation of these spd matrices requires the application of riemannian geometry concepts, described in section [riemann]. thanks to the ability of covariance matrices to capture eeg spatial dynamics, a one-class approach can be chosen to solve the classification problem in a bvi. in one-class algorithms, the availability of labels from only one class is enough to classify instances from a second class, as the latter can be mapped to a different (distant) region of the representation space. in our framework, the motor cortex activation provoked during uncomfortable breathing should differ from that underlying normal, comfortable breathing (absence of motor cortical activity). the objective of the classifier is to learn from data samples of the reference class sv in order to label new trials into two classes, corresponding to breathing comfort (sv condition) and discomfort (sn or ld conditions), respectively. during the learning process, the algorithm first finds $k$ matrices, subsequently called prototypes, serving as references to perform classification. those prototypes constitute centers of the ensemble of covariance matrices and they are estimated by means of a general $k$-means clustering algorithm (a sketch is given after this list):

1. initialize the prototypes by random selection of $k$ matrices from the training set;
2. for each sample covariance matrix, compute its distance to all the prototypes and assign it to the closest one in order to form the clusters;
3. update each prototype by averaging the points in its cluster (for points lying in a vector space, this corresponds to the arithmetic mean);
4. go to step (2) until convergence (e.g. the assignments no longer change) is achieved.

the resulting prototypes represent the reference class, and are then used to classify new unlabelled data: a trial is assigned to the altered-breathing class if $d_{\min}$, its distance to the closest reference prototype, exceeds a threshold $\delta$, a scalar that can be adjusted to a performance criterion, like a statistical significance level or a desired specificity/sensitivity value, for instance. classification performance was measured by the area under the curve (auc) of the receiver operating characteristic (roc). auc values range from 0.5 (a random classification) to 1 (perfect classification). all auc values were computed by applying 10-fold cross validation, excluding the learning period in the classification. within a euclidean framework, the $k$-means algorithm divides the dataset into $k$ groups and attempts to minimize the euclidean distance between samples labeled to be in a cluster and a point designated as the arithmetic mean of that cluster. nevertheless, the spd manifold is not a linear space with the conventional matrix addition operation. a natural way to measure closeness on a manifold is by considering the geodesic distance between two points on the manifold. such a distance is defined as the length of the shortest curve connecting the two points. as an example, consider the earth's surface as a manifold: the riemannian distance between the two poles is given by a meridian, while the euclidean distance corresponds to the straight line going through the earth's core from pole to pole.
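a minimal sketch of the prototype learning and detection rule described above, assuming the log-euclidean distance (introduced in the next section) as the metric; the names, the choice of $k$, and the threshold handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spd_log(c):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def learn_prototypes(covs, k=2, n_iter=50, seed=0):
    """General k-means on SPD matrices: assignments use the log-Euclidean
    geodesic distance; cluster means are computed in the log-domain."""
    rng = np.random.default_rng(seed)
    logs = np.stack([spd_log(c) for c in covs])
    centers = logs[rng.choice(len(logs), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(logs[:, None] - centers[None], axis=(2, 3))
        labels = d.argmin(axis=1)
        new = np.stack([logs[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers  # prototypes, stored in the log-domain

def is_altered(c, centers, delta):
    """Flag an epoch as altered breathing if the distance to the
    closest prototype exceeds the threshold delta."""
    d_min = min(np.linalg.norm(spd_log(c) - m, ord='fro') for m in centers)
    return d_min > delta
```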
in the space of spd matrices, the clustering algorithm must minimize the geodesic distances between each point of the manifold (the covariance matrices) and the reference matrix. each cluster center can then be obtained by an averaging process that employs the intrinsic geometrical structure of the underlying set. derived from different geometrical, statistical or information-theoretic considerations, various distance measures have been proposed for the analysis of spd matrices. although many of these distances try to capture the non-linearity of spd matrices, not all of them are geodesic distances. the space of $n \times n$ spd matrices constitutes a differentiable riemannian manifold of dimension $n(n+1)/2$. at any point $p$ there is a tangent euclidean space $t_p$. let $s_1$ and $s_2$ be two points on the tangent space (e.g. the projections of two spd matrices); the scalar product in the tangent space at $p$ depends on the point $p$. the logarithmic map that locally projects a covariance matrix $c$ onto the tangent plane at $p$ is given by $\mathrm{log}_p(c) = p^{1/2} \log\!\big(p^{-1/2} c\, p^{-1/2}\big)\, p^{1/2}$, where $\log(\cdot)$ is the matrix logarithm operator. projecting spd matrices onto $t_p$ is advantageous because this tangent space is euclidean and distance computations in the manifold can be well approximated by euclidean distances in $t_p$. the inverse operation that projects a point of the tangent space back to the manifold is given by the exponential mapping $\mathrm{exp}_p(s) = p^{1/2} \exp\!\big(p^{-1/2} s\, p^{-1/2}\big)\, p^{1/2}$. in this paper, we have employed the two most widely used distance measures in the context of riemannian manifolds: the affine-invariant distance and the log-frobenius distance, also referred to as the log-euclidean distance. for comparative purposes, we have also used a euclidean metric. on the space of real square matrices, we have the frobenius inner product and the associated metric $d_e(c_1, c_2) = \|c_1 - c_2\|_f$; given a set of real square matrices, their arithmetic mean serves as the euclidean center. the (affine-invariant) riemannian distance between two spd matrices can be computed as $d_r(c_1, c_2) = \big\|\log\!\big(c_1^{-1/2} c_2\, c_1^{-1/2}\big)\big\|_f$. this metric has several useful theoretical properties: it is symmetric and satisfies the triangle inequality. furthermore, it is scale, rotation and inversion invariant. to find the mean of a set of covariance matrices, this distance is applied in the minimization that defines the geometric mean. although no closed-form expression exists, this (geometric) mean converges to a unique solution and can be computed efficiently by the iterative algorithm described in . the log-euclidean distance between two spd matrices is given by $d_{le}(c_1, c_2) = \|\log(c_1) - \log(c_2)\|_f$. this metric maps spd matrices to a flat riemannian space (of zero curvature) so that classical euclidean computations can be applied. under this metric, the geodesic distance on the manifold corresponds to a euclidean distance in the tangent space at the identity matrix. this metric is easy to compute and preserves some important theoretical properties, such as scale, rotation and inversion invariance. given a set of $m$ covariance matrices, their log-euclidean mean exists and is uniquely determined by $\bar{c}_{le} = \exp\!\big(\frac{1}{m}\sum_{i=1}^{m} \log(c_i)\big)$. in this paper, we have also evaluated the detection of altered respiratory states using eeg and air-flow-based features as inputs for one-class support vector machine (ocsvm) classifiers. although one-class classification, or novelty detection, is more appropriate for the clinical monitoring of physiological condition in ventilated patients (i.e.
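for illustration, the two geodesic distances can be computed with standard matrix functions; this is a hedged sketch using scipy, not the authors' implementation, and it numerically checks the invariance properties claimed above.

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def d_affine_invariant(c1, c2):
    """Affine-invariant Riemannian distance:
    || log( C1^{-1/2} C2 C1^{-1/2} ) ||_F."""
    s = inv(sqrtm(c1))
    return np.linalg.norm(logm(s @ c2 @ s), ord='fro')

def d_log_euclidean(c1, c2):
    """Log-Euclidean distance: || log(C1) - log(C2) ||_F."""
    return np.linalg.norm(logm(c1) - logm(c2), ord='fro')

# quick numerical check of the invariance properties mentioned in the text
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4)); c1 = a @ a.T + 4 * np.eye(4)
b = rng.standard_normal((4, 4)); c2 = b @ b.T + 4 * np.eye(4)
w = rng.standard_normal((4, 4))  # invertible congruence transform
print(np.isclose(d_affine_invariant(c1, c2),
                 d_affine_invariant(w @ c1 @ w.T, w @ c2 @ w.T)))  # True
print(np.isclose(d_log_euclidean(c1, c2),
                 d_log_euclidean(inv(c1), inv(c2))))               # True
```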
altered breathing epochs may be detected as departures from the "normal" breathing state), we have compared our results with those obtained from a common spatial patterns (csp) method in combination with linear discriminant analysis (lda) to detect altered breathing from eeg. csp is a two-class oriented approach that needs a priori labelled data from two previously defined classes (the normal and the altered breathing conditions) to train the classifier, while our one-class approach learns from a single training set containing only normal respiratory epochs. however, we have included these methods in the comparative study as they constitute a standard in brain-computer interfacing. the one-class support vector machine is a very popular machine learning technique used as an outlier or novelty detector for a variety of applications. in ocsvm, the support vector model is trained on data that has only one class, which represents the _normal_ class. this model attempts to learn a decision boundary that achieves the maximum separation between the points and the origin. an ocsvm first uses a transformation defined by a kernel function to project the data into a higher-dimensional space. the algorithm then learns a decision boundary that encloses the majority of the projected data, and that can be applied to the outlier detection problem. more details on the algorithmic aspects of one-class svms can be found in . to guarantee the existence of a decision boundary we have used here a gaussian kernel $k(x, y) = \exp\!\big(-\|x - y\|^2 / (2\sigma^2)\big)$, where $x$ and $y$ denote two feature vectors and the parameter $\sigma$ is set to the median of the pairwise distances among training points. for this classifier, feature vectors were extracted by computation of covariance matrices in epoched data followed by vectorization of the upper triangular part of each matrix. air flow features were computed, as for the eeg classifiers, in 50% overlapped windows. in this case, feature vectors are composed of six air flow descriptors computed in each window: peak value (l/s), average flow (l/s), total volume (l), flow variance, skewness and kurtosis. after an exhaustive feature selection procedure (testing the classifier with all possible combinations) using 10-fold cross validation, the greatest averaged auc was provided by the combination of 3 features: air flow peak value, variance and skewness. csp is a widely used spatial filtering technique in bcis to find linear combinations of electrodes such that the filtered eeg maximizes the variance difference between two classes. the computation of csp yields a projection matrix $w$ whose column vectors $w_j$, called spatial filters, can be considered as an operator that transforms eeg to "source" components. the matrix $a = (w^{-1})^\top$ contains the so-called spatial patterns in its columns, which can be considered as eeg source distribution vectors.
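a possible realization of the ocsvm detector described above, with the median heuristic for the kernel width; the feature construction (upper-triangular vectorization of the covariance matrices) follows the text, while `nu` and all names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import OneClassSVM

def cov_features(covs):
    """Vectorize the upper-triangular part of each covariance matrix."""
    iu = np.triu_indices(covs.shape[1])
    return np.stack([c[iu] for c in covs])

def fit_ocsvm(train_covs, nu=0.1):
    """One-class SVM trained on normal-breathing epochs only; the
    Gaussian kernel width follows the median heuristic in the text
    (sigma = median pairwise distance among training points)."""
    x = cov_features(train_covs)
    sigma = np.median(pdist(x))
    model = OneClassSVM(kernel='rbf', gamma=1.0 / (2.0 * sigma ** 2), nu=nu)
    return model.fit(x)

# usage: model.predict(cov_features(test_covs)) returns +1 for epochs
# classified as normal breathing and -1 for suspected altered breathing
```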
a standard procedure to extract features from epoched eeg data by csp consists of computing the log-variances of the signals projected on the spatial filters. following the recommendations in , we used the 4 most relevant spatial filters to obtain the feature vectors, which were then used as inputs in an lda classifier. following bci designs, the bvi should exploit the oscillatory modulations in the $\mu$ and $\beta$ bands (8-30 hz) following motor and somato-sensorial cortical activations. very low frequency potentials associated with voluntary movements may also be present during respiratory tasks, as evidenced by previous work. to study the impact of specific frequency bands on the efficacy of the bvi to detect altered breathing, we compared several frequency ranges (from 0 to 30 hz) and bandwidths (4 to 22 hz) to tune the bandpass filter used in the preprocessing block (see section [res:freq]). in bci, the choice of an adequate number of sensors and their location is fundamental. reducing the number of irrelevant electrodes avoids over-fitting and optimizes computational costs, but also improves patient comfort and reduces installation time. several channel selection methods can be employed to simplify the implementation of bcis, the most popular being those based on common spatial patterns (csp) and svms. in addition to csp-based channel selection, we propose an iterative procedure to rank electrodes according to their discriminating power, the common highest order ranking (chorra). results obtained by both methods are compared in section [elect_rankings]. chorra is performed in two steps. firstly, an electrode ranking for each subject (intra-subject ranking) is found. then, a general ranking of the most significant electrodes is computed over all subjects (inter-subject ranking). to compute intra-subject ranks, a recursive backward elimination of electrodes was applied until the remaining number was 2, the minimal dimension to compute a covariance matrix (see algorithm 1, and the sketch after this paragraph):

algorithm 1 (recursive backward elimination): initialise the working subset with the initial electrodes; while more than two electrodes remain, compute the classification performance obtained without each electrode in turn, remove from the subset the electrode whose exclusion degrades performance the least (i.e. the one providing the minimal contribution), and append it to the ranked list; return the ranked list.

at the end of this procedure, the list contains the indices of the electrodes sorted from most to least relevant in terms of classification performance for a particular subject. by combining the ranking lists from all subjects, a general configuration for a ready-to-use bvi device can be chosen. we have tested two possible rank combinations:

1. _ranking aggregation_: this method determines electrode position in the ranked list, compared to a null model where all the lists are randomly shuffled. based on order statistics, this algorithm assigns a p-value to each electrode in the aggregated list describing the rank improvement compared to expected (a low score is assigned to channels that are preferentially ranked at the top of all lists). the procedure only takes into account the best rank, thus providing a robust re-ordering of elements.
2. _averaged ranking_: this is a classical positional method where the total score of an electrode is simply given by the arithmetic mean of the positions in which the channel appears in each ranking list.

to increase rank robustness, several learning periods were used. hence, after obtaining a set of lists (10-fold cross validation) per subject, the intra-subject ranking was computed according to one of the above rank combinations applied to the lists of electrode rankings.
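the recursive backward elimination of algorithm 1 and the averaged-rank combination might be sketched as follows; `score_fn`, which is assumed to wrap classifier training and auc evaluation on a given electrode subset, is a hypothetical helper, not part of the original method's code.

```python
import numpy as np

def backward_elimination(channels, score_fn):
    """CHORRA-style intra-subject ranking (algorithm 1): repeatedly drop
    the electrode whose removal hurts the AUC least, until 2 remain.
    score_fn(subset) is assumed to return the classifier's AUC when
    trained and evaluated on that electrode subset."""
    active = list(channels)
    ranked = []                       # filled from least to most relevant
    while len(active) > 2:
        aucs = {ch: score_fn([c for c in active if c != ch]) for ch in active}
        worst = max(aucs, key=aucs.get)   # its removal keeps AUC highest
        active.remove(worst)
        ranked.append(worst)
    ranked.extend(active)             # the two final, most relevant channels
    return ranked[::-1]               # most relevant first

def averaged_ranking(rank_lists):
    """Inter-subject combination by mean position; all lists are assumed
    to contain the same channels."""
    channels = rank_lists[0]
    mean_pos = {ch: np.mean([lst.index(ch) for lst in rank_lists])
                for ch in channels}
    return sorted(channels, key=mean_pos.get)
```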
as the electrode positions leading to the maximal auc in a particular subject are not necessarily the same for another, we studied a common electrode configuration that could be shared by any user. this general electrode set-up is more pertinent in a clinical context, as an eeg headset could be interfaced promptly. therefore, we applied the above rank combination methods to all intra-subject rankings to obtain common and unique electrode lists. since csp patterns can be considered as eeg source distribution vectors, their associated weights can be used to find the most relevant electrodes with regard to the discrimination of the two classes. to compute intra-subject csp-based ranks, we retained the first pattern, as it explains the largest variance for class 0 (and the lowest for class 1), and the last pattern, which explains the largest variance for class 1 (and the lowest for class 0). within each pattern, the largest weights (absolute values) are associated with the most relevant electrodes. hence, the list of electrode ranks was generated by selecting, alternately and in decreasing order, the first weight in the first pattern, then the first weight in the last pattern, the second weight in the first pattern, and so on, as proposed by wang et al. (a sketch of this alternating selection is given at the end of this section). the final list of inter-subject csp ranks was obtained by normalizing the two retained patterns (by their maximal value) in each subject, then averaging these patterns across subjects as in , and finally by selecting the weights as done above for intra-subject ranks. for consistency with chorra ranks, a similar 10-fold procedure was applied: csp was computed using different mean covariance matrices in class 0, and the average of the resultant projection matrices yielded the final projection. as mentioned above, the prototypes are local centers that represent the space defined by the reference class. to this end, we adopted the prototypes provided by the general $k$-means algorithm as the centers introduced in section [classif]. the structure of the manifold defined by the reference class is unknown. covariance matrices may be organized in a compact, uniform manner, so a few centers are enough to correctly represent class 0. on the contrary, the class may have a complex distribution and need more centers to be correctly represented. therefore, in order to achieve good classification, there may be an optimal number of prototypes. to satisfactorily exploit the bvi, previous knowledge about the learning time needed to train the classifier is necessary. a limited learning time may result in a small number of covariance matrices and hence in a poor representation of the space defined by normal breathing. moreover, the clustering algorithm would provide a less accurate center when few cms are available. on the other hand, the learning time should meet clinical requirements, as long set-up times in these environments are impractical. since the number of covariance matrices needed to represent class 0 may impact the optimal number of prototypes, the classification performance has to be computed by varying both parameters. in the next section, these parameters are tested experimentally to find the optimal values. this section shows and discusses the results of applying the bvi to detect an altered respiratory condition (ld or sn) after learning a reference period of normal breathing (sv).
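the alternating csp-pattern ranking referenced above could be sketched as follows; this is a small illustrative helper under the assumption that the inputs are the first and last columns of the csp pattern matrix, with our own names.

```python
import numpy as np

def csp_pattern_ranking(a_first, a_last):
    """Rank electrodes from the two extreme CSP patterns, alternating
    between the largest absolute weights of the first and last pattern
    (as proposed by Wang et al.). Inputs are 1-d weight vectors."""
    order1 = list(np.argsort(-np.abs(a_first)))   # descending |weight|
    order2 = list(np.argsort(-np.abs(a_last)))
    ranked, seen = [], set()
    for e1, e2 in zip(order1, order2):
        for e in (e1, e2):
            if e not in seen:
                ranked.append(e)
                seen.add(e)
    return ranked  # electrode indices, most relevant first
```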
to find the best settings in the classification block, several methods estimated the distances to the reference condition sv. distances were then employed to estimate the areas under roc curves (auc) as a measure of the classifier's performance. plots in figure [fig:alpha_delta] illustrate the feature distances and corresponding aucs for detection of the ld condition in one single subject. for the eeg-based classifier, the optimal frequency band for preprocessing signals was first selected. then, euclidean, riemannian and log-euclidean distances were compared to select the best metric used for evaluating the optimal electrode configuration. finally, the impact of learning time and the number of prototypes was also assessed. as seen in figure [fig:auc_freq], the frequency bands that provided the highest discriminant rates in terms of auc were 8-24 hz and 8-30 hz. since the latter is more susceptible to muscular artifacts, we preferred to set the bandpass cut-off frequencies to 8-24 hz. low frequencies (below 8 hz) were discarded not only for their moderate discrimination power, but also because in a real implementation of the bvi this spectral range would be corrupted by electromagnetic interference inherent in clinical environments. other frequency combinations were not superior to the 8-24 hz classification rates. the first test used the 14 most central electrodes, so covariance matrices were obtained from 14 eeg signals, with the classifier settings held fixed. figure [fig:auc_ini] shows that the best performing metric is the log-euclidean, which provided slightly better auc values than riemannian distances in the ld condition and almost equal auc values for the sn condition. the classification performance for sn had the largest variability, probably due to the discontinuous occurrence of sniffs (one every two breaths). although the performance of the log-euclidean metric is similar to that obtained by the affine-invariant riemannian metric, the former is advantageous from the computational point of view due to its reduced algorithmic complexity (cpu times were, on average, three times faster). euclidean distances provided the lowest average classification rates and the most unstable auc values, demonstrating the limitation of linear matrix operators (i.e. the arithmetic mean) for eeg covariance matrix classification. the analysis of auc values also shows that riemannian geometry is a better framework to classify eeg covariance matrices than the outlier detector based on one-class svms. we notice, however, that the latter provided better results than the classifier with a euclidean structure (in terms of mean auc values).
in the ld condition, classification performance obtained with air flow remained below that obtained using the log-euclidean and affine-invariant distances, and almost equal to the one-class svm eeg-based detector. because of the characteristically large pattern of air flow during sniffs, air flow signals provided better classification in sn than the svm eeg-based detector, and performance similar to our approach. finally, if we considered the bvi as a two-class detector (unsuitable for on-line detection, as discomfort classes are not known a priori), the classification by csp and lda yielded values clearly below our riemannian geometry based approach, for both ld and sn. we assessed the most relevant electrodes in the proposed bvi, starting from a set of 14 central electrodes from the original 32-electrode montage. retained electrodes included positions on primary sensorimotor cortices (c3, c4), higher-level secondary and association cortices (fz and cz), pre-parietal (cp1 and cp2) and pre-frontal cortices (f3, f4), pre-motor and supplementary motor areas (fc1, fc2), central (fc5 and fc6) and parieto-occipital areas (cp5 and cp6). the choice of these areas follows results of earlier experiments describing the cortical networks elicited during respiratory load compensation. according to the chorra procedure to find the best channels from the initial set-up (cf. algorithm 1 in section [ch_selelct]), we first found intra-subject rankings and then applied a global rank aggregation. as described above, we employed auc as the measure of classification performance and performed the rankings from 10 lists (i.e. 10 different learning periods) for each subject. the classification results are shown in figure [fig:auc_elect]-(a) for intra-subject auc-optimized ranks and figure [fig:auc_elect]-(b) for intra-subject csp electrode ranks, where auc values are expressed as a function of the number of removed electrodes. in both cases, the evolution of the curves during the optimization process displays a slightly increasing tendency of the auc for the log-euclidean metric when electrodes contributing negatively to classification are removed. classification performance decreases with configurations smaller than six electrodes. results show that a customized individual selection of electrodes can provide an optimal auc within a single subject. using an intra-subject auc-based optimization, classification rates can be improved up to 0.95 using the six most significant electrodes. on the other hand, selecting the electrodes according to csp patterns provides lower aucs, with the best 6-electrode configuration reaching an average auc value of 0.85 in ld.
as shown in the different plots of figure [fig:auc_elect], for configurations with more than three electrodes, the log-euclidean metric performs better than the euclidean distance at every optimization step. these results support the idea that the large spatio-temporal information of the eeg (as reflected by the sample covariance matrices) is optimally captured when the intrinsic geometric structure of the underlying data is taken into account. we applied the two rank combinations proposed by chorra to the lists of sorted electrodes of all subjects to find a general ranking of electrodes for each method. following the rank of the final lists, we computed the auc values by reducing the initial 14-electrode set down to two sensors (see figure [fig:auc_elect]-(c-d)). classifications provided by inter-subject csp ranks are depicted in figure [fig:auc_elect]-(e). results from both chorra procedures indicate that a good compromise between a reduced number of electrodes and reasonable classification can be reached by selecting the best 6 electrodes obtained by the averaged ranks (for both ld and sn conditions). in line with the intra-subject ranks, the general electrode list obtained by csp patterns resulted in poorer classification rates for a set of 6 electrodes. results indicate that, for general configurations with more than three electrodes, the log-euclidean metric still performs better than the euclidean distance. the aggregated chorra scores displayed in figure [fig:heads] indicate the scalp zones with most influence on the classification of both the sn and ld conditions. although both optimization procedures (robust rank aggregation and rank averaging) provide similar classification performance, the spatial concentration of discriminative scalp regions obtained by the robust aggregation method is larger than that obtained by a classical rank averaging procedure. the spatial distribution of scores and the rankings (see figure [fig:heads]) indicates that discrimination of the ld condition is better if the electrode configuration contains the pre-motor and supplementary motor area (fc1), the fronto-central region (fc6) and the supramarginal gyrus (the part covered by the cp6 electrode). this agrees with previous findings suggesting that during ld the supplementary motor area (sma) is recruited, most likely within the frame of a cortico-subcortical cooperation allowing compensation of the inspiratory load. during sn, the conscious preparation of a breath activates pre-motor areas and the execution of the breath activates motor areas. indeed, our results show that the sn condition is better discriminated if the electrodes include the supplementary motor area (fc1), the central motor area cz and part of the somatosensory association cortex (covered by the electrode cp1). figure [fig:heads] also shows the relative weights of electrode positions related to the most relevant csp patterns. in both the ld and sn conditions, the topographic plots reveal that highly ranked positions reside in similar scalp regions.
for loaded breathing, csp weights suggest a bilateral activation of the cortex that includes fronto-central, central (motor cortex) and some centro-parietal positions (somato-sensory cortex). on the other hand, during voluntary sniffs, highly scored electrodes match fronto-central positions on the left hemisphere (motor and pre-motor areas). electrode selection based on csp coefficients converges to a great extent with the chorra selection. nevertheless, for a given number of electrodes, our proposed approach provides better classification performance than the counterpart selection based on csp ranks (see figure [fig:auc_elect]). to confirm that the changes in brain activity during ld and sn epochs detected by the eeg-based classifier are related to breathing, correlations between eeg and air flow were also computed after correcting for autocorrelations and time trends present in the time series. results show that mean correlations in central areas (including the c3, c4, fz and fc1 electrodes) increase during altered breathing. this increase of correlation values between eeg and air flow signals confirms that the changes in brain state detected by our classifier are related to respiration. our results suggest that, in general, eeg signals might provide better discriminant features than air flow to detect breathing discomfort. once the effects of the different metrics and channel configurations were assessed, we tested the influence of the learning parameters, i.e. the number of covariance matrices used as a reference and the number of prototypes used to characterize the reference class. for this, we assessed classification performance for different numbers of prototypes, ranging from 1 to 7, and for different numbers of learning matrices, from 20 to 40 in steps of 5. this procedure was also repeated with 10-fold cross validation for the ld and sn conditions. three different electrode configurations were tested: 1) the initial 14-electrode configuration, 2) a selection of 6 electrodes optimized for every subject and 3) a general configuration with the best 6 electrodes after applying the global channel selection procedure. results are depicted in the color maps of figure [fig:auc_kl], where average auc values are expressed as a function of the two learning parameters. importantly, this figure shows a small effect of both parameters on overall classification rates, regardless of the electrode configuration.
in view of these findings, the choice of a small number of prototypes and a short learning time (few cms) is an advantageous trade-off between computational complexity and classification performance, even for a common electrode configuration setting. compared to classification based on air flow, our findings demonstrate a better discriminatory power of the covariance patterns of eeg signals to detect patient-ventilator disharmony (on average, for the loaded breathing condition). the analysis of auc values suggests that riemannian geometry is a better framework to manipulate covariance matrices, even with a small number of channels, than classical matrix operators with a euclidean structure. we notice that the one-class svm detector also provides better results than a classifier employing euclidean metrics. in contrast with other classical methods used in bci (such as csp, which assumes a priori labelled data from two classes), the results support the better discriminant capacity of our approach to identify anomalous respiratory periods by learning from a single training set containing only normal respiratory epochs. the proposed recursive channel selection procedure may provide a subject-customized bvi setting with a reduced number of six channels and maintained performance. if the channel configuration includes the most significant electrodes across subjects, classification rates are reduced compared to the customized optimization. nevertheless, this general setting provides a good compromise between a reduced number of electrodes (6 channels) and reasonable classification performance, which is advantageous in general clinical practice where ready-to-use devices are necessary. interestingly, the overall classification performance does not significantly depend on the other parameters of the classifier (learning time and number of prototypes). similar to our approach, there have been previous attempts to use divergences from information theory (e.g. the kl divergence, jensen-shannon divergences or bregman matrix divergences) for developing distances on covariance matrices. the use of such dissimilarity measures could improve the performance of our algorithm. an optimization of the control parameters of other svm models might also increase classification performance, but this subject is beyond the scope of our paper. one major limitation of our approach results from the difficulty to properly estimate the covariance matrices. poor estimates do not accurately reflect the underlying neural process and thus will directly affect the classifier's performance. the estimation of a covariance matrix in an eeg segment assumes a multivariate, stationary stochastic process, so it can be strongly affected by artifacts such as eye blinks, muscle activity or swallowing. changes over longer time scales, such as changes in electrode impedance, loss of electrodes or sudden electrode shifts, would also deteriorate classification performance. the use of regularization or adaptive learning approaches (see and references therein) could improve the robustness of the algorithm. this work provides the first extensive evaluation of what could be a brain-ventilator interface (bvi) designed to detect altered respiratory conditions in patients on mechanical ventilation. the novelty of our proposal is a riemannian geometry approach to identify a cortical signature of breathing discomfort from spatio-temporal eeg patterns.
in general, the characterization of brain networks provides meaningful insights into the functional organization of cortical activities underlying breathing control and respiratory diseases. our study supports the hypothesis of a strong correlation between voluntary and compensatory respiratory efforts and the activation of cortical circuits. the increase of correlation between eeg and air flow during breathing discomfort epochs also corroborates that the changes in brain state detected by our classifier are related to respiration. a hybrid bci combining eeg and air flow signals could be explored in future studies. the introduction of a bvi is a first step toward a critical class of interfaces for respiratory control applications in a variety of clinical conditions where the use of mechanical ventilation is required to decrease the work of breathing of the patients. for instance, in patients suffering from amyotrophic lateral sclerosis, we have recently reported respiratory-related cortical activity. effective translation of our approach to a suitable device for long-term monitoring of these patients faces difficult challenges that arise from the nature of the heterogeneous population (e.g. different respiratory problems) and the numerous sources of artifacts in clinical units. nevertheless, due to its technical simplicity (portable, non-invasive, with few electrodes and fast computation), the proposed bvi can be highly operable in clinical environments as well as in custom-designed systems. indeed, our algorithm is currently being assessed in a large clinical study undertaken at the icu by our group. beyond short-term applications where the bvi would prompt clinicians to run a dyspnea check-list, future work should also integrate the bvi in a feedback scheme to automatically set ventilator parameters with minimal physician intervention. this work was supported by the program investissement d'avenir anr-11-emma-0030 and anr-10-aihu 06 of the french government and by the grant legs poix from the chancellerie de l'université de paris, france. x. navarro-sune is financially supported by air liquide medical systems s.a. a. l. hudson was supported by an nhmrc (australia) early career fellowship.

a. carlucci et al., srlf collaborative group on mechanical ventilation, "noninvasive versus conventional mechanical ventilation. an epidemiologic survey," _american journal of respiratory and critical care medicine_, vol. 163, no. 4, pp. 874-880, 2001.
p. leung et al., "comparison of assisted ventilator modes on triggering, patient effort, and dyspnea," _american journal of respiratory and critical care medicine_, vol. 155, no. 6, pp. 1940-1948, 1997.
m. raux et al., "cerebral cortex activation during experimentally induced ventilator fighting in normal humans receiving noninvasive mechanical ventilation," _the journal of the american society of anesthesiologists_, vol. 107, no. 5, pp. 746-755, 2007.
a. l. hudson et al., "electroencephalographic detection of respiratory-related cortical activity in humans: from event-related approaches to continuous connectivity evaluation," _journal of neurophysiology_, vol. 115, pp. 2214-2223, 2016.
a. cherian et al., "jensen-bregman logdet divergence with application to efficient similarity search for covariance matrices," _ieee transactions on pattern analysis and machine intelligence_, vol. 35, no. 9, pp. 2161-2174, 2013.
p. t. fletcher and s. joshi, "principal geodesic analysis on symmetric spaces: statistics of diffusion tensors," in _computer vision and mathematical methods in medical and biomedical image analysis_, vol. 3117, pp. 87-98, 2004.
m. h. quang et al., "a unifying framework for vector-valued manifold regularization and multi-view learning," in _proceedings of the 30th international conference on machine learning (icml-13)_, pp. 100-108, 2013.
j. farquhar et al., "regularised csp for sensor selection in bci," in _proceedings of the 3rd international brain-computer interface workshop and training course 2006, graz university of technology, austria_, pp. 14-15, 2006.
g. macefield and s. c. gandevia, "the cortical drive to human respiratory muscles in the awake state assessed by premotor cerebral potentials," _the journal of physiology_, vol. 439, pp. 545-558, 1991.
m. raux et al., "functional magnetic resonance imaging suggests automatization of the cortical response to inspiratory threshold loading in humans," _respiratory physiology & neurobiology_, vol. 189, no. 3, pp. 571-580, 2013.
t. similowski et al., "method for characterising the physiological state of a patient from the analysis of the cerebral electrical activity of said patient, and monitoring device applying said method," 2013. wo patent wo2013164462 (a1).
* abstract * during mechanical ventilation, patient-ventilator disharmony is frequently observed and may result in increased breathing effort, compromising the patient's comfort and recovery. this circumstance requires clinical intervention and becomes challenging when verbal communication is difficult. in this work, we propose a brain-computer interface (bci) to automatically and non-invasively detect patient-ventilator disharmony from electroencephalographic (eeg) signals: a brain-ventilator interface (bvi). our framework exploits the cortical activation provoked by the inspiratory compensation when the subject and the ventilator are desynchronized. the use of a one-class approach and the riemannian geometry of eeg covariance matrices allows effective classification of respiratory states. the bvi is validated on nine healthy subjects that performed different respiratory tasks mimicking a patient-ventilator disharmony. classification performance, in terms of areas under roc curves, is significantly improved using eeg signals compared to detection based on air flow. a reduction in the number of electrodes that can achieve discrimination can often be desirable (e.g. for portable bci systems). by using an iterative channel selection technique, the common highest order ranking (chorra), we find that a reduced set of electrodes (n=6) can slightly improve performance for an intra-subject configuration, and it still provides fairly good performance for a general inter-subject setting. results support the discriminant capacity of our approach to identify anomalous respiratory states by learning from a training set containing only normal respiratory epochs. the proposed framework opens the door to brain-ventilator interfaces for monitoring the patient's breathing comfort and adapting ventilator parameters to the patient's respiratory needs. * keywords: * biomedical signal processing, brain-computer interfaces (bci), biomedical monitoring, medical signal detection, electroencephalography (eeg)
entropy is applied in information theory and statistics for characterizing the diversity or uncertainty in a probability distribution. for a continuous distribution with density $f(x)$, $x \in \mathbb{r}^d$, the rényi entropy of order $q$ is defined (rényi, 1970) as $$h_q^* := \frac{1}{1-q}\,\log \int_{\mathbb{r}^d} f(x)^q\,dx, \qquad q \neq 1,$$ where we use $\log x$ to denote the natural logarithm of $x$. the rényi entropy is a generalization of the shannon entropy (shannon, 1948), $h := -\int_{\mathbb{r}^d} f(x)\log f(x)\,dx$, which is recovered in the limit $q \to 1$. from the statistical point of view, the quadratic rényi entropy $h_2^*$ is the simplest point on the rényi spectrum $\{h_q^*,\, q \in \mathcal{q}\}$, where $\mathcal{q}$ is the subset of orders such that the entropies exist. note that $h_2^* = -\log q_2$, with the quadratic functional $q_2 := \int_{\mathbb{r}^d} f(x)^2\,dx$, where we assume that the quadratic functional is well defined, and hence the point $q = 2$ belongs to the set $\mathcal{q}$. more entropy generalizations are known in information theory, e.g., the tsallis entropy (tsallis, 1988): $t_q := \frac{1}{q-1}\big(1 - \int_{\mathbb{r}^d} f(x)^q\,dx\big)$. the rényi entropy (or information) for stationary processes can be understood as that of the corresponding ergodic or marginal distributions; see, e.g., gregorio and iacus (2009), where the rényi entropy is computed for a large class of ergodic diffusion processes. numerous applications of the rényi entropy in information-theoretic learning, statistics (e.g., classification, distribution identification problems, statistical inference), computer science (e.g., average case analysis for random databases, pattern recognition, image matching), and econometrics are discussed, e.g., in principe (2010), kapur (1989), kapur and kesavan (1992), pardo (2006), escolano et al. (2009), neemuchwala et al. (2005), ullah (1996), baryshnikov et al. (2009), seleznjev and thalheim (2003, 2010), thalheim (2000), leonenko et al. (2008), and leonenko and seleznjev (2010). various estimators for the quadratic functional $q_2$ and the entropy $h_2^*$ for _independent_ samples have been studied. leonenko et al. (2008) obtain consistency of nearest-neighbor estimators for $h_q^*$; see also penrose and yukich (2011) and the references therein. bickel and ritov (1988) and giné and nickl (2008) show rate optimality, efficiency, and asymptotic normality of kernel-based estimators for $q_2$ in the one-dimensional case. laurent (1996) builds an efficient and asymptotically normal estimator of $q_2$ (and more general functionals) for multidimensional distributions using orthogonal projection. see also references in these papers for more studies under the independence assumption. in our paper, we study $u$-statistic estimators for $q_2$ and $h_2^*$ based on the number of $\varepsilon$-close vector observations (or the number of small inter-point distances) in a sample from a stationary $m$-dependent sequence with given marginal distribution. this extends further the results and approach in leonenko and seleznjev (2010) (see also källberg and seleznjev, 2012), where the same estimators are studied under independence. the number of small inter-point distances in an independent sample exhibits rich asymptotic behaviors, including, e.g., poisson limits and asymptotic normality (see jammalamadaka and janson, 1986, and references therein). we show that some of the established limit results for this statistic are still valid when the sample is from a stationary $m$-dependent sequence. it should be noted that our normal limit theorems do not follow from the general theory developed for degenerate variable $u$-statistics under dependence; see, e.g., kim et al., 2011, and references therein. note that the class of stationary $m$-dependent processes is quite large; see, e.g.
, the book of joe (1997), where there are numerous copula constructions for $m$-dependent sequences with a given marginal distribution, or harrelson and houdré (2003), where the class of stationary $m$-dependent infinitely divisible sequences is studied. first we introduce some notation. throughout this paper, it is assumed that the sequence $\{x_i,\, i \geq 1\}$ of random $d$-vectors is strictly stationary and $m$-dependent, i.e., $\{x_j,\, j \leq i\}$ and $\{x_j,\, j \geq i+m+1\}$ are independent sets of vectors for every $i$. let $p_x$ be the (marginal) distribution of $x_1$ with density $f(x)$, quadratic functional $q_2$, and entropy $h_2^*$. we write $|\cdot|$ for the euclidean distance in $\mathbb{r}^d$ and define $b(x, \varepsilon) := \{y : |y - x| \leq \varepsilon\}$ to be an $\varepsilon$-ball in $\mathbb{r}^d$ with center at $x$ and radius $\varepsilon$. denote by $b(\varepsilon) = c_d\, \varepsilon^d$, $c_d := \pi^{d/2}/\gamma(d/2+1)$, the volume of the $\varepsilon$-ball. let $x$ and $y$ be independent and with distribution $p_x$, and introduce the $\varepsilon$-ball probability $p_\varepsilon(x) := p(x \in b(x, \varepsilon))$. vectors $x$ and $y$ are said to be $\varepsilon$-_close_ if $|x - y| \leq \varepsilon$, for some $\varepsilon > 0$. the $\varepsilon$-_coincidence_ probability for independent vectors is written $q_\varepsilon := p(|x - y| \leq \varepsilon)$. then the rényi 2-entropy can be used as a measure of uncertainty in $p_x$ (see seleznjev and thalheim, 2008, leonenko and seleznjev, 2010). in what follows, let $\varepsilon = \varepsilon(n) \to 0$ as $n \to \infty$. denote by $|s|$ the cardinality of a finite set $s$ and let $$n = n_n(\varepsilon) := \sum_{1 \leq i < j \leq n} \mathbb{i}\big(|x_i - x_j| \leq \varepsilon\big)$$ be the random number of $\varepsilon$-close observations in the sample, where $\mathbb{i}(a)$ is the indicator of an event $a$. then $n$ is a $u$-statistic of hoeffding with varying kernel. for a short introduction to $u$-statistics techniques, see, e.g., serfling (2002), koroljuk and borovskich (1994), lee (1990). denote by $\xrightarrow{d}$ and $\xrightarrow{p}$ convergence in distribution and in probability, respectively. for a sequence of random variables $\{\zeta_n\}$ and a numerical sequence $\{a_n\}$, we write $\zeta_n = \mathcal{o}_p(a_n)$ as $n \to \infty$ if for any $\epsilon > 0$ and large enough $n$, there exists $c > 0$ such that $p(|\zeta_n| > c\, a_n) \leq \epsilon$. moreover, for numerical sequences, let $a_n \sim b_n$ as $n \to \infty$ if $a_n/b_n \to 1$ as $n \to \infty$. the developed technique can also be used for estimation of the corresponding entropy-type characteristics for discrete distributions (see, e.g., leonenko and seleznjev, 2010) and stationary $m$-dependent sequences. in this case, the applied estimator is a $u$-statistic with fixed kernel and so the problem is simplified in the way that some already established general results yield the limit properties, including consistency and asymptotic normality (see appendix). an approach to statistical estimation of the shannon entropy for discrete stationary $m$-dependent sequences can be found in vatutin and mikhailov (1995). the remaining part of the paper is organized as follows. in section [sec:main], the main results for the number of small inter-point distances and the estimators of $q_2$ and $h_2^*$ are presented. numerical experiments illustrate the rate of convergence in the obtained asymptotic results. in section [sec:app], we discuss applications of these results to $\varepsilon$-keys in time series databases and distribution identification problems for dependent samples. section [sec:proofs] contains the proofs of the statements in section [sec:main]. some asymptotic properties of entropy estimation for the discrete case are given in the appendix. we formulate the following assumption about the finite dimensional distributions of the stationary sequence.

a: the marginal density fulfills $q_3 := \int_{\mathbb{r}^d} f(x)^3\,dx < \infty$. moreover, for each tuple of distinct positive integers, the distribution of the corresponding random vector has a density in the product space that satisfies an analogous integrability condition.

*remark 1.*
(i) the integrability ensures that the dependence among the observations is weak enough. in fact, it holds trivially for an independent sequence, so assumption a is a generalization of the condition used for studying the same estimators under independence (källberg and seleznjev, 2012). (ii) if the joint density is bounded for each distinct pair of indices, a corresponding domination condition is sufficient for assumption a. let in the following examples $\{z_i\}$ be a sequence of independent identically distributed (i.i.d.) normal random variables. *example 1.* (i) assumption a holds for all vector gaussian sequences. in particular, it is satisfied for the $m$-dependent moving average time series ma($m$) generated by $\{z_i\}$. (ii) an exponential transformation of the time series gives a non-linear sequence; its finite dimensional distributions are the multivariate log-normal distributions (see, e.g., kotz et al., 2000) and thus the assumption is fulfilled in this case. (iii) a ratio transformation of $\{z_i\}$ gives a stationary $m$-dependent sequence, say, a cauchy sequence. it can be shown that the required integrability is valid in this case and similarly for other orders, so assumption a is satisfied. let the expectation and variance of the number of small inter-point distances be $e\,n$ and $\mathrm{var}(n)$, respectively, and introduce the corresponding dependence characteristic. [prop:nn1] suppose that a holds. _(i)_ then the expectation and variance of $n$ fulfill the stated asymptotics. _(ii)_ under the stated rate conditions on $n$ and $\varepsilon$, the normalized variance converges. the asymptotic distribution for $n$ depends on the rate of decrease of $\varepsilon(n)$. some results for $n$ under the i.i.d. assumption (i.e., $m = 0$) are obtained in jammalamadaka and janson (1986) (see also leonenko and seleznjev, 2010). with only additional weak conditions, we show that these results are still valid when the sequence is stationary and $m$-dependent. [th:nn2] suppose that a holds. _(i)_ if the expected number of close pairs vanishes, then $n \xrightarrow{p} 0$. _(ii)_ if it converges to a positive constant, then $n$ has a poisson limit. _(iii)_ under the stated rate conditions, $n$ is asymptotically normal. note that the definition implies the stated inequality, with equality, e.g., if $f$ is uniform. *remark 2.* the following inference procedure is discussed in leonenko and seleznjev (2010) for i.i.d. sequences. by applying theorem [th:nn2]_(ii)_ to the minimum inter-point distance for a fixed level, we get that it has an asymptotically exponential distribution, and an asymptotic confidence interval for the quadratic functional can be written for certain positive constants. we consider an estimator for the quadratic functional based on the normalized statistic, defined as $$\tilde{q}_n := \binom{n}{2}^{-1} \frac{n_n(\varepsilon)}{b(\varepsilon)},$$ and let $\tilde{h}_n := -\log \tilde{q}_n$ be the corresponding estimator for the entropy (a numerical sketch of these estimators is given at the end of this section). the asymptotic behavior of $\tilde{q}_n$ and $\tilde{h}_n$ depends on the rate of decrease of $\varepsilon(n)$. in the following theorem for consistency, we give two versions for different asymptotic rates of $\varepsilon(n)$, with significantly weaker distribution assumptions in _(ii)_. [th:p] _(i)_ if a holds and the stated rate condition is satisfied, then $\tilde{q}_n$ is consistent. _(ii)_ under boundedness of the joint densities and a weaker rate condition, consistency still holds. next we show asymptotic normality properties for the estimators $\tilde{q}_n$ and $\tilde{h}_n$ when $n \to \infty$ and $\varepsilon \to 0$ vary accordingly. [th:prelim] suppose that a holds and the variance characteristic is positive. _(i)_ under the stated rate conditions, the normalized estimator is asymptotically standard normal. _(ii)_ an analogous statement holds under the alternative scaling. to evaluate the quadratic functional and the entropy, we introduce smoothness conditions for the marginal density. denote by $h_2^{(\alpha)}(c)$, $0 < \alpha \leq 1$, a linear space of functions in $\mathbb{r}^d$ satisfying an $\alpha$-hölder condition in $l_2$-norm with constant $c$. note that this holds, e.g., if the density is dominated by a suitable function. there are different ways to define the density smoothness, e.g., by the conventional or pointwise hölder conditions (leonenko and seleznjev, 2010, källberg et al., 2012) or the fourier characterization (giné and nickl, 2008).
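for illustration, the pair-counting estimators $\tilde{q}_n$ and $\tilde{h}_n$ can be sketched as follows; this is a plain python transcription of the definitions above under our notational assumptions, not code from the paper.

```python
import numpy as np
from math import gamma, log, pi
from scipy.spatial.distance import pdist

def renyi2_estimate(sample, eps):
    """U-statistic estimators of the quadratic functional q2 and the
    quadratic Rényi entropy h2* from the number of eps-close pairs:
        q_tilde = N / ( C(n,2) * b(eps) ),   h_tilde = -log(q_tilde),
    where b(eps) is the volume of a d-ball of radius eps."""
    x = np.asarray(sample, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    n_close = np.count_nonzero(pdist(x) <= eps)      # the statistic N
    ball_vol = pi ** (d / 2) / gamma(d / 2 + 1) * eps ** d
    q_tilde = n_close / (n * (n - 1) / 2 * ball_vol)
    h_tilde = -log(q_tilde) if q_tilde > 0 else float('inf')
    return q_tilde, h_tilde
```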
the rate of convergence in probability can now be described in terms of the smoothness of the density. [th:cons] let a hold and assume the density smoothness condition. _(i)_ then the bias satisfies the stated bound. _(ii)_-_(iii)_ under the stated rate choices for $\varepsilon(n)$, the corresponding convergence rates in probability follow. *remark 3.* since the aim of this paper is to provide asymptotic properties for estimation under dependence, we leave questions regarding efficiency and optimality of the obtained convergence rates for further research. nevertheless, it could be mentioned that, in the independent one-dimensional case ($m = 0$, $d = 1$), bickel and ritov (1988) show that the rates in theorem [th:cons] are optimal in a certain sense (see also laurent, 1996, giné and nickl, 2008). in order to make the normal limit results of theorem [th:prelim] practical, e.g., to calculate approximate confidence intervals, the asymptotic variances have to be estimated. in particular, we need a consistent estimate of the dependence characteristic. assuming that the dependence order $m$ is exactly known might be too strong in our non-parametric setting. however, under the less restrictive assumption that a bound for $m$ is known, we can use a consistent estimator. to construct this estimator, consider the following estimator of the cubic functional $q_3$, built from suitably separated index triples. [std] if a holds and the stated rate condition is satisfied, then the estimator is consistent. *remark 4.* under the conditions of proposition [std], we obtain in particular a consistent estimator of the _cubic_ rényi entropy $h_3^*$. a consistent plug-in estimator for the asymptotic variance can be set up accordingly, where it is assumed that the tuning sequence satisfies the stated rate condition. now we construct asymptotically pivotal quantities by using theorem [th:prelim], the smoothness of the marginal density, and variance estimators. to achieve the parametric rate of convergence, an upper bound for the smoothness has to be available. [th:norm1] let a hold and assume the stated smoothness and rate conditions; then the studentized estimator is asymptotically standard normal. next we apply theorem [th:prelim]_(ii)_ to weaken the smoothness condition and therefore get asymptotic normality for less smooth cases. additionally, asymptotically pivotal quantities can be built even without a bound available for the smoothness. note, however, that the obtained rate of convergence is then slower. [th:norm2] let a hold and assume the stated conditions. _(i)_-_(ii)_ the corresponding studentized estimators are asymptotically standard normal. the practical applicability of the results in this paper relies on an accurate choice of the parameter $\varepsilon$. one possibility is to use the cross-validation techniques for choosing the optimal bandwidth in density estimation, see, e.g., hart and vieu (1990). however, the problem of finding a suitable $\varepsilon$ is a topic for future research. in the following examples, let $\{z_i\}$ be a sequence of i.i.d. standard normal variables. *example 2.* we consider estimation of the quadratic rényi entropy for the 2-dependent moving average ma(2) process generated by $\{z_i\}$. we simulate independent and normalized residuals with the stated choices of sample size and $\varepsilon$. the histogram and normal quantile plot in figure [fig1] illustrate the performance of the normal approximation implied by theorem [th:norm1]. the p-value (0.60) of the kolmogorov-smirnov test also supports the hypothesis of standard normality for the residuals. [figure fig1: 2-dependent ma time series; standard normal approximation for the normalized residuals.] *example 3.* consider an exponentiated version of the gaussian sequence, i.e., a 1-dependent log-normal sequence.
in this case the quadratic entropy can be computed explicitly. figure [fig3] shows the accuracy of the normal approximation in theorem [th:norm1] for the residuals, with the stated choices of sample size and $\varepsilon$. the histogram, normal quantile plot, and p-value (0.36) of the kolmogorov-smirnov test imply that the normality hypothesis cannot be rejected. [figure fig3: 1-dependent log-normal sequence; standard normal approximation for the normalized residuals.] *example 4.* estimation of the quadratic functional for a 1-dependent cauchy sequence, where the marginal is a cauchy distribution. we simulate residuals with the stated choices of sample size and $\varepsilon$. figure [fig2] illustrates the performance of the normal approximation indicated by theorem [th:norm2]. the histogram, normal quantile plot, and p-value (0.47) of the kolmogorov-smirnov test allow us to accept the hypothesis of standard normality. [figure fig2: 1-dependent cauchy sequence; standard normal approximation for the normalized residuals.] _$\varepsilon$-keys in time series databases._ let a time series database be a matrix with random records (or tuples) and attributes, with continuous tuple distribution with density $f$. in contrast to conventional static databases, the ordering of records in the database is significant, i.e., a timestamp can be associated as an additional attribute to each record. for example, time series databases are used for modelling stock market, environmental (e.g., weather) or web usage data (see, e.g., last et al., 2001). then the database can be considered as a sample from a vector time series. assume additionally that it is a stationary $m$-dependent time series. a subset of attributes is called an $\varepsilon$-_key_ if there are no sub-records that are $\varepsilon$-close in these attributes. the distribution of the number of $\varepsilon$-close sub-records characterizes the capability of the subset to distinguish records in the database and can be used to measure the complexity of a database design for further optimization, e.g., for optimal $\varepsilon$-key selection or for searching dependencies between attributes (or _association rules_) (see, e.g., thalheim, 2000, seleznjev and thalheim, 2008, leonenko and seleznjev, 2010). now theorem [th:nn2]_(ii)_ gives an approximation of the probability that a subset is an $\varepsilon$-key; i.e., asymptotically optimal $\varepsilon$-key candidates are amongst the attribute sets with minimal value of the quadratic functional, and the corresponding estimators of $q_2$ are applicable with various asymptotics for $n$ and $\varepsilon$ (remark 2 and theorems [th:p], [th:cons], [th:norm1], and [th:norm2]). _entropy maximizing distributions for stationary $m$-dependent sequences._ note that the conditions for consistency of our estimate of the quadratic rényi entropy (see theorem [th:p]_(ii)_) are rather weak and can be easily verified for many statistical models. hence, one can use these consistent estimators to build goodness of fit tests based on the maximum entropy principle, see, e.g., goria et al. (2005) and leonenko and seleznjev (2010), where similar approaches were proposed for the shannon and rényi entropies, respectively. let us recall some known facts about the maximum entropy principle, see, e.g., johnson and vignat (2007).
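in the spirit of example 2, a small simulation can check the estimator on a 2-dependent ma sequence; the weights, sample size and $\varepsilon$ are our illustrative choices (not the values behind the paper's figures), and `renyi2_estimate` refers to the sketch given earlier.

```python
import numpy as np

rng = np.random.default_rng(42)
n, eps = 2_000, 0.05
z = rng.standard_normal(n + 2)
# 2-dependent moving average with unit marginal variance: marginal N(0, 1)
y = (z[2:] + z[1:-1] + z[:-2]) / np.sqrt(3.0)

q_hat, h_hat = renyi2_estimate(y, eps)
q_true = 1.0 / (2.0 * np.sqrt(np.pi))   # q2 of the N(0, 1) density
print(q_hat, q_true)                    # estimate vs true quadratic functional
print(h_hat, -np.log(q_true))           # estimate vs true quadratic entropy
```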
consider the following maximization problem: given a symmetric positive definite matrix, among all densities with the given mean and covariance constraint, the quadratic rényi entropy is uniquely maximized by a distribution from the class of multivariate pearson type ii distributions (or _student-type distributions_); for all other densities with the same support, mean, and covariance matrix, see ([me0]), the entropy is strictly smaller, with equality if and only if the density coincides with the maximizer almost everywhere with respect to the lebesgue measure in $\mathbb{r}^d$, and the quadratic rényi entropy of the maximizing distribution is available in closed form. for an i.i.d. sample, a goodness of fit test based on the maximum quadratic rényi entropy principle was proposed by leonenko and seleznjev (2010). to generalize this test to $m$-dependent data, we need to show that there exists a stationary $m$-dependent sequence with marginal distribution ([me1]). for one-dimensional processes, one can apply some results from joe (1997); henceforth, we use some definitions and notation from this book. it is known that for continuous multivariate distributions, the univariate marginals and the multivariate dependence structures can be separated by a copula. let $c(u, v)$ be a bivariate copula with conditional distribution $c(v \mid u)$, and denote the inverse conditional distribution by $c^{-1}(\cdot \mid u)$. let $g$ be a continuous univariate distribution function and let $\{u_i,\, i \geq 1\}$ be a sequence of i.i.d. uniformly distributed random variables. a 1-dependent sequence with marginal distribution $g$ is then $\{x_i\}$ with $x_i := g^{-1}\big(c^{-1}(u_i \mid u_{i+1})\big)$. we now turn to the proofs, where a suitable set of index subsets is considered. first we prove c2). denote by $r_2$, $r_3$, and $r_4$ the numbers of terms of types 2, 3, and 4, respectively, defined in proposition [prop:nn1]_(i)_. furthermore, let the corresponding counts be the numbers of these terms that also appear in the restricted sum. we first obtain a lower bound for the type-2 count. assume without loss of generality the stated ordering. there is a stated number of elements that also satisfy the separation constraint. for every element of this type, an index with the required property can be chosen in 4 different ways, and for each such alternative the remaining index can be chosen in at least the stated number of different ways, so the count is bounded from below accordingly. further, by proposition [prop:nn1]_(ii)_, lemma [lemma]_(i)_, and the stated limits, we get a lower bound with some positive constant. by a similar argument, we have lower bounds for the type-3 and type-4 counts, such that, for some positive constants, the corresponding inequalities hold. now, from these bounds and proposition [prop:nn1]_(ii)_, and since the centered term has zero mean, c2) is implied by chebyshev's inequality. next we prove c1). let the normalizing quantity be defined piecewise, with $$\frac{k^3}{(k+m)^3}\, \frac{\zeta^{(k)}_{1,m}}{\zeta_{1,m}}$$ in the case $a = \infty$ (and analogously for finite $a$). note that c1) follows if the corresponding limit can be verified. to prove this, we apply the corresponding result of jammalamadaka and janson (1986) for independent samples. in fact, if we introduce pooled random vectors, then the $m$-dependence of the original sequence implies that the pooled sequence is _independent_. thus the statistic can be represented as a $u$-statistic with respect to the independent pooled sample. furthermore, the scaling condition and lemma [lemma] give the required asymptotics. moreover, jammalamadaka and janson (1986) show the corresponding convergence, and hence the stationarity of the sequence gives the claim; the assumptions imply that the conditions of theorem 2.1 in jammalamadaka and janson (1986) are satisfied. consequently, using proposition [prop:nn1]_(ii)_ and the definition, it is straightforward to show that the desired limit follows from the slutsky theorem. this completes the proof.
_ proof of theorem [ th : p ] .( i ) _ from lemma lemma , proposition [ prop : nn1]_(i ) _ , and the condition , hence , the assertion follows._(ii ) _ in order to avoid condition , we repeat the argument of proposition [ prop : nn1]_(i ) _ with the convergence rates in lemma [ lemma]_(ii)-(iii ) _ replaced by the weaker limits which follow from the stationarity and -dependence of and since .first we obtain from that moreover , if we use in place of lemma [ lemma ] in the derivation of , , , and thus finally , it follows that from , the last statement in , and the condition , so the claim holds true .this completes the proof . _ proof of theorem [ th : prelim ] .( i ) _ let where , by proposition [ prop : nn1]_(i ) _ and the assumption , furthermore , from proposition [ prop : nn1]_(ii ) _ , finally , combining , , , theorem [ th : nn2]_(iii ) _ , and the slutsky theorem gives the assertion for .the statement about follows from proposition 2 in leonenko and seleznjev ( 2010 ) . + _( ii ) _ the details are omitted , since the argument is similar to that of _( i ) _ , using the decomposition corresponding to with the -scaling .this completes the proof . _ proof of theorem [ th : cons ] .( i ) _ as in leonenko and seleznjev ( 2010 ) , the density smoothness condition yields this bound and proposition [ prop : nn1]_(i ) _ imply the assertion _ ( ii ) _ note that , by the assumptions and , and hence , from , further , since , we get from _( i ) _ and that the bias fulfills consequently , for some and any , and the desired convergence for follows .moreover , combining this with proposition 2 in leonenko and seleznjev ( 2010 ) proves the statement for . + _( iii ) _ the argument is similar to that of _ ( ii ) _ and therefore is left out .this completes the proof . _ proof of proposition [ std ] ._ first we study the expectation of . by lemma [ lemma : lee ] ,the number of 3-tuples that satisfy , is .furthermore , observe that of the elements are permutations of .for the corresponding variables , and are mutually independent and also independent of , so for these we obtain thus , from lemma [ lemma ] and the assumption , next we consider the variance of . using the notation , we have we count the number of terms in this sum that are zero .lemma lemma : lee implies that the number of 6-tuples with , is .each such 6-tuple can be divided and permuted into pairs .the -dependence of yields that the corresponding random variables are independent , and hence at least summands in are zero . for each of the non - zero terms, so lemma [ lemma]_(iii ) _ gives that the sum of the non - zero terms in is .combining this with the condition , we get finally , from and it follows that , which completes the proof . _ proof of theorem [ th : norm1 ] ._ the argument is similar to that of theorem 6 in leonenko and seleznjev ( 2010 ) , so we show the main steps only . from the decomposition we see that the assertion for is implied by the slutsky theorem if and .the asymptotic normality follows straight away from theorem [ th : prelim]_(i)_. furthermore , the conditions for and together with bound lead to the desired convergence of .finally , proposition 2 in leonenko and seleznjev ( 2010 ) proves the claim for .this completes the proof . 
_ proof of theorem [ th : norm2 ] .( i ) _ we use the decomposition corresponding to : note that the condition gives and , so the asymptotic normality follows from theorem [ th : prelim]_(ii ) _ and the slutsky theorem .further , the assumptions , , and bound imply since .thus , from , , , and the slutsky theorem , we obtain the statement for .the assertion for follows by an argument similar to that of proposition 2 in leonenko and seleznjev ( 2010 ) .+ + _ ( ii ) _ the argument follows the same steps as that of _( i ) _ and consequently is omitted .this completes the proof . anderson , t.w .( 1971 ) , _ the statistical analysis of time series _ , new york : john wiley and sons .barbour , a.d . , holst , l. , janson , s. ( 1992 ) , _ poisson approximation _, oxford : oxford university press .baryshnikov , y. , penrose , m.d ., yukich , j.e .( 2009 ) , gaussian limits for generalized spacings , _ ann ._ , 19 , 158185 .bickel , p.j . and ritov , y. ( 1988 ) ,estimating integrated squared density derivatives : sharp best order of convergence estimates , _ sankhy : the indian journal of statistics _ , series a , 381393 .billlingsley , p. ( 1995 ) , _ probability and measure _ , new york : wiley .escolano , f. , suau , p. , bonev , b. ( 2009 ) , _ information theory in computer vision and pattern recognition _ , new york : springer .gin , e. , nickl , r. ( 2008 ) , a simple adaptive estimator for the integrated square of a density , _ bernoulli _ , 14 , 4761 .goria , m.n ., leonenko , n.n . ,mergel , v.v . ,inverardi , p.l.n .( 2005 ) , a new class of random vector entropy estimators and its applications in testing statistical hypotheses , _ j. nonparam ._ , 17 , 277297 .gregorio , a. , iacus , s.m .( 2009 ) , on rnyi information for ergodic diffusion processes , _ inform ._ , 179 , 279291 .harrelson , d. , houdr , c. ( 2003 ) , a characterization of -dependent stationary infinitely divisible sequences with applications to weak convergence , _ ann ._ , 31 , 849881 . hart , j.d . , vieu , p. ( 1990 ) , data - driven bandwidth choice for density estimation based on dependent data , _ ann ._ , 18 , 873890 .jammalamadaka , s.r . ,janson , s. ( 1986 ) , limit theorems for a triangular scheme of -statistics with applications to inter - point distances , _ ann ._ , 14 , 1347 - 1358 .johnson , o. , vignat c. ( 2007 ) , some results concerning maximum rnyi entropy distributions , _ ann .. h. poincar probab ._ , 43 , 339351 .joe , h. ( 1997 ) , _ multivariate models and dependence concepts _ , london : chapman and hall .kapur , j.n .( 1989 ) , _ maximum - entropy models in science and engineering _, new york : wiley .kapur , j.n . ,kesavan , h.k .( 1992 ) , _ entropy optimization principles with applications _ ,new york : academic press .kim , t.y . , luo , z.m . ,kim , c. ( 2011 ) , the central limit theorem for degenerate variable -statistics under dependence , _ j. nonparam ._ , 23 , 683 - 699 .koroljuk , v.s . , borovskich , y.v .( 1994 ) , _ theory of -statistics _ , dordrecht : kluwer .kotz , s. , balakrishnan , n. , johnson , n.l .( 2000 ) , _ continuous multivariate distributions : models and applications _ , new york : wiley .kllberg , d. , seleznjev , o. ( 2012 ) , estimation of entropy - type integral functionals , preprint arxiv:1209.2544 .kllberg , d. , leonenko , n. , seleznjev , o. ( 2012 ) , statistical inference for rnyi entropy functionals , _ lecture notes in comput ._ , 7260 , 36 - 51 .last , y. , klein , m. , kandel , a. 
( 2001 ) , knowledge discovery in time series databases , _ ieee trans . systems . man . and cybernetics - part b _ , 31 , 160169 .laurent , b. ( 1996 ) , efficient estimation of integral functionals of a density , _ ann ._ , 24 , 659681 .lee , a.j .( 1990 ) , _ -statistics : theory and practice _, new york : marcel dekker .leonenko , n. , pronzato , l. , savani , v. ( 2008 ) , a class of rnyi information estimators for multidimensional densities , _ ann ._ , 36 , 2153 - 2182 .corrections , ( 2010 ) , _ ann ._ , 38 , 3837 - 3838 .leonenko , n. , seleznjev , o. ( 2010 ) , statistical inference for the -entropy and the quadratic rnyi entropy , _ j. multivariate anal ._ , 101 , 1981 - 1994 .neemuchwala , h. , hero , a. , carson , p. ( 2005 ) , image matching using alpha - entropy measures and entropic graphs , _ signal processing _ , 85 , 277 - 296 .pardo , l. ( 2006 ) , _ statistical inference based on divergence measures _ , boca raton : chapman & hall .penrose , m. , yukich , j.e .( 2011 ) , limit theory for point processes in manifolds , _ annals of applied probability _ , to appear , see also preprint arxiv:1104.0914v1 .principe , j.c .( 2010 ) , _ information theoretic learning _ , new york : springer .rnyi , a. ( 1970 ) , _ probability theory _, amsterdam : north - holland .seleznjev , o. , thalheim , b. ( 2003 ) , average case analysis in database problems , _ methodol ._ , 5 , 395 - 418 .seleznjev , o. , thalheim , b. ( 2010 ) , random databases with approximate record matching , _ methodol ._ , 12 , 6389 . serfling , r.j . ( 2002 ) , _ approximation theorems of mathematical statistics _ , new york : wiley .shannon , c.e .( 1948 ) , a mathematical theory of communication , _ bell syst . tech ._ , 27 , 379 - 423 , 623656 .thalheim , b. ( 2000 ) , _ entity - relationship modeling .foundations of database technology _ , berlin : springer - verlag .tsallis , c. ( 1988 ) , possible generalization of boltzmann - gibbs statistics , _ j. stat_ , 52 , 479487 .ullah , a. ( 1996 ) , entropy , divergence and distance measures with econometric applications. _ j. statist .inference _ , 49 , 137162 .vatutin , v .a. , mikhailov , v.g .( 1995 ) , statistical estimation of the entropy of discrete random variables with a large number of outcomes , _ russian math .surveys _ , 50 , 963 - 976 .wang , q. ( 1999 ) , on berry - esseen rates for -dependent -statistics , _ stat .letters _ , 41 , 123 - 130 .consider a stationary -dependent sequence with discrete -dimensional ( marginal ) distribution .we present some results on the estimation of quadratic rnyi entropy for discrete distributions and the corresponding quadratic functional where and are independent vectors with distribution . similarly to the continuous case , let define the normalized statistic be an estimator for . let be the corresponding estimator for . for , we also introduce the following estimator for , and are defined as in section sec : main . by an argument similar to that of proposition [ std ] , we get . hence , for , a consistent estimator for is given by some asymptotic properties for the estimators of and follow by combining the results of ch . 2 in lee ( 1990 ) , wang ( 1999 ) , and the slutsky theorem .
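as a concrete illustration of the coincidence-based estimators studied above, the following is a hedged one-dimensional sketch: the quadratic functional is estimated by the number of ε-close pairs, normalized by the number of pairs and the ε-ball volume (2ε in one dimension), and the quadratic rényi entropy by the negative logarithm of that estimate. the sample size, the bandwidth ε, and the gaussian test case are illustrative choices of ours; the statistic itself is the same for an m-dependent sample, only its variance changes.

```python
import numpy as np

def quadratic_functional(x, eps):
    """U-statistic estimate of q2 = int p(x)^2 dx from eps-close pairs (d = 1)."""
    n = len(x)
    dist = np.abs(x[:, None] - x[None, :])
    n_close_pairs = ((dist <= eps).sum() - n) / 2   # drop diagonal, unordered pairs
    n_pairs = n * (n - 1) / 2
    return n_close_pairs / (n_pairs * 2.0 * eps)    # 1-d ball volume is 2*eps

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000)                  # N(0,1): q2 = 1/(2*sqrt(pi))
q2_hat = quadratic_functional(x, eps=0.05)
print(q2_hat, -np.log(q2_hat))                  # estimates of q2 and of h2
print(1.0 / (2.0 * np.sqrt(np.pi)))             # true q2 ~ 0.2821
```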
the rényi entropy is a generalization of the shannon entropy and is widely used in mathematical statistics and applied sciences for quantifying the uncertainty in a probability distribution . we consider estimation of the quadratic rényi entropy and related functionals for the marginal distribution of a stationary m - dependent sequence . the u - statistic estimators under study are based on the number of ε - close vector observations in the corresponding sample . a variety of asymptotic properties for these estimators are obtained ( e.g. , consistency , asymptotic normality , poisson convergence ) . the results can be used in diverse statistical and computer science problems whenever the conventional independence assumption is too strong ( e.g. , ε - keys in time series databases , distribution identification problems for dependent samples ) . _ ams 2010 subject classification : _ 62g05 , 62g20 , 62m99 , 94a17 . _ keywords : _ entropy estimation , quadratic rényi entropy , stationary m - dependent sequence , inter - point distances , u - statistics
support vector machines ( boser et al . , 1992 ) belong to the core machine learning techniques for binary classification . given a large number of training samples characterized by a large number of features , a linear svm is often the _ go - to _ approach in many applications . a handy collection of software packages , e.g. , ` liblinear ` ( fan et al . , 2008 ) , ` pegasos ` ( shalev - shwartz et al . , 2011 ) , ` svm^perf ` ( joachims , 2006 ) , and ` scikit - learn ` ( pedregosa et al . , 2011 ) , provides practitioners with efficient algorithms for fitting linear models to datasets . finding optimal hyperparameters of the algorithms for model selection is crucial , though , for good performance at test time . a vanilla cross - validated grid search is the most common approach to choosing satisfactory hyperparameters . however , grid search scales exponentially with the number of hyperparameters , while choosing the right sampling scheme over the hyperparameter space impacts model performance ( bergstra & bengio , 2012 ) . linear svms typically require setting a single hyperparameter that equally regularizes the training loss of misclassified data . klatzer & pock ( 2015 ) propose bi - level optimization for searching several hyperparameters of linear and kernel svms , and chu et al . ( 2015 ) use warm - start techniques to efficiently fit an svm to large datasets , but both approaches explore the hyperparameter regularization space only partially . the algorithm proposed in hastie et al . ( 2004 ) builds the entire regularization path for linear and kernel svms that use a single , symmetric cost for misclassifying negative and positive data . the stability of the algorithm was improved in ong et al . ( 2010 ) by augmenting the search space of feasible event updates from a one- to a multi - dimensional hyperparameter space . in this paper , we also show that a one - dimensional path - following method can diverge to a solution that is suboptimal with respect to the kkt conditions . many problems require setting multiple hyperparameters ( karasuyama et al . , 2012 ) . they arise especially when dealing with imbalanced datasets ( japkowicz & stephen , 2002 ) and require training an svm with two cost hyperparameters asymmetrically attributed to positive and negative examples . bach et al . ( 2006 ) build a pencil of one - dimensional regularization paths for asymmetric - cost svms . on the other hand , karasuyama et al . ( 2012 ) build a one - dimensional regularization path in a multidimensional hyperparameter space . in contrast to algorithms building one - dimensional paths in higher - dimensional hyperparameter spaces , we describe a solution path algorithm that explores the entire regularization path for asymmetric - cost linear svms . hence , our path is a two - dimensional path in the two - dimensional hyperparameter space . our main contributions include : * development of the entire regularization path for the asymmetric - cost linear support vector machine ( ac - lsvm ) * algorithm initialization at an arbitrary location in the hyperparameter space * a computationally and memory efficient algorithm amenable to local parallelization . our binary classification task requires a _ fixed _ input set of training examples , where , , , to be annotated with corresponding binary labels denoting either class .
then , the objective is to learn a decision function that will allow its associated classifier ] , which are associated with constraints in .let ] .then , the dual problem takes the familiar form : the immediate consequence of applying the lagrange multipliers is the expression for the lsvm parameters yielding the decision function .the optimal solution of the dual problem is dictated by satisfying the usual karush - kuhn - tucker ( kkt ) conditions .notably , the kkt conditions can be algebraically rearranged giving rise to the following _ active sets _ : firstly , the sets cluster data points to the margin , to the left , and to the right of the margin along with their associated scores .secondly , the sets indicate the range within the space for lagrange multipliers over which is allowed to vary thereby giving rise to a convex polytope in that space . [ [ convex - polytope ] ] convex polytope + + + + + + + + + + + + + + + a unique region in satisfying a particular configuration of the set is bounded by a convex polytope . the first task in path explorationis thus to obtain the boundaries of the convex polytope . following ( hastie , 2004 ) , we obtain linear inequality constraints from : \label{eq : h_alpha0}\ ] ] ^t \label{eq : h_alphac1}\ ] ] ^t \label{eq : h_alphac2}\ ] ] \label{eq : h_l}\ ] ] \label{eq : h_r}\ ] ] where , is the orthogonal projector onto the orthogonal complement of the subspace spanned by and is the moore - penrose pseudoinverse if has full column rank .specifically , let be a matrix composed of constraints . \label{eq : h}\ ] ] then , the boundaries of the convex polytope in the space are indicated by a subset of active constraints in , which evaluate to for some ] . in the context of our algorithm , both cases ( i)(ii )are detected at when the matrix formed of constraints associated with these points either has , producing multiple events at an edge denoted by constraints that are identical up to positive scale factor , or has , producing multiple joint events at a vertex denoted by constraints that intersect at the same point .we propose the following procedure for handling both special cases .namely , when some facets close with edges having multiple events or with vertices having multiple joint events that would lead to cases ( i)(ii ) , the algorithm moves to step , as it can obtain facet updates in these special cases . however , it skips step for these particular facets .while we empirically observed that such vertices close with edges having multiple joint events , it is an open issue how to generate open edges in this case .instead , during successive layers , step augments the list of facets , edges , and vertices by the ones associated to ( i)(ii ) for indexing and relabeling them with respect to successive ones that will become replicated in further layers . in effect, our algorithm goes around these special case facets and attempts to close them by computing adjacent facets .however , the path for in these cases is not unique and remains unexplored .nevertheless , our experiments suggest that unexplored regions occupy relatively negligibly small area in the hyperparameter space .when the algorithm starts with all points in and either case ( i)(ii ) occurs at the initial layers , the exploration of the path may haltd multiple event paths referring to these cases will go to both axis , instead of to one axis and to infinity . 
the halt is due to the piecewise continuity of the ( multiple ) events . a workaround can then be to run a regular lsvm solver at a yet unexplored point , obtain the sets , and extract the convex polytope to restart the algorithm . our future work will focus on improving our tactics for special cases . we posit that one worthy challenge in this regard is to efficiently build the entire regularization path in a -dimensional hyperparameter space . [ [ computational - complexity ] ] computational complexity + + + + + + + + + + + + + + + + + + + + + + + + let be the average size of a margin set for all , and let be the average size of . then , the complexity of our algorithm is , where is the number of computations for solving ( without inverse updating / downdating ( hastie et al . , 2004 ) ) and we hide a constant factor related to the convex hull computation . however , note that typically we have . in addition , we _ empirically _ observed that ( but cf . ( gärtner et al . , 2012 ) ) , so that the number of layers approximates the dataset size . our algorithm is sequential in but parallel in . therefore , the complexity of a parallel implementation of the algorithm can drop to . finally , at each facet , it is necessary to evaluate . but then the evaluation of constraints can be computed in parallel as well . while this would further reduce the computational burden , memory transfer remains the main bottleneck on modern computer architectures . our algorithm partitions the sets , , into a _ layer_-like structure such that our two - step merging procedure requires access to objects only from layer pairs and , and not to preceding layers ( it requires access to , , objects related to the special cases above even after layers , but the number of these objects is typically small ) . in effect , the algorithm only requires memory to cache the sets at , where and are the average edge and vertex subset sizes of and , respectively . in this section , we evaluate our ac - lsvmpath algorithm described in section [ sec : algo ] . we conduct three numerical experiments for exploring the two - dimensional path of asymmetric - cost lsvms on synthetic data . we generate samples from a gaussian distribution for ( i ) a small dataset with a large number of features , ( ii ) a large dataset with a small number of features , and ( iii ) a moderate size dataset with a moderate number of features . we also build the two - dimensional regularization path when the input features are sparse ( iv ) . we use the off - the - shelf algorithm for training the flexible part mixtures model ( yang & ramanan , 2013 ) , which uses positive examples from the parse dataset and negative examples from the inria person dataset ( dalal & triggs , 2006 ) . the model is iteratively trained with hundreds of positive examples and millions of hard - mined negative examples . we keep the original settings . the hyperparameters are set to and to compensate for imbalanced training ( akbani et al . , 2004 ) . for experiments ( i ) ( iv ) , we have the following settings : ( i ) , , , ( ii ) , , , ( iii ) , , , ( iv ) , , . we set in all experiments , as in ( yang & ramanan , 2013 ) . the results are shown in fig . 1 and fig . 2 . this work proposed an algorithm that explores the entire regularization path of asymmetric - cost linear support vector machines . the events of data concurrently projecting onto the margin are usually considered as special cases when building one - dimensional regularization paths , while they happen repeatedly in the two - dimensional setting .
to this end , we introduced the notion of joint events and illustrated the set update scheme with the vertex loop property to efficiently exploit their occurrence during our iterative path exploration . moreover , as we structure the path into successive layers of sets , our algorithm has modest memory requirements and can be locally parallelized at each layer of the regularization path . finally , we posit that extending our algorithm to the entire -dimensional regularization path would facilitate processing of further special cases . hsieh , c. j. , chang , k. w. , lin , c. j. , keerthi , s. s. , sundararajan , s. ( 2008 ) . a dual coordinate descent method for large - scale linear svm . in _ proceedings of the 25th international conference on machine learning _ , 408 - 415 . pedregosa , f. , varoquaux , g. , gramfort , a. , michel , v. , thirion , b. , grisel , o. , et al . , duchesnay , e. ( 2011 ) . scikit - learn : machine learning in python . _ the journal of machine learning research _ , 12 , 2825 - 2830 . chu , b. y. , ho , c. h. , tsai , c. h. , lin , c. y. , lin , c. j. ( 2015 ) . warm start for parameter selection of linear classifiers . _ acm sigkdd international conference on knowledge discovery and data mining _ , 149 - 158 .
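as a practical reference point for the asymmetric-cost setting above, and for the grid-search baseline that path algorithms are designed to avoid, the following is a minimal sketch using scikit-learn; liblinear exposes the two costs through per-class weights, so the pair (c+, c-) corresponds to class_weight scaled by c. the dataset, the grid, and the scoring rule are our illustrative assumptions, not the paper's experimental protocol. note that the 9 x 9 grid already costs 81 fits per fold, the quadratic blow-up a two-dimensional path algorithm sidesteps.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Imbalanced toy problem: class 1 is the rare positive class.
X, y = make_classification(n_samples=400, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
best = None
for c_pos in np.logspace(-2, 2, 9):
    for c_neg in np.logspace(-2, 2, 9):
        # Effective per-class costs are C * class_weight[class].
        clf = LinearSVC(C=1.0, class_weight={1: c_pos, 0: c_neg},
                        max_iter=10_000)
        score = cross_val_score(clf, X, y, cv=5,
                                scoring="balanced_accuracy").mean()
        if best is None or score > best[0]:
            best = (score, c_pos, c_neg)
print("best balanced accuracy %.3f at C+=%.3g, C-=%.3g" % best)
```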
we propose an algorithm for exploring the entire regularization path of asymmetric - cost linear support vector machines . empirical evidence suggests that the predictive power of support vector machines depends on the regularization parameters of the training algorithms . algorithms that explore the entire regularization path have been proposed for single - cost support vector machines , thereby providing complete knowledge of the behavior of the trained model over the hyperparameter space . considering the problem in the two - dimensional hyperparameter space , however , enables our algorithm to maintain greater flexibility in dealing with special cases and sheds light on problems encountered by algorithms building the paths in one - dimensional spaces . we demonstrate two - dimensional regularization paths for linear support vector machines that we train on synthetic and real data .
suppose that we have independent observations from a probability density that belongs to a parametric model , where is an unknown -dimensional parameter and is the parameter space .the random variable to be predicted is independently distributed according to a density in a parametric model , possibly different from , with the same parameter .the objective is to construct a predictive density for by using .the performance of is evaluated by the kullback leibler divergence from the true density to the predictive density .the risk function is given by = \iint p(x^n \mid \theta ) { \displaystyle \tilde{p}}(y \mid \theta ) \log \frac{{\displaystyle \tilde{p}}(y \mid \theta)}{\hat{p}(y;x^n ) } { \mbox{d}}y { \mbox{d}}x^n.\ ] ] it is widely recognized that plug - in densities constructed by replacing the unknown parameter by an estimate may not perform very well and that bayesian predictive densities constructed by using a prior perform better than plug - in densities . if the value of is given , there is no specific meaning of considering the conditional density of given since the obvious relation holds .however , if is unknown , bayesian predictive densities constructed by introducing a prior density on the parameter space are useful to approximate the true density as discussed in and .in fact , there exists a predictive density whose asymptotic risk is smaller than that of a plug - in density unless the mean mixture curvature of the model manifold vanishes , see and for details .the choice of becomes important especially when the sample size is not very large .although the jeffreys prior is a widely known default prior , it does not perform satisfactorily especially when the unknown parameter is multidimensional as jeffreys himself pointed out . constructed a bayesian predictive density incorporating the advantage of shrinkage methods for the multivariate normal model .see also for useful results for the normal model . in the conventional setting in which the distributions of , , and arethe same , asymptotic theory of prediction based on general parametric models has been studied by using the framework of information geometry , see . 
in information geometry ,a parametric statistical model is regarded as a differentiable manifold , which we call the model manifold , and the parameter space is regarded as a coordinate system of the manifold , see .the fisher rao metric is a riemannian metric based on the fisher information matrix on the model manifold .the jeffreys prior corresponds to the volume element of the model manifold associated with the fisher rao metric .when the distributions of , , and are the same , the asymptotic difference between the risks of and is given by \notag\\ = & \frac{\displaystyle \delta \left(\frac{\pi}{{\pi_\mathrm{j}}}\right)}{\displaystyle \left ( \frac{\pi}{{\pi_\mathrm{j } } } \right ) } - \frac{1}{2 } \sum_{i=1}^d \sum_{j=1}^d g^{ij } \frac{\displaystyle \partial_i \left(\frac{\pi}{{\pi_\mathrm{j}}}\right ) \partial_j \left(\frac{\pi}{{\pi_\mathrm{j}}}\right ) } { \displaystyle \left(\frac{\pi}{{\pi_\mathrm{j}}}\right)^2 } + { \mathrm{o}}(1 ) = 2 \frac{\displaystyle \delta \left(\frac{\pi}{{\pi_\mathrm{j}}}\right)^{\frac{1}{2 } } } { \displaystyle \left ( \frac{\pi}{{\pi_\mathrm{j } } } \right)^{\frac{1}{2 } } } + { \mathrm{o}}(1 ) , \label{main0}\end{aligned}\ ] ] where denotes , , denotes the -element of the inverse of the matrix , and is the laplacian , see .the laplacian on a riemannian manifold endowed with a metric is defined by where is the determinant of the matrix , is a smooth real function on , and denotes the covariant derivative , defined in the next section .the indices run from to .note that both the definition of the laplacian and the definition that differs in sign are widely adopted in the mathematics literature , although it is confusing .because of , if there exists a non - constant positive superharmonic function , i.e. a non - constant positive function satisfying for every , on the model manifold , then the bayesian predictive density based on the prior density defined by asymptotically dominates that based on the jeffreys prior . here , the riemannian geometric structure of the model manifold based on the fisher rao metric plays a fundamental role . in practical applications , it often occurs that observed data , , and the target variable to be predicted have different distributions .regression models are a typical example .suppose that we observe , where is a given matrix , and predict , where is a given matrix and is an unknown parameter .then , the fisher information matrices for the same parameter based on and are different .similar situations also occur in nonlinear regression problems . and showed that shrinkage priors are useful for constructing bayesian predictive densities for linear regression models when the observations are normally distributed with known variance .however , it has been difficult to construct useful priors for general models other than the normal models when and have different distributions . 
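before the general theory, the phenomenon can be checked numerically in the simplest case where everything is available in closed form: normal observations with known variance and a normal target with a possibly different known variance. the sketch below is ours, with arbitrary sample size and variances; it compares the kullback leibler risk of the plug-in density with that of the bayesian predictive density under the uniform prior, which in this case is normal with the posterior-predictive variance.

```python
import numpy as np

def kl_normal(mu0, v0, mu1, v1):
    """KL( N(mu0, v0) || N(mu1, v1) )."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (mu0 - mu1) ** 2) / v1 - 1.0)

n, s2, s2t = 10, 1.0, 2.0                # n obs with variance s2; target var s2t
rng = np.random.default_rng(2)
err = rng.normal(scale=np.sqrt(s2 / n), size=200_000)   # xbar - theta

# Plug-in density N(xbar, s2t) vs Bayesian predictive N(xbar, s2t + s2/n).
risk_plugin = kl_normal(0.0, s2t, err, s2t).mean()
risk_bayes = kl_normal(0.0, s2t, err, s2t + s2 / n).mean()

print(risk_plugin, s2 / (2 * n * s2t))                  # exact: 0.025
print(risk_bayes, 0.5 * np.log1p(s2 / (n * s2t)))       # exact: ~0.0244 < 0.025
```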
in the present paper ,we study asymptotic theory for the setting in which , , and have different distributions .although several asymptotic properties of predictive distributions for such a setting are studied by , the result corresponding to has not been explored .the generalization is not straightforward because two different differential geometric structures , one for and the other for , such as the fisher rao metrics exist in the present setting .we introduce a new metric , which we call the predictive metric , depending on both and .the predictive metric and the volume element of it correspond to the fisher rao metric and the jeffreys prior in the conventional setting . in section 2, we obtain an expansion of the difference of the risk functions of bayesian predictive densities . each term in the expansion is represented by using geometrical quantities and is invariant with respect to parameter transformations . in section 3 , we introduce the predictive metric and evaluate the asymptotic risk difference between a bayesian predictive density based on a prior and that based on the volume element prior of the predictive metric .the asymptotic risk difference is represented by using the laplacian associated with the predictive metric . in section 4 ,we consider three examples and construct superior priors by using the formula obtained in section 3 .first , we prepare several information geometrical notations to be used . in the following ,the quantities associated with the model are denoted without tilde , and those associated with the model are denoted with tilde .we put and .the fisher rao metrics on the model manifolds and are given by respectively .the -elements of the inverses of the matrices and are denoted by and , respectively .we define and here , are the e - connection coefficients , are the m - connection coefficients , and are the riemannian connection coefficients .the relations represent the duality between and with respect to the metric , and the duality between and with respect to the metric , respectively .covariant derivatives , , and of a vector field with respect to the connection coefficients , , and are defined by , , and , respectively , where , , and . in the same way , the covariant derivatives , , and , with respect to the connection coefficients , , and , are defined .theorem [ riskdiff ] below is used in the following sections .[ riskdiff ] the difference between the risk functions of bayesian predictive densities and based on priors and , respectively , is given by \nonumber \\ = & \left ( \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } { u_\pi}^i { u_\pi}^j + \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } { \widetilde{\nabla}_{k}^{(\mathrm{e } ) } } { u_\pi}^i \right ) \notag \\ & - \left ( \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } { u_{\pi'}}^i { u_{\pi'}}^j + \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } { \widetilde{\nabla}_{k}^{(\mathrm{e } ) } } { u_{\pi'}}^i \right ) + \mathrm{o}(1 ) , \label{6 - 2}\end{aligned}\ ] ] where the proof of theorem [ riskdiff ] is given in the appendix .in this section , we introduce a new metric defined by which we call the predictive metric . since is positive definite , it can be adopted as a riemannian metric on .it will be shown that the predictive metric , the corresponding volume element and the laplacian based on play essential roles corresponding to those played by the fisher rao metric , the jeffreys prior , and the laplacian based on in the conventional setting where . 
here , , , and denote determinants of matrices , , and , respectively .the -element of the inverse of the matrix is given by . here, we give an intuitive meaning of the predictive metric by a nonrigorous argument . in the standard estimation theory , the fisher - rao metric , which is the fisher information matrix , corresponds to the inverse of the asymptotic variance of the maximum likelihood estimator . in the setting we consider ,the asymptotic variance of the maximum likelihood estimator based on is , where is the matrix , and the asymptotic variance of the maximum likelihood estimator based on both of and is , where is the matrix .the inverse of the reduction of the asymptotic variance by observing in addition to are given by , as we see in example 1 in section 4 , corresponding to the predictive metric .the riemannian connection coefficients with respect to the predictive metric are given by and we put .then , in the same way , we have thus , the laplacian with respect to the predictive metric is defined by where , and is a real smooth function on . by using these quantities ,we obtain the following theorem corresponding to in the conventional setting .[ maintheorem ] the difference between the risk functions of bayesian predictive densities based on a and based on is given by \notag\\ = & \frac{\displaystyle { \mathring{\delta}}\left(\frac{\pi}{{\pi_\mathrm{p}}}\right)}{\displaystyle \left ( \frac{\pi}{{\pi_\mathrm{p } } } \right ) } - \frac{1}{2 } \sum_{i , j } { \mathring{g}_{}^{ij } } \frac{\displaystyle \partial_i \left(\frac{\pi}{{\pi_\mathrm{p}}}\right ) \partial_j \left(\frac{\pi}{{\pi_\mathrm{p}}}\right ) } { \displaystyle \left(\frac{\pi}{{\pi_\mathrm{p}}}\right)^2 } + { \mathrm{o}}(1 ) = 2 \ ; \frac{\displaystyle { \mathring{\delta}}\left(\frac{\pi}{{\pi_\mathrm{p}}}\right)^{\frac{1}{2 } } } { \displaystyle \left ( \frac{\pi}{{\pi_\mathrm{p } } } \right)^{\frac{1}{2 } } } + { \mathrm{o}}(1).\label{main}\end{aligned}\ ] ] the proof of theorem [ maintheorem ] is given in the appendix .if there exists a positive constant such that , we identify the prior with because the posterior densities based on them are identical .in fact , the risk difference between and coincides with that between and .[ usefulcor ] if a positive function is superharmonic with respect to the predictive metric , i.e. for every , and the strict inequality holds at a point in , then the bayesian predictive density based on the prior density asymptotically dominates the bayesian predictive density based on the prior density .if there exists a non - constant positive superharmonic function with respect to the predictive metric , then the bayesian predictive density based on the prior density asymptotically dominates . the first statement is a straightforward conclusion from theorem [ maintheorem ] .we show the second statement .the function is superharmonic because if is a positive superharmonic function .the strict inequality holds at satisfying for any .such exists since is a non - constant function .thus , the second statement follows from the first statement . 
by setting , it follows from corollary [ usefulcor ] that the bayesian predictive density based on the prior asymptotically dominates the bayesian predictive density based on if is a non - constant positive superharmonic function .note that corollary [ usefulcor ] also holds if we replace the predictive metric with another metric satisfying with a positive constant .this is because the volume element with respect to is proportional to that with respect to and the relation holds , where is the laplacian with respect to .in this section , we see three examples . we verify that the results in the previous sections are consistent with several known results in examples 1 and 2 andobtain some new results in examples 2 and 3 .example 1 .normal models suppose that is distributed according to the -dimensional normal distribution with mean vector and covariance matrix and that is distributed according to the -dimensional normal distribution with the same mean vector and possibly different covariance matrix . here , is the unknown parameter and and are known .the fisher information matrix for is and that for is , where and are inverse matrices of and , respectively .since the coefficients of the predictive metric do not depend on , the volume element with respect to the predictive metric is which is the uniform distribution . and considered shrinkage pri - ors for this model .the bayesian predictive density dominates based on the uniform measure if is a superharmonic function on the euclidean space endowed with the metric , see theorem 3.2 in .this result holds for every positive integer . since ^{-1 }( n{g_{}^{}})^{\frac{1}{2 } } \\[2pt ] = & ( n{g_{}^{}})^{\frac{1}{2 } } \left [ i - i + ( n{g_{}^{}})^ { -\frac{1}{2 } } { \tilde{g}_{}^ { } } ( n{g_{}^{}})^ { -\frac{1}{2 } } + \mathrm{o}(n^{-2 } ) \right]^{-1 } ( n{g_{}^{}})^{\frac{1}{2 } } \\[2pt ] = & n^2 \ , { g_{}^ { } } \ , { \tilde{g}_{}^{-1 } } \ , { g_{}^ { } } + \mathrm{o}(n)\end{aligned}\ ] ] corresponds to the predictive metric , theorem [ maintheorem ] is consistent with theoretical and numerical results in and .example 2 .location - scale models suppose that and are probability densities on that are symmetric about the origin .let where and are unknown parameters .suppose that we have a set of independent observations distributed according to .the variable to be predicted is independently distributed according to .the objective is to construct a prior for a bayesian predictive density .the fisher rao metrics on the model manifolds and are respectively , where and are positive constants depending on , and and are positive constants depending on .the predictive metric is given by define by rescaling the location parameter .we call this coordinate system the upper - half plane coordinates .then , the predictive metric is represented by coinciding with the metric on the hyperbolic plane , which is a 2-dimensional complete manifold with constant sectional curvature .thus , the model manifold endowed with the predictive metric is isometric to . 
the volume element with respect to the predictive metric given by and coincides with the jeffreys priors for and for .the laplacian on the model manifold endowed with the predictive metric is given by by corollary [ usefulcor ] , the bayesian predictive density based on the prior asymptotically dominates based on because by theorem [ maintheorem ] , the asymptotic risk difference is \notag\\ = & 2 \ ; \frac{\displaystyle { \mathring{\delta}}\left(\frac{\pi_\mathrm{r}}{{\pi_\mathrm{p}}}\right)^{\frac{1}{2 } } } { \displaystyle \left ( \frac{\pi_\mathrm{r}}{{\pi_\mathrm{p } } } \right)^{\frac{1}{2 } } } + { \mathrm{o}}(1 ) = 2 \ ; \frac{{\mathring{\delta}}\sigma^{\frac{1}{2 } } } { \sigma^{\frac{1}{2 } } } + { \mathrm{o}}(1 ) = - \frac{\tilde{b}}{2 b^2 } + { \mathrm{o}}(1 ). \label{riskr}\end{aligned}\ ] ] in fact , it can be shown that the bayesian predictive density exactly dominates for finite because is the left invariant prior and is the right invariant prior with respect to the location - scale group .the bayesian procedures based on the right invariant prior dominate those based on the left invariant prior in many problems associated with group models as shown in .the prior is also derived as a reference prior , see .furthermore , as we see below , the bayesian predictive density based on the prior defined by asymptotically dominates and thus also dominates . to clarify the meaning of the prior , we introduce another coordinate system on the model manifold .let be the riemannian distance based on the predictive metric between a point and an arbitrary fixed point on .the direction of from is represented by a point on the unit circle in the tangent space at .then , the point is represented by and , see e.g. p. 152 .this coordinate system is called the geodesic polar coordinates .then , the predictive metric is given by the laplacian is represented by where is the laplacian on the unit circle in the tangent space at , see e.g. p. 158 .when the upper - half plane coordinate system is adopted , the riemannian distance between and is represented by see e.g. p. 176 .thus , in the original coordinate system , theriemannian distance between and and is thus , the ratio of prior densities is given by note that depends on only through defined by .thus , from , , and theorem [ maintheorem ] , we have \notag\\ = & 2 \ ; \frac{\displaystyle { \mathring{\delta}}\left(\frac{\pi_{c,\kappa}}{{\pi_\mathrm{p}}}\right)^{\frac{1}{2 } } } { \displaystyle \left ( \frac{\pi_{c,\kappa}}{{\pi_\mathrm{p } } } \right)^{\frac{1}{2 } } } + { \mathrm{o}}(1 ) = - \frac{\tilde{b}}{b^2 } \left\ { \frac{1}{2 } + c \frac{\pi_{c,\kappa}}{{\pi_\mathrm{p } } } + \frac{3}{2 } ( 1-c^2 ) \left(\frac{\pi_{c,\kappa}}{{\pi_\mathrm{p } } } \right)^2 \right\ } + { \mathrm{o}}(1 ) \notag \\ = & - \frac{\tilde{b}}{b^2 } \left\ { \frac{1}{2 } + c \frac{1}{\cosh \rho + c } + \frac{3}{2 } ( 1 - c^2 ) \frac{1}{(\cosh\rho + c)^2 } \right\ } + { \mathrm{o}}(1 ) , \label{riskhs}\end{aligned}\ ] ] and is smaller than when and .the asymptotic risk difference can also be derived from and the laplacian in the original coordinate system . 
by corollary [ usefulcor] , the bayesian predictive density asymptotically dominates since the function is superharmonic for .however , asymptotically dominates only when .+ { \mathrm{o}}(1 ) = -(\tilde{b}/b^2 ) \ { 1/2 + c ( \pi/{\pi_\mathrm{p } } ) + ( 3/2 ) ( 1-c^2 ) ( \pi/{\pi_\mathrm{p } } ) \}^2 ]is given by where which is a covariant vector .the expansion of the risk function of a bayesian predictive density up to the order is given in theorem [ risk - invariant ] below .the expansion is invariant in the sense that each term is a scalar not depending on parametrization . in theorem [ risk - invariant ] ,we put here , and are vectors orthogonal to the model manifolds and , respectively .these vectors are closely related to the curvature of the manifolds. expansions of the risk functions corresponding to when the distributions of , , and are the same are obtained by for curved exponential families by using differential geometrical notions and by for general models under rigorous regularity conditions . obtained several related results when when the distributions of , , and are different .the expansion is shown by lengthy calculations parallel to those in and by using the results such as , , and obtained by .the quantity is the efron curvature of the model manifold at , and is the mixture mean curvature discussed in of the model manifold at . from and, we have let and .then , from , thus , when , . from , we have - { { \rm e}}\big [ d ( { \displaystyle \tilde{p}}(y \mid \theta ) , { \displaystyle \tilde{p}}_{\mathrm{p}}(y \mid x^n ) \big ] \biggr ) \notag \\ = & \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } { u_\pi}^i { u_\pi}^j + \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } \left(\partial_k u_{\pi}^i + \sum_l { \tilde{\gamma}_{kl}^{\,(\mathrm{e})i } } u_{\pi}^l \right ) \notag \\ & - \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } u_{\mathrm{p}}^i u_{\mathrm{p}}^j - \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } \left ( \partial_k u_{\mathrm{p}}^i + \sum_l { \tilde{\gamma}_{kl}^{\,(\mathrm{e})i } } u_{\mathrm{p}}^l \right ) + { \mathrm{o}}(1 ) \notag \\ \label{riskdiff-2 } = & \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } ( \sum_k { g_{}^{ik } } \partial_k \log f + s^i + r^i ) ( \sum_l { g_{}^{jl } } \partial_l \log f + s^j + r^j ) \notag \\ & - \frac{1}{2 } \sum_{i , j } { \tilde{g}_{ij}^ { } } ( s^i + r^i ) ( s^j + r^j ) \notag \\ & + \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } \left\ { \sum_l \partial_k ( { g_{}^{il } } \partial_l \log f ) + \sum_{l , m } { \tilde{\gamma}_{kl}^{\,(\mathrm{e})i } } { g_{}^{lm } } \partial_m \log f \right\ } + { \mathrm{o}}(1 ) \notag \\ = & \frac{1}{2 } \sum_{i , j } { \mathring{g}_{}^{ij } } \partial_i \log f \partial_j \log f + \sum_{i , j , k } { \tilde{g}_{ij}^ { } } { g_{}^{ik } } ( \partial_k \log f ) ( s^j + r^j ) \notag \\ & + \sum_{i , j , k , l } { \tilde{g}_{ij}^ { } } { g_{}^{ik } } \partial_k ( { g_{}^{jl } } \partial_l \log f ) + \sum_{i , j , k , l , m } { \tilde{g}_{ij}^ { } } { g_{}^{jk } } { \tilde{\gamma}_{jk}^{\,(\mathrm{e})i } } { g_{}^{lm } } \partial_m \log f + { \mathrm{o}}(1).\end{aligned}\ ] ] let . from , it is sufficient to show that is equal to .since we have thus , from we have hence , because of the duality of the e - connection and the m - connection , is equal to berger , j. o. and bernardo , j. m. ( 1992 ) .`` on the development of reference priors ( with discussion ) . '' in bernardo , j. m. , berger , j. o. , dawid , a. p. , and smith , a. f. m. ( eds . 
) , _ bayesian statistics 4 _ , 35 - 60 . new york : oxford university press . davies , e. b. ( 1989 ) . _ heat kernels and spectral theory _ . cambridge : cambridge university press .
bayesian predictive densities when the observed data and the target variable to be predicted have different distributions are investigated by using the framework of information geometry . the performance of predictive densities is evaluated by the kullback leibler divergence . the parametric models are formulated as riemannian manifolds . in the conventional setting , in which the data and the target variable have the same distribution , the fisher rao metric and the jeffreys prior play essential roles . in the present setting , in which they have different distributions , a new metric , which we call the predictive metric , constructed by using the fisher information matrices of the data and the target variable , and the volume element based on the predictive metric play the corresponding roles . it is shown that bayesian predictive densities based on priors constructed by using non - constant positive superharmonic functions with respect to the predictive metric asymptotically dominate those based on the volume element prior of the predictive metric .
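to make the predictive metric concrete, the following sketch computes it for a gaussian linear regression design, using the normal-model relation from example 1, where the metric is, up to a factor n², g g̃^{-1} g. the designs, sizes, and normalization below are illustrative assumptions of ours; in this gaussian case the result does not depend on the parameter, so the volume-element prior is constant, in agreement with the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d, sigma2 = 50, 20, 3, 1.0
X = rng.standard_normal((n, d))            # design of the observed responses
X_new = rng.standard_normal((m, d))        # design of the responses to predict

G = X.T @ X / (n * sigma2)                 # average Fisher info per observation
G_tilde = X_new.T @ X_new / (m * sigma2)

G_pred = G @ np.linalg.inv(G_tilde) @ G    # predictive metric (up to n^2)
print(G_pred)
print(np.sqrt(np.linalg.det(G_pred)))      # volume-element prior density:
                                           # constant in theta, i.e. uniform
```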
following the suggestion of , several collaborations , notably macho and eros , began to search for gravitational microlensing towards the magellanic clouds as an indicator of compact objects in the halo of the milky way . at about the same time , the ogle collaboration began a survey in the direction of the galactic bulge .it was soon found that a much higher event rate occurred in fields towards the galactic bulge relative to the rate towards the magellanic clouds . since 1990 , approximately 1000 such events have been detected .several groups including planet ( probing lensing anomalies network , ) , mps ( microlensing planet search , ) and ( microlensing follow - up network , ) monitor events much more intensively than the survey groups in order to identify anomalous behavior that can signal the presence of a planet associated with the lens star .high - magnification events in particular ( those with ) attract the attention of follow - up groups since it is these that are most likely to give detectable planetary signals .in addition , for high magnification events the angular size of the source star may be non - negligible in comparison to the lens - source angular separation . in these cases the lightcurves of the events can provide the possibility to determine the lens - source relative proper motion and atmospheric properties of the source . in the first years of operation ,when microlensing alerts came primarily from the macho collaboration , detected event rates were low enough that planet could monitor almost all potentially interesting events with ease .for the last two years ( the 2002 and 2003 bulge seasons ) , this has not been the case , due to the much improved alert rate since the advent of the ogle iii early warning system ( ews ) , http://www.astrouw.edu.pl/~ogle/ogle3/ews/ews.html . in excess of 400 eventswere alerted by the ews in each of these years .in addition , approximately 75 events were alerted in 2003 by the moa collaboration although some of these were duplicates of ews events .we are now in an era in which a careful selection of events is necessary to optimize planet detection and exclusion productivity .for this reason , follow - up groups require accurate predictions of eventual maximum amplications in the early days following a detection . forthe remainder of this paper i will focus exclusively on events detected by the ogle iii ews .most microlensing events are well fitted by a point - source point - mass - lens ( pspl ) model for the magnification at time , the impact parameter is the angular separation between the source and lens measured in units of the angular einstein radius , where is the mass of the lens , is the observer - source distance , the observer - lens distance , the source - lens distance and is the impact parameter at , the time of maximum magnification . the einstein radius crossing time , where is the relative proper motion between the lens and source .the lightcurve of a pspl event can thus be characterised by parameters , ( ) plus for each telescope + filter combinations , the unmagnified ( baseline ) magnitude of the source star and the blending parameter , where is the fraction of blended ( non - lensed ) light . 
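the pspl model above translates directly into code. the following minimal sketch is ours, with parameter values invented for illustration: it evaluates the magnification a(u) with u(t) = (u_0² + ((t - t_0)/t_e)²)^{1/2}, and the observed magnitude including a blend fraction f_b of unlensed light at baseline.

```python
import numpy as np

def pspl_magnification(t, t0, tE, u0):
    """Point-source point-mass-lens magnification A(u)."""
    u = np.hypot(u0, (t - t0) / tE)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def observed_magnitude(t, t0, tE, u0, m_base, f_b=0.0):
    """Baseline magnitude m_base lensed with blend fraction f_b of the flux."""
    A = pspl_magnification(t, t0, tE, u0)
    return m_base - 2.5 * np.log10((1.0 - f_b) * A + f_b)

t = np.linspace(2452700.0, 2452900.0, 5)
print(observed_magnitude(t, t0=2452800.0, tE=30.0, u0=0.1, m_base=18.0))
print(pspl_magnification(2452800.0, 2452800.0, 30.0, 0.1))  # A_max ~ 1/u0 ~ 10
```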
the maximum magnification, frequently replaces as a parameter .the conventional method for predicting peak magnifications is to use minimization techniques to fit pspl models to data from ogle iii ( possibly supplemented by a follow - up group s own data ) as they become available .such fits are continuously updated and the subsequent predictions revised as data accumulate . experience has shown that early predictions of eventual maximum magnification using these methods systematically yield overpredictions , strongly limiting the usefulness of such estimates .in particular , very large maximum magnifications ( with large uncertainties ) are often predicted for events that turn out to be of rather low amplitude .valuable observing time is often wasted monitoring such events in order to confirm their nature .the reason for this overprediction of values for is that in using minimization for a predictive purpose , one implicitly assumes that all parameter values are equally likely .however , for microlensing events this is far from being the case . from a purely geometrical perspective ,high magnification events are exceedingly rare . in practice ,being of high magnification , they have a higher probability of detection by a survey group .it is the individual detection efficiency of a survey convolved with the intrinsic event rates ( both of these as a function of event parameters ) that determines the magnification probability , given that an event has been detected .these ideas can be given a quantitative basis in a bayesian formulation of the problem .the merits of the bayesian approach to statistical analysis have been discussed at length elsewhere and will not be reargued . herewe simply note that a bayesian formulation with appropriate priors should produce an unbiased estimate of the eventual microlensing event parameters during the rising part of a lightcurve . from bayes theorem , the probability density for a microlensing event to have a certain set of parameter values , given the fact , , that it has been detected by the ogle iii ews and that data have been acquired, where . the second term on the right hand side of equation ( 6 ) , ( known as the prior ) , is the underlying probability density for given a detection , i.e. is the probability that is in the range $ ] .it is this function that incorporates both the natural event occurrence probability and the particular parameter sensitivities of the detection system . the first term , , the likelihood function for given . data from the ogle iii survey consist of -band magnitudes and their uncertainties at time ( i.e. ) .i assume each to be drawn randomly from a normal distribution where the true value of the magnitude at time is .this implies that where and is the magnitude evaluated from the model parameterised by .analagous to a minimization , the value of that maximises ( i.e. the posterior mode ) is taken as the best estimator of , the true value of . in the absence of a prior, this solution reduces to the minimum solution .we stress here that when sufficient data are available to constrain a fit to a certain event , e.g. when the event is over , the bayesian and minimization techniques give the same parameter values and the choice of prior is largely irrelevant . 
in other words ,the solution is not driven by prior probabilities when sufficent empirical information is available ( see , chapter 2 for a discussion of this point ) .if the parameters are statistically independent quantities , factorizes as where for brevity i have omitted `` '' in the probability densities on the right hand side of the equation .it is often more convenient to work in decadic logarithmic units for several of these quantities , in which case with being an arbitrary unit of time . herei define to be the time to peak magnification from an initial `` alert date '' . for the remainder of this paperi adopt . it is worth noting that even if each parameter in is independent in , they are not independent in the likelihood function and hence not independent in .thus when fitting a model to a lightcurve , particularly when only early data are available , the fitted maximum magnification is affected not only by the prior on but also by the priors on the other other parameters .the criterion used by the ogle iii ews is that a blending parameter is only used when it is more than 3- less than unity and is larger than its formal uncertainty . in this paperi use the odds ratio test , a natural way to decide between two different models .i define the odds ratio where indicates the set of model parameters without blending ( when considering only ews data ) . having no _ a priori _ indication about whether to include blendingi choose .if we assume a unform prior probability density for in the range and zero outside this range , and assuming a gaussian probability density function for about , it can be shown ( see for instance ch 4 ) that equation ( [ oreqn1 ] ) reduces to for cases in which is more than several away from the cutoffs imposed by the prior . otherwise , for close to 1 , equation ( [ oreqn1 ] ) becomes while for close to 0 the odds ratio is then made up of two terms .the first of these represents a relative `` goodness of fit '' between the two models while the second is the `` occam penalty '' for introducing a new parameter . only when and the odds ratio is less than unity is a blending parameter used in this analysis .i have used the set of microlensing events detected by the ews in 2002 to determine the parameter priors . sinceour interest is in the set of pspl events , i have removed 41 events that showed deviations from pspl behavior from this analysis .excluded events were numbered 18 , 23 , 40 , 51 , 68 , 69 , 77 , 80 , 81 , 99 , 113 , 119 , 126 , 127 , 128 , 129 , 131 , 135 , 143 , 149 , 159 , 175 , 194 , 202 , 203 , 205 , 215 , 228 , 229 , 232 , 238 , 254 , 255 , 256 , 266 , 273 , 307 , 315 , 339 , 348 , 360 , out of the complete set of 389 alerts .parameter values have been obtained for these events using a simplex downhill method to minimise ( ews estimates of , , , can also be obtained from the ews web page ) .for we require an objective definition of an `` alert date '' that can be applied to all events .i have arbitrarily chosen a working definition of an alert date as being the date at which three successive data points have been more than 1- brighter than , the baseline magnitude . 
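schematically, the posterior-mode fit described above amounts to minimizing χ²/2 minus the log prior. the sketch below is ours, not the code used in the paper: for brevity it wires in only a t_e prior of the gaussian-in-log form fitted in the next section, uses a simplex (nelder-mead) minimizer as in the χ² fits, and all data, noise levels, and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def pspl_mag(t, p):                     # p = (t0, tE, u0, m_base), no blending
    t0, tE, u0, m0 = p
    u = np.hypot(u0, (t - t0) / tE)
    A = (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))
    return m0 - 2.5 * np.log10(A)

def neg_log_posterior(p, t, mag, sig):
    if p[1] <= 0:                       # keep t_E positive for the simplex
        return np.inf
    chi2 = np.sum(((mag - pspl_mag(t, p)) / sig) ** 2)
    log_prior = -(np.log10(p[1]) - 1.333) ** 2 / 0.330   # t_E prior, up to const
    return 0.5 * chi2 - log_prior

rng = np.random.default_rng(4)
t = np.linspace(2452750.0, 2452790.0, 40)       # pre-peak data only
mag = pspl_mag(t, (2452800.0, 30.0, 0.1, 18.0)) + rng.normal(0.0, 0.02, t.size)
fit = minimize(neg_log_posterior, x0=(2452795.0, 25.0, 0.3, 18.0),
               args=(t, mag, 0.02), method="Nelder-Mead")
print(fit.x)    # posterior-mode (t0, tE, u0, m_base); the peak is only weakly
                # constrained by pre-peak data, which is what the priors temper
```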
in practice , can be determined separately from and in advance of the other parameters .the distributions of , and are shown in figure [ fig1 ] .for the purposes of obtaining bayesian prior probability densities for these quantities , the distribution functions are adequately represented by the following empirically - chosen functions , also shown in figure [ fig1 ] : \\ p(\lg ( t_{\rm e}/t^ { * } ) ) & = & 0.476 \exp \left [ -(\lg ( t_{\rm e}/t^ { * } ) - 1.333)^2 /0.330\right ] \\ p(\lg ( \delta t_{\rm 0}/t^ { * } ) ) & = & 0.156 \exp \left [ -(\lg ( \delta t_{\rm 0}/t^ { * } ) - 1.432)^2 /0.458 \right].\end{aligned}\ ] ] it is also instructive to examine the distribution of , shown in figure [ fig2](a ) . in the absence of any selection effects, this distribution should be uniform .in fact , there is an enhanced sensitivity to detection of high magnification ( low ) events and a rapid decrease in sensitivity for .figure [ fig2](a ) also shows the shape of the adopted prior on ( eq . 15 ) when transformed to .figure [ fig2](b ) shows the same data but excluding those events where has a formal uncertainty greater than 50% .this illustrates that many high amplification events have maxima that are poorly constrained from ogle data alone .as a test of the bayesian method , i have applied a fitting procedure that maximises to a sample of the pspl events alerted in real time by the ogle iii ews in 2003 . these consist of events ogle-2003-bul-138 to ogle-2003-bul-462 and excluding events numbered 145 , 160 , 168 , 170 , 176 , 192 , 200 , 230 , 236 , 252 , 260 , 266 , 267 , 271 , 282 , 286 , 293 , 303 , 306 , 311 , 359 , 380 , 419 that do not appear to be due to pspl microlensing and 188 , 197 , 245 , 263 , 274 , 297 , 387 , 399 , 407 , 412 , 413 , 417 , 420 , 422 , 429 , 430 , 432 , 433 , 435 , 437 , 440 , 441 , 442 , 443 , 444 , 449 , 450 , 452 , 453 , 454 , 455 , 457 , 459 , 461 , 462 that were still ongoing at the time of writing .events ogle-2003-bul-137 and earlier were anounced by the ews in a single email at the beginning of the 2003 bulge season and thus not alerted in real time .ogle-2003-bul-238 ( a. gould 2004 , private communication ) and 262 are events in which the lens is known to have transited the source and ogle-2003-bul-208 and 222 may also involve finite source effects .these events have not been excluded . for the remaining sample of 267 events ,i have used only the ogle iii data taken before the ews alert time , defined as the reception of the alert email by the author .for the zero point of for each event , i have used the definition in 2 except for cases in which this has not occured before the ews alert time in which case the latter has been used as the zero point . as mentioned in 1 , different fitting codes can produce different estimates of maximum magnifications , particularly for high - magnification events for which blending may be involved . in particular , there is a concern that a direct comparison of predictions with the ews alert predictions may suffer from such differences . in order to compare the maximum magnifications predicted by the bayesian method with those predicted using fitting , i have thus used very similar computer codes to make and optimisations , electing not to use the ews - fitted parameters . 
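for reference, the two empirical prior densities that survive in the equations above transcribe directly into code (with t* = 1 day). the expression for the first prior of the set is not recoverable from the text and is omitted here rather than guessed; normalization constants are irrelevant for locating the posterior mode.

```python
import numpy as np

def prior_lg_tE(lg_tE):
    """Empirical prior for lg(t_E / 1 day), as fitted to the 2002 events."""
    return 0.476 * np.exp(-(lg_tE - 1.333) ** 2 / 0.330)

def prior_lg_dt0(lg_dt0):
    """Empirical prior for lg(Delta t_0 / 1 day)."""
    return 0.156 * np.exp(-(lg_dt0 - 1.432) ** 2 / 0.458)

# prior densities at typical values, e.g. t_E = 30 d and Delta t_0 = 20 d
print(prior_lg_tE(np.log10(30.0)), prior_lg_dt0(np.log10(20.0)))
```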
as a test of the bayesian method, i have applied a fitting procedure that maximises to a sample of the pspl events alerted in real time by the ogle iii ews in 2003. these consist of events ogle-2003-bul-138 to ogle-2003-bul-462, excluding events numbered 145, 160, 168, 170, 176, 192, 200, 230, 236, 252, 260, 266, 267, 271, 282, 286, 293, 303, 306, 311, 359, 380 and 419, which do not appear to be due to pspl microlensing, and 188, 197, 245, 263, 274, 297, 387, 399, 407, 412, 413, 417, 420, 422, 429, 430, 432, 433, 435, 437, 440, 441, 442, 443, 444, 449, 450, 452, 453, 454, 455, 457, 459, 461 and 462, which were still ongoing at the time of writing. events ogle-2003-bul-137 and earlier were announced by the ews in a single email at the beginning of the 2003 bulge season and thus were not alerted in real time. ogle-2003-bul-238 (a. gould 2004, private communication) and 262 are events in which the lens is known to have transited the source, and ogle-2003-bul-208 and 222 may also involve finite source effects. these events have not been excluded. for the remaining sample of 267 events, i have used only the ogle iii data taken before the ews alert time, defined as the reception of the alert email by the author. for the zero point of for each event, i have used the definition in 2, except for cases in which this has not occurred before the ews alert time, in which case the latter has been used as the zero point. as mentioned in 1, different fitting codes can produce different estimates of maximum magnifications, particularly for high-magnification events for which blending may be involved. in particular, there is a concern that a direct comparison of predictions with the ews alert predictions may suffer from such differences. in order to compare the maximum magnifications predicted by the bayesian method with those predicted using fitting, i have thus used very similar computer codes to make and optimisations, electing not to use the ews-fitted parameters. to avoid the problem of slightly different blending parameters resulting in large differences in derived magnifications, for each event i compute the brightness increase , , where is the magnitude at . the predicted values of using only the pre-alert data for an event are compared with the values determined using all the data. when all the data are available, the parameters derived using and are almost always identical. exceptions to this are a few cases for which there are no data over the peak to constrain the fits. the predictive performances of the bayesian and models at the time of ews alert are illustrated in figures [fig3] [fig5]. figure [fig3] shows the distributions of predicted peak magnifications for both models and compares these with the eventual values. figure [fig4] shows the same data as a function of . for the models, there is clearly a population of low events with predicted brightenings of more than 10 magnitudes that do not eventuate. such overpredictions are not present in the bayesian fitted models. on the other hand, there is a tendency for the bayesian models to underpredict the peak, and at alert time to fail to predict the small population of high magnification events in figure [fig3](a). in figure [fig5] i compare the distributions of the differences in predicted vs actual brightenings for both models. again, the tendency for the fits to overestimate the peak is obvious. as pointed out in 4.1, bayesian solutions to early lightcurve data often fail to indicate the nature of high magnification events. it would be of concern if high magnification events were not observed due to this tendency. to illustrate in more detail the behavior of bayesian vs models, i consider here examples of low and high magnification events. these examples show several generic aspects of how bayesian vs solutions evolve as data accumulate. ogle-2003-blg-171 was a low magnification event ( ). at this magnification, the source star barely passes within one einstein radius of the lens and the event is unsuitable for detecting a planetary anomaly. this is typical of the type of event that a follow-up program should avoid observing. figure [fig6] shows and bayesian fitted lightcurves at 5 day intervals as the event evolves from its alert date. the predicted maximum magnifications corresponding to each panel are listed in table [tbl1]. at alert, the event is predicted to be of low magnification, but by jd 2452785 (panel d in fig. [fig6]) the solution suggests a high magnification, albeit with a large uncertainty. since the lightcurve appears to be rising rapidly, follow-up programs may well begin observing the event in order to improve on the high uncertainty in the predicted peak magnification. as more data accumulate, the low-magnification nature of the event becomes apparent. although the true nature of the event would be identified relatively quickly by a follow-up observing program, there is a not-insignificant overhead associated with adding the event to the program. in contrast to the fit, the predicted peak magnification for the bayesian solution changes steadily with time. at no time is a high-magnification event suggested, and a follow-up strategy based on this method would ignore the event.
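the brightness increase defined above removes most of the blending degeneracy, since it compares total fluxes rather than fitted magnifications. under the usual parameterisation in which a fraction f of the baseline flux is lensed (an assumption here, since this copy does not display the exact relation), it is a one-liner:

```python
import numpy as np

def brightening(a0, f=1.0):
    """peak brightness increase delta-m in magnitudes over baseline for a
    pspl event with peak magnification a0 and blending parameter f,
    assuming flux(t) = f * a(t) + (1 - f) in baseline-flux units."""
    return 2.5 * np.log10(f * a0 + 1.0 - f)
```

the flux-ratio factor f*a0 + 1 - f inside the logarithm reproduces the effective peak magnifications listed in tables [tbl1][tbl2]: for example, f = 0.452 and a0 = 41.122 give about 19.1.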
ogle-2003-blg-208 (fig. [fig7], table [tbl2]) reached a moderately high magnification ( , ). the projected source trajectory passed as close as 0.02 to the lens and thus had a high probability of intersecting a central caustic were one present. the alert date for this event corresponds to panel (c) in figure [fig7], at which time the predictions of peak magnification are and 4.4 for the and bayesian solutions respectively. the bayesian prediction of is sufficiently high to warrant the attention of a follow-up observing program such as planet. as data accumulate, the predicted peak magnification rises until reaching in panel (g). the true peak magnification ( ) starts to become apparent from panel (h) as the event peaks. in contrast, the bayesian predicted peak magnification rises steadily until the true peak magnification is identified from around the time of panels (f)(g). the behavior illustrated by these two examples is typical. for low magnification events, the bayesian model never indicates them as being worthy of observational follow-up. for high magnification events that should be observed, the peak magnification is initially underestimated but adjusts to an appropriate prediction as soon as the data so indicate. in all cases examined, this occurs relatively early in the event when the magnification , . for both high and low magnification events, the bayesian predicted peak magnification changes smoothly, while the prediction is prone to large changes as new data points are included. the bayesian solutions usually converge to the correct amplification earlier than the solutions. high magnification events provide the best opportunity for detecting signals of planets around lens stars and for obtaining upper limits on their abundances. intensive photometric monitoring programs are currently hampered by difficulties in identifying high magnification events well before peak. systems that use minimization to fit pspl models to early data are prone to exaggerated predictions of peak magnification. such predictions induce observers to spend their time monitoring events that ultimately have little statistical power. i have shown here that a predictive system based on a bayesian formalism that takes account of the characteristics of a detection system is immune to such behavior. although such a bayesian system tends to initially underpredict the peak for high magnification events, accurate prediction occurs as soon as sufficient data accumulate to justify the assertion. in all cases examined, this occurs well ahead of the peaks in the associated lightcurves and early enough for the events to be targeted for observation. implementation of such a system based on the ogle early warning system should result in much improved observing productivity for the 2004 season. i am grateful to martin dominik for his comments on an earlier version of this paper. i thank the referee, andy gould, for his suggested improvements to the manuscript. this work was supported by the marsden fund under contract uoc302.
albrow, m.d., et al., 1998, , 509, 687
albrow, m.d., et al., 2001, , 556, 113
alcock, c., et al., 1993, , 365, 621
alcock, c., et al., 1995, , 445, 133
alcock, c., et al., 1997a, , 479, 119
alcock, c., et al., 1997b, , 491, 436
alcock, c., et al., 2000, , 541, 734
aubourg, e., et al., 1993, , 365, 623
bond, i.a., et al., 2002, , 331, 19
dominik, m., et al., 2002, planetary and space science, 50, 299
gaudi, b.s., naber, r.m., sackett, p.d., 1998, , 500, 33
gaudi, b.s., et al., 2002, , 566, 463
gould, a., 1994, , 421, l71
griest, k., safizadeh, n., 1998, , 500, 37
heyrovsk, d., 2003, , 594, 464
loredo, t.j., 1990, in p.f. fougere, ed, maximum entropy and bayesian methods, kluwer, dordrecht, pp. 81-142
paczynski, b., 1986, , 304, 1
rhie, s.h., 1999, , 522, 1037
sivia, d.s., 1996, data analysis. oxford university press, oxford
udalski, a., et al., 1992, acta astron., 42, 253
udalski, a., et al., 1993, acta astron., 43, 289
udalski, a., et al., 1994a, acta astron., 44, 165
udalski, a., et al., 1994b, acta astron., 44, 227
udalski, a., et al., 2000, acta astron., 50, 1
udalski, a., 2003, acta astron., 53, 291
yoo, j., et al., 2004, , in press, astro-ph/0309302

table [tbl1] (ogle-2003-blg-171). for each panel of figure [fig6] the left block is the fit and the right block the bayesian fit; the columns are read here as a_0, sigma(a_0), f, sigma(f) and the effective peak magnification f a_0 + 1 - f (blank entries, shown as --, indicate f held fixed at 1; the original column headings are not preserved in this copy):

a: 1.240, 0.670, 1.000, --, 1.240 | 1.053, 0.014, 1.000, --, 1.053
b: 1.050, 0.009, 1.000, --, 1.050 | 1.051, 0.009, 1.000, --, 1.051
c: 1.050, 0.009, 1.000, --, 1.050 | 1.051, 0.009, 1.000, --, 1.051
d: 34934.748, --, 1.000, --, 34934.748 | 1.300, 0.314, 1.000, --, 1.300
e: 1.449, 0.557, 1.000, --, 1.449 | 1.273, 0.150, 1.000, --, 1.273
f: 1.459, 0.330, 1.000, --, 1.459 | 1.367, 0.169, 1.000, --, 1.367
g: 1.539, 0.353, 1.000, --, 1.539 | 1.446, 0.199, 1.000, --, 1.446
h: 1.404, 0.082, 1.000, --, 1.404 | 1.392, 0.072, 1.000, --, 1.392
i: 2.524, 1.366, 0.233, 0.206, 1.355 | 1.356, 0.019, 1.000, --, 1.356
j: 1.374, 0.005, 1.000, --, 1.374 | 1.374, 0.005, 1.000, --, 1.374

table [tbl2] (ogle-2003-blg-208), with the same column layout:

a: 63601.415, --, 1.000, --, 63601.415 | 2.055, 0.657, 1.000, --, 2.055
b: 138528.609, --, 0.209, 0.349, 28987.904 | 2.150, 0.700, 1.000, --, 2.150
c: 226215.441, --, 0.170, 0.168, 38337.850 | 4.362, 3.407, 1.000, --, 4.362
d: 224373.536, --, 0.382, 0.324, 85643.167 | 5.934, 4.681, 1.000, --, 5.934
e: 778431.912, --, 0.438, 0.232, 340873.531 | 15.425, 14.683, 1.000, --, 15.425
f: 1669005.679, --, 0.412, 0.159, 687516.769 | 41.122, 31.856, 0.452, 0.007, 19.150
g: 4966165.089, --, 0.586, 0.138, 2912064.088 | 46.816, 12.146, 0.456, 0.004, 21.888
h: 55.641, 15.221, 0.293, 0.076, 17.022 | 49.816, 1.378, 0.326, 0.002, 16.893
i: 53.452, 13.513, 0.303, 0.073, 16.916 | 48.446, 1.356, 0.333, 0.002, 16.813
j: 48.204, 8.750, 0.334, 0.056, 16.740 | 45.814, 1.096, 0.350, 0.002, 16.661
gravitational microlensing events with high peak magnifications provide a much enhanced sensitivity to the detection of planets around the lens star. however, estimates of peak magnification during the early stages of an event by means of minimisation frequently involve an overprediction, making observing campaigns whose strategies rely on these predictions inefficient. i show that a rudimentary bayesian formulation, incorporating the known statistical characteristics of a detection system, produces much more accurate predictions of peak magnification than minimisation. implementation of this system will allow efficient follow-up observing programs that focus solely on events that contribute to planetary abundance statistics.
the standard picture of cosmological structure formation suggests that any visible object forms in a gravitational potential of _dark matter halos_. therefore, a detailed description of dark halo clustering is the most basic step toward understanding the clustering of visible objects in the universe. for this purpose, many theoretical models for halo clustering have been developed and then tested against extensive numerical simulations. first, i will describe our most recent theoretical model for the clustering of dark matter halos (hamana, yoshida, suto & evrard 2001b). in particular, we focus on their high-redshift clustering, where the past light-cone effect is important. then i will show that our model predictions are in good agreement with the result from a light-cone output of the hubble volume simulation (evrard et al. 2001). finally i will discuss a fundamental difficulty in relating the halo model to clusters of galaxies. my conclusion is that we already have a reliable empirical model for halo clustering, but that we need to understand what clusters of galaxies are, especially at high redshifts, before attempting _precision cosmology_ with clusters of galaxies. as emphasized by suto et al. (1999), for instance, observations of high-redshift objects are carried out only through the past light-cone defined at , and the corresponding theoretical modeling should properly take account of a variety of physical effects which are briefly summarized below. assuming the cold dark matter (cdm) paradigm, the linear power spectrum of the mass density fluctuations is computed by solving the boltzmann equation for systems of cdm, baryons, photons and (usually massless, for simplicity) neutrinos. the resulting spectrum in real space is specified by a set of cosmological parameters including the density parameter , the baryon density parameter , the hubble constant in units of 100 km/s/mpc, and the cosmological constant . then one can obtain its nonlinear counterpart in real space by adopting the fitting formula of peacock & dodds (1996). the most important ingredient in describing the clustering of halos is their biasing properties. the mass-dependent halo bias model was developed by mo & white (1996) on the basis of the extended press-schechter theory. subsequently jing (1998) and sheth & tormen (1999) improved their bias model so as to more accurately reproduce the mass-dependence of bias computed from -body simulation results. we construct an improved halo bias model of the two-point statistics which reproduces the scale-dependence of the taruya & suto (2000) bias, correcting the mass-dependent but scale-independent bias of sheth & tormen (1999) on linear scales, as follows: ^{0.15}, \quad b_{\rm st}(m,z) = 1 + \frac{\nu-1}{\delta_c(z)} + \frac{0.6}{\delta_c(z)\,(1 + 0.9\,\nu^{0.3})}, for , where is the virial radius of the halo of mass at . in order to incorporate the halo exclusion effect approximately, we set for . in the above expressions, is the mass variance smoothed over the top-hat radius , is the mean density, , is the linear growth rate of mass fluctuations, and .
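the mass-dependent but scale-independent part of the bias quoted above is fully specified by the peak height and the critical overdensity, so it can be coded directly; the sketch below (function and variable names are mine) transcribes the sheth & tormen expression as displayed. the scale-dependent correction factor with exponent 0.15 would multiply this value, but its argument is not recoverable from this copy and is left out.

```python
def bias_st(nu, delta_c):
    """scale-independent halo bias of sheth & tormen (1999) as quoted above:
    b_st = 1 + (nu - 1)/delta_c + 0.6 / (delta_c * (1 + 0.9 * nu**0.3)),
    where nu is the peak height and delta_c(z) the critical overdensity."""
    return 1.0 + (nu - 1.0) / delta_c + 0.6 / (delta_c * (1.0 + 0.9 * nu ** 0.3))
```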
in linear theory of the gravitational evolution of fluctuations, any density fluctuation induces a corresponding peculiar velocity field, which results in a systematic distortion of the pattern of the distribution of objects in redshift space (kaiser 1987). in addition, virialized nonlinear objects have an isotropic and large velocity dispersion. this _finger-of-god_ effect significantly suppresses the observed amplitude of correlation on small scales. with those effects, the nonlinear power spectrum _in redshift space_ is given as ^{2} d_{\rm vel}[k\mu\sigma_{\rm halo}], \label{nonlinear} where is the fourier transform of the pairwise peculiar velocity distribution function (e.g., magira et al. 2000), is the direction cosine in -space, and is the one-dimensional _pair-wise_ velocity dispersion of halos. while both and depend on the halo mass, separation, and in reality, we neglect their scale-dependence in computing the redshift distortion and adopt the halo number-weighted averages: where we adopt the halo mass function fitted by jenkins et al. the value of , the halo center-of-mass velocity dispersion at , is modeled following yoshida, sheth & diaferio (2001). then our empirical halo bias model can be applied to the two-point correlation function of halos at in redshift space as . all cosmological observations are carried out on a light-cone, the null hypersurface of an observer at , and not on any constant-time hypersurface. thus the clustering amplitude and shape of objects should naturally evolve even _within_ the survey volume of a given observational catalogue. unless one restricts the objects to a narrow bin of at the expense of statistical significance, the proper understanding of the data requires a theoretical model that takes account of the average over the light cone (matsubara, suto, & szapudi 1997; matarrese et al. 1997; moscardini et al. 1998; nakamura, matsubara, & suto 1998; yamamoto & suto 1999; suto et al.). according to the present prescription, the two-point correlation function of halos on the light-cone is computed by averaging over the appropriate halo number density and the comoving volume element between the survey range: where is the comoving volume element per unit solid angle. while the above expression assumes a mass-limited sample for simplicity, any observational selection function can be included in the present model fairly straightforwardly (hamana, colombi & suto 2001a) once the relation between the luminosity of the visible objects and the mass of the hosting dark matter halos is specified.
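the light-cone average above is a ratio of two line-of-sight integrals, weighted by the comoving volume element and the square of the halo number density. a minimal numerical sketch follows; the names and the trapezoidal quadrature are choices made here, and a mass-limited sample is assumed.

```python
import numpy as np

def xi_lightcone(z, dv_dz, n_halo, xi_of_z):
    """light-cone average of the halo two-point correlation function at a
    fixed comoving separation:
        xi_lc = int dz (dv/dz) n(z)^2 xi(z) / int dz (dv/dz) n(z)^2
    z       : redshift grid covering the survey range
    dv_dz   : comoving volume element per unit solid angle on that grid
    n_halo  : comoving halo number density on that grid
    xi_of_z : correlation function at the chosen separation, per redshift
    """
    w = dv_dz * n_halo ** 2
    return np.trapz(w * xi_of_z, z) / np.trapz(w, z)
```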
figure 1 compares our model predictions with the clustering of simulated halos from the `` light-cone output '' of the hubble volume simulation (evrard et al. 2001) with , , , and . for the dark matter correlation functions, our model reproduces the simulation data almost perfectly at (see also hamana et al.). this scale corresponds to the mean particle separation of this particular simulation, and thus the current simulation systematically underestimates the real clustering below this scale, especially at . our model and the simulation data also show quite good agreement for dark halos at scales larger than . below that scale, they start to deviate slightly in a complicated fashion depending on the mass of the halo and the redshift range. this discrepancy may be ascribed both to the numerical limitations of the current simulations and to our rather simplified model for the halo biasing. nevertheless, the clustering of _clusters_ on scales below is difficult to determine observationally anyway, and our model predictions differ from the simulation data by only percent at most. therefore we conclude that in practice our empirical model provides a successful description of halo clustering on the light-cone. with this successful empirical model of halo clustering, the next natural question is how to apply it in describing _real_ galaxy clusters. in fact, in my opinion the main obstacle for that purpose is the lack of a universal definition of clusters. let me give some examples that i can easily think of (see fig. 2). (i) press-schechter halos: almost all theoretical studies adopt the definition of dark matter halos according to the nonlinear spherical model. this is characterized by a mean overdensity of (in the case of the einstein-de sitter universe; the corresponding expressions in other cosmological models can also be derived). combining this definition with the press-schechter theory, the mass function of the dark halos can be computed analytically. this makes it fairly straightforward to compare the predictions in this model with observations, and therefore this definition has been widely studied in cosmology. (ii) halos identified from n-body simulations: in reality, the gravitationally bound objects in the universe quite often show significant departures from spherical symmetry. such non-spherical effects can be directly explored with n-body simulations. even in this methodology, the identification of dark halos from the simulated particle distribution is somewhat arbitrary. the most conventional method is the friends-of-friends algorithm. in this algorithm, the linking length is the only adjustable parameter to control the resulting halo sample. its value is usually set to 0.2 times the mean particle separation in the whole simulation, which _qualitatively_ corresponds to the overdensity of described above. (iii) abell clusters: until recently, most cosmological studies on galaxy clusters have been based on the abell catalogue.
while this is a really amazing set of cluster samples, the eye-selection criteria applied to the palomar plates are far from objective and cannot be compared with the above definitions in a quantitative sense. (iv) x-ray clusters: the x-ray selection of clusters significantly improves the reliability of the resulting catalogue due to the increased signal-to-noise, and moreover removes the projection contamination compared with the optical selection. nevertheless, the quantitative comparison with halos defined according to (i) or (ii) requires knowledge of the gas density profile, especially in the central part, which largely dominates the total x-ray emission. (v) sz clusters: the sz cluster survey is especially important in probing the high-z universe. in this case, however, the signal is more sensitive to the temperature profile of clusters than the x-ray selection, and thus one needs additional information/models for the temperature in order to compare with the x-ray/simulation results. the above considerations raise the importance of examining a systematic comparison among the resulting _cluster/halo_ samples selected differently. in reality, this is a difficult and time-consuming task, and one might argue that we do not have to worry about such _details_ at this point. such an optimistic point of view may turn out to be reasonably right after all. nevertheless it is still important, at present, to keep in mind that this simplistic assumption of `` dark halos = galaxy clusters '' may produce a systematic effect in the detailed comparison between observational data and theoretical models. i have presented a phenomenological model for clustering of dark matter halos on the light-cone by combining and improving several existing theoretical models (hamana et al.). one of the most straightforward and important applications of the current model is to predict and compare the clustering of x-ray selected clusters. in doing so, however, the one-to-one correspondence between dark halos and observed clusters should be critically examined at some point. this assumption is a reasonable working hypothesis, but we need more quantitative justification or modification to move on to _precision cosmology with clusters_. i am afraid that this problem has not been considered seriously simply because the agreement between model predictions and available observations already seems _satisfactory_. in fact, since current viable cosmological models are specified by a set of many _adjustable_ parameters, the agreement does not necessarily justify the underlying assumption. thus it is dangerous to stop doubting the unjustified assumption because of this (apparent) success. i hope to examine these issues in the future. i would like to thank fred lo for inviting me to this exciting and enjoyable meeting and also for the great hospitality in taiwan. the present work is based on my collaboration with t. hamana, n. yoshida, and a.e. evrard. this research was supported in part by grants-in-aid from the ministry of education, science, sports and culture of japan (07ce2002, 12640231).
evrard, a.e., et al. 2001, apj, submitted
hamana, t., colombi, s., & suto, y. 2001a, a&a, 367, 18
hamana, t., yoshida, n., suto, y., & evrard, a.e. 2001b, apj (letters), in press
jenkins, a., et al. 2001, mnras, 321, 372
jing, y.p. 1998, apj, 503, l9
kaiser, n. 1987, mnras, 227, 1
magira, h., jing, y.p., & suto, y. 2000, apj, 528, 30
matarrese, s., coles, p., lucchin, f., & moscardini, l. 1997, mnras, 286, 115
matsubara, t., suto, y., & szapudi, i. 1997, apj, 491, l1
mo, h.j., & white, s.d.m. 1996, mnras, 282, 347
moscardini, l., coles, p., lucchin, f., & matarrese, s. 1998, mnras, 299, 95
nakamura, t.t., matsubara, t., & suto, y. 1998, apj, 494, 13
peacock, j.a., & dodds, s.j. 1996, mnras, 280, l19
sheth, r.k., & tormen, g. 1999, mnras, 308, 119
suto, y., magira, h., jing, y.p., matsubara, t., & yamamoto, k. 1999, prog. theor. phys. suppl., 133, 183
taruya, a., & suto, y. 2000, apj, 542, 559
yamamoto, k., & suto, y. 1999, apj, 517, 1
yoshida, n., sheth, r., & diaferio, a. 2001, mnras, in press
a phenomenological model for the clustering of dark matter halos on the light-cone is presented. in particular, an empirical prescription for the scale-, mass- and time-dependence of halo biasing is described in detail. a comparison of the model predictions against the light-cone output from the hubble volume -body simulation indicates that the present model is fairly accurate on scales above . then i argue that the practical limitation in applying this model comes from the fact that we have not yet fully understood what clusters of galaxies are, especially at high redshifts. this point of view may turn out to be too pessimistic after all, but should be kept in mind in attempting _precision cosmology_ with clusters of galaxies.
secure communication, achieved by exploiting the wireless physical layer to provide secrecy in data transmission, has drawn significant recent research attention (see e.g. and references therein). the performance of this type of secure communication can be measured in terms of the secrecy throughput, which is the capacity for conveying information to the intended users while keeping it confidential from eavesdroppers. on the other hand, energy efficiency (ee) has emerged as another important figure-of-merit in assessing the performance of communication systems. for most systems, both security and energy efficiency are of interest, and thus it is natural to combine these two metrics into a single performance index called the secrecy ee (see ), which can be expressed in terms of secrecy bits per joule. transmit beamforming can be used to enhance the two conflicting targets in optimizing the see in multiple-user multiple-input multiple-output (mu-mimo) communications: mitigating mu interference to maximize the users' information throughput, and jamming eavesdroppers to control the leakage of information. however, the current approach to treating both ee and see is based on costly zero-forcing beamformers, which completely cancel the mu interference and the signals received at the eavesdroppers. the ee/see objective is in the form of a ratio of a concave function and a convex function, which can be optimized by using dinkelbach's algorithm. each dinkelbach iteration still requires a log-det function optimization, which is convex but computationally quite complex. moreover, zero-forcing beamformers are mostly suitable for low code rate applications and are applicable to specific mimo systems only. the computational complexity of see optimization for single-user mimo/siso communications as considered in and is also high, as each iteration still involves a difficult nonconvex optimization problem. this letter aims to design transmit beamformers to optimize the see subject to per-user secrecy quality-of-service (qos) and transmit power constraints. the specific contributions are detailed in the following points. * a path-following computational procedure, which invokes a simple convex quadratic program at each iteration and converges to at least a locally optimal solution, is proposed. the mu interference and eavesdropped signals are effectively suppressed in optimizing the see. in contrast to zero-forcing beamformers, higher code rates not only result in transmitting more concurrent data streams but also lead to much better see performance in our proposed beamformer design. * as a by-product, other important problems in secure and energy-efficient communications, such as ee maximization subject to a secrecy level or sum secrecy throughput maximization, which are still quite open for research, can be effectively addressed by the proposed procedure. _notation._ all variables are written in boldface. for illustrative purposes, is a mapping of variable while is the output of the mapping corresponding to a particular input . denotes the identity matrix of size . the notation stands for the hermitian transpose, denotes the determinant of a square matrix, and denotes its trace, while . the inner product is defined as , and therefore the frobenius squared norm of a matrix is . the notation ( , respectively) means that is a positive semidefinite (definite, respectively) matrix.
] , and is the number of concurrent data streams .denote by the complex - valued beamformer matrix for user .the ratio is called the code rate of . for notational convenience ,define and {j\in{{\cal d}}} ] , which is feasible for ( [ fd5b])-([fd5c ] ) is where , , and to provide a minorant of the secrecy throughput ( see ( [ d2 ] ) ) at , the next step is to find a _majorant _ of the eavesdropper throughput function at .reexpressing by for , and applying theorem [ baseth ] in the appendix for upper bounding the first term and lower bounding the second term in ( [ fd4.1 ] ) yields the following _ convex quadratic majorant _ of at : where , and a _ concave quadratic minorant _ of the secrecy throughput function at is then here , , , and .therefore , the nonconvex secrecy qos constraints ( [ fd5c ] ) can be innerly approximated by the following convex quadratic constraints in the sense that the feasibility of the former is guaranteed by the feasibility of the latter : for good approximation , the following trust region is imposed : by using the inequality we obtain , for where , , which is a concave function . with regard to define a concave function as follows : * if , define , which is a concave function ; * if , define , which is a linear minorant of the convex function at .a _ concave minorant _ of , which is also a minorant of at , is thus we now solve the nonconvex optimization problem ( [ fd5 ] ) by generating the next feasible point as the optimal solution of the following convex quadratic program ( qp ) , which is an inner approximation of the nonconvex optimization problem ( [ fd5.1 ] ) : note that ( [ kappa1 ] ) involves scalar real variables and quadratic constraints so its computational complexity is .+ it can be seen that as long as , i.e. is better than .this means that , once initialized from a feasible point for ( [ fd5.1 ] ) , the -th qp iteration ( [ kappa1 ] ) generates a sequence of feasible and improved points toward the nonconvex optimization problem ( [ fd5.1 ] ) , which converges at least to a locally optimal solution of ( [ fd5 ] ) . under the stopping criterion for a given tolerance , the qp iterations will terminate after finitely many iterations ._ initialization : _ set , and choose a feasible point for ( [ fd5.1 ] ) ._ -th iteration : _ solve ( [ kappa1 ] ) for an optimal solution and set , and calculate . stop if . the proposed path - following procedure for computational solution of the nonconvex optimization problem ( [ fd5 ] )is summarized in algorithm [ alg1 ] .we note that a feasible initial point for ( [ fd5.1 ] ) can be found by solving by the iterations , which terminate upon reaching to satisfy ( [ fd5b])-([fd5c ] ). the following problem of ee optimization under users throughput qos constraints and secrecy levels : where is set small enough to keep the users information confidential from the eavesdropper , is simpler than ( [ fd5 ] ) .it can be addressed by a similar path - following procedure , which solves the following qp at the iteration instead of ( [ kappa1 ] ) : [ cee ] where , and and are defined from ( [ thetajk ] ) . 
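the qp iterations ([kappa1]) and ([cee]) share one control flow: start from a feasible point, solve a convex surrogate built around the current iterate, and stop once the objective improvement falls below the tolerance . the sketch below shows only that loop; the solver callback and all names are placeholders, not this letter's code, and the construction of the convex surrogate itself is omitted.

```python
def path_following(x0, solve_surrogate, objective, tol=1e-4, max_iter=100):
    """generic successive-convex-approximation loop: each iteration solves a
    convex program built around the current feasible point and accepts its
    optimizer as the next iterate, so objective values are nondecreasing."""
    x = x0                              # feasible initialization
    f_prev = objective(x)
    for _ in range(max_iter):
        x = solve_surrogate(x)          # e.g. the qp (kappa1) built around x
        f_new = objective(x)
        if abs(f_new - f_prev) <= tol * abs(f_prev):   # stopping criterion
            return x
        f_prev = f_new
    return x
```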
a feasible initial point for ([tst1]) can be found by solving by the iterations which terminate upon reaching , , to satisfy ([fd5b]), ([tst1]). lastly, the problem of sum secrecy throughput maximization is also simpler than the see optimization problem ([fd5]), and it can be addressed by a similar path-following procedure with the qp solved at the iteration instead of ([kappa1]). the fixed parameters are: , , , , , , bits/s/hz, and db. the secrecy level is set in solving ([tst1]). the channels are rayleigh fading, so their coefficients are generated as . for the first numerical example, the number of data streams is set , so the code rate is . each is of size . figure [fig1] shows the see performance of our proposed beamformer and the zero-forcing beamformer. one can see that the former outperforms the latter substantially. apparently, the latter is not quite suitable for either ee or see. the see performance achieved by the formulation ([fd5]) is better than that achieved by the formulation ([tst1]) because the secrecy level is enhanced together with the users' throughput in the former instead of being constrained beforehand in the latter. when the transmit power is small, the denominator of the see objective in ([fd5]) and ([tst1]) is dominated by the constant circuit power . as a result, the see is maximized by maximizing its numerator, which is the system sum secrecy throughput. on the other hand, the see objective is likely maximized by minimizing the transmitted power in its denominator when the latter is dominated by . that is why the see saturates once is beyond a threshold, according to figure [fig1]. we increase the number of data streams to in the second numerical example. the code rate is thus . for this higher-code-rate case, the zero-forcing beamformers are infeasible. comparing figure [fig1] and figure [fig2] reveals that higher code-rate beamforming is also much better in terms of see because it leads to greater freedom in designing of size for maximizing the see. in other words, the effect of code rate diversity on the see is observed. we have proposed a path-following computational procedure for the beamformer design to maximize the energy efficiency of a secure mu-mimo wireless communication system and have also shown its potential in solving other important optimization problems in secure and energy-efficient communications. simulation results have confirmed the superior performance of the proposed method over the existing techniques. *acknowledgement.* the authors thank dr. h.h. kha for providing the computational code from . [baseth] for a given , consider a function in . then for any , it is true that with the _concave_ quadratic function and the _convex_ quadratic function where , , and , . both functions and agree with at . _proof._ due to space limitations, we provide only a sketch of the proof. rewrite , which is convex as a function in . then defined by ([b2]) is in fact the first-order approximation of this function at , which is its minorant at , proving the first inequality in ([b1]). on the other hand, considering as a concave function in , defined by ([b3]) is seen to be its first-order approximation at and thus is its majorant at , proving the second inequality in ([b1]).
t. m. hoang, t. q. duong, h. a. suraweera, c. tellambura, and h. v. poor, `` cooperative beamforming and user selection for improving the security of relay-aided systems, '' _ieee trans. _, vol. 63, no. 12, pp. 5039-5050, dec. 2015.
r. l. g. cavalcante, s. stanczak, m. schubert, a. eisenlatter, and u. turke, `` toward energy-efficient 5g wireless communications technologies, '' _ieee signal process. mag._, vol. 13, no. 11, pp. 24-34, nov. 2014.
n. zhao, f. r. yu, and h. sun, `` adaptive energy-efficient power allocation in green interference-alignment-based wireless networks, '' _ieee trans. _, vol. 64, no. 9, pp. 4268-4281, sept. 2015.
a. kalantari, s. maleki, s. chatzinotas, and b. ottersten, `` secrecy energy efficiency optimization for miso and siso communication networks, '' in _proc. ieee 16th inter. workshop on signal process. advances in wireless commun. (spawc)_, 2015, pp. 21-25.
a. zappone, p.-h. lin, and e. jorswieck, `` energy efficiency of confidential multi-antenna systems with artificial noise and statistical csi, '' _ieee j. selec. topics signal process._, vol. 10, no. 8, pp. 1462-1477, aug. 2016.
h. h. m. tam, h. d. tuan, and d. t. ngo, `` successive convex quadratic programming for quality-of-service management in full-duplex mu-mimo multicell networks, '' _ieee trans. _, vol. 64, pp. 2340-2353, jun. 2016.
considering a multiple-user multiple-input multiple-output (mimo) channel with an eavesdropper, this letter develops a beamformer design to optimize energy efficiency in terms of secrecy bits per joule under secrecy quality-of-service constraints. this is a very difficult design problem with no available exact solution techniques. a path-following procedure, which iteratively improves its feasible points by solving a simple quadratic program of moderate dimension, is proposed. under any fixed computational tolerance, the procedure terminates after finitely many iterations, yielding at least a locally optimal solution. simulation results show the superior performance of the obtained algorithm over other existing methods. mimo beamforming, secure communication, energy efficiency.
in this problem, we attempt to reconstruct the _conductivity_ in a steady-state heat equation for the cooling fin on a cpu. the heat is dissipated both by conduction along the fin and by convection with the air, which gives rise to our equation (with for convection, for conductivity, for thickness and for temperature): the cpu is connected to the cooling fin along the bottom half of the left edge of the fin. we use the robin boundary conditions (detailed in ): our data in this problem are the set of boundary points of the solution to ([eq:heatpde]), which we compute using a standard finite difference scheme for an mesh (here or ). we denote the correct value of by and the data by . in order to reconstruct , we will take a guess , solve the forward problem using , and compare those boundary points to by implementing the metropolis-hastings markov chain monte carlo algorithm (or mhmcmc). priors will need to be established to aid in the reconstruction, as comparing the boundary points alone is insufficient. markov chains produce a probability distribution of possible solutions (in this case conductivities) that are most likely given the observed data; the probability of reaching the next step in the chain is entirely determined by the current step. the algorithm is as follows ( ). given , can be found using the following:

1. generate a candidate state from with some distribution . we can pick any so long as it satisfies:
   1. ;
   2. is the transition matrix of a markov chain on the state space containing .
2. with probability set , otherwise set (i.e. accept or reject). proceed to the next iteration.

using the probability distributions of our example, ([eq:alpha]) becomes } \right\} (where and denote the sets of measured boundary points using and respectively, and ). to simplify ([eq:alpha2]), collect the constants and separate the terms relating to and : \begin{aligned} &= \frac{-1}{2}\sum_{i,j=1}^{n,m}\left[\left(\frac{d_{ij}-d_{ij}'}{\sigma}\right)^2 - \left(\frac{d_{ij}-d_{n_{ij}}}{\sigma}\right)^2\right] \\ &= \frac{-1}{2}\left[d' - d_n\right] = f_n - f' \end{aligned} now, ([eq:alpha3]) reads . we now examine the means by which we generate a guess . if the problem consists of reconstructing a constant conductivity, we can implement a uniform change: for every iteration we take a random number between and and add it to every entry in to obtain (we initialize to a matrix of ). the algorithm is highly efficient, and the reconstructed value will consistently converge to that of the solution to within .

in order to approximate a nonconstant , the obvious choice is a pointwise change: at each iteration we add to a random entry of , thus generating . unfortunately, systematic errors occur at the boundary points of our reconstruction (they tend to rarely change from their initial position).

in order to sidestep this, we use a gridwise change: we change a square of the mesh (chosen at random as well) by adding to its four corners. while this fixes the boundary problem, another major issue which arises from a non-uniform change is that the reconstruction will be marred with `` spikes '', which we must iron out.
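for the constant-conductivity case described above, the whole procedure fits in a few lines. the sketch below is a minimal illustration, not the authors' code: `solve_forward` stands in for the finite-difference solver of ([eq:heatpde]), and the step size and noise scale are assumptions made here.

```python
import numpy as np

def mhmcmc_constant(d, solve_forward, n_iter=100_000, sigma=0.01, step=0.05):
    """metropolis-hastings reconstruction of a constant conductivity p.
    d : measured boundary temperatures; solve_forward(p) returns the
    boundary temperatures of the forward problem for conductivity p."""
    rng = np.random.default_rng()
    p = 1.0                                      # initial guess, as in the text
    f = np.sum((d - solve_forward(p)) ** 2) / (2 * sigma ** 2)
    for _ in range(n_iter):
        p_new = p + rng.uniform(-step, step)     # uniform candidate change
        f_new = np.sum((d - solve_forward(p_new)) ** 2) / (2 * sigma ** 2)
        if rng.random() < min(1.0, np.exp(f - f_new)):   # accept or reject
            p, f = p_new, f_new
    return p
```

the acceptance probability is exactly the exp(f_n - f') form derived above, so improvements are always accepted and worse candidates survive only with exponentially small probability.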
to aid in ironing out the wrinkles in the reconstruction we use `` priors ''. priors generally require some knowledge of the quantity we wish to find, and will add a term to ([eq:alpha3]). naturally, the more unassuming the prior, the more applicable the algorithm. this applicability will be tested as often as possible throughout these tests. the first prior compares the sum of the differences between adjacent points of to those of (keeping the spikes in check), and is given by and, modifying ([eq:alpha3]), we obtain so the guess is most likely if and are similarly smooth (i.e., ); if an iteration gives a that is noticeably less smooth than the last accepted iteration, we are less likely to accept it.

as an initial test of the smoothness prior developed above, we attempt the gridwise change on a constant conductivity ( , using ). while we can still see the problem at the boundary points, they are limited to being a noticeable nuisance as opposed to adamantly ruining an otherwise accurate reconstruction (whose mean comes to within of ). the next step is therefore to test the algorithm on a non-constant conductivity. as a simple nonconstant trial, we look at a tilted plane with constant slope, given by once again, we take to be a matrix of all and . the boundary points again have trouble increasing from to the desired values, and in so doing lower the mean value of the reconstruction, though we still consistently get to within about of the solution (in iterations). (figure: the tilted plane we wish to reconstruct, and a reconstruction using the smoothness prior.) we now attempt to reconstruct a more complicated conductivity: a gaussian well.

the gaussian well is the first real challenge that the algorithm will face, and will be the main focus of the rest of the paper, as it contains different regions which require different priors. it is given by the following equation }{0.2}}}\right)}} this conductivity represents a much more significant challenge, with both flat regions and regions with steep slopes. after several trials, the optimal were found to be between and , though obtaining a specific value for which the reconstruction is best is impossible due to the high inaccuracy of the algorithm when faced with this well. (figure: the gaussian well we wish to reconstruct, and a reconstruction using the smoothness prior.) there is an evident need, at this point, for much more precision.
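the smoothness prior above needs only the sums of adjacent-point differences of the current guess and of the previous accepted state. a minimal sketch follows; the squared-difference form and the weighting are one plausible reading, since the displayed definition is incomplete in this copy.

```python
import numpy as np

def smoothness(p):
    """sum of squared differences between adjacent entries of the
    conductivity mesh p, in both grid directions."""
    return np.sum(np.diff(p, axis=0) ** 2) + np.sum(np.diff(p, axis=1) ** 2)

def smoothness_log_term(p_cand, p_prev, weight=1.0):
    """term added to the log-acceptance exponent: candidates whose
    smoothness differs from the last accepted state's are penalized."""
    return -weight * abs(smoothness(p_cand) - smoothness(p_prev))
```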
we turn once again to priors, this time developing one that will look at the slopes of the reconstruction. one of the main concerns in implementing a new prior is the generality mentioned earlier. in theory, one could use a prior that only accepts gaussian wells of the form we have here, but that code would not be very versatile. we therefore try to keep our slope prior as general as possible. in keeping with this, we look at the ratios of adjacent slopes, in both the and directions, as follows: and define (where is ). the generality of these prior terms comes from the fact that they go to so long as the conductivity does not change its mind: the prior is equally `` happy '' with a constant slope as it is with slopes that, say, double at each grid point. it should be noted that the formulas above break down in regions where we have very small slopes adjacent to large ones, where one ratio goes to while the other grows very large. nevertheless, we now set and, with this new prior, we define and use that in the acceptance step of the mhmcmc algorithm. again, as a first test of the algorithm, we test it on the tilted plane. the reconstructions reach the same precision in iterations as we had with only the smoothness prior, so we have not yet implemented anything that is too problem-specific to the gaussian well.

the initial result of the test on the well is arguably substantially better, but still rather imprecise. in an attempt to see more clearly, we make the mesh finer ( ). in addition, we set to be a matrix of all . the results of the combined slope and smoothness priors are below. (figure: the and gaussian wells, with parameters and , respectively.) as we can see, a substantial improvement has been made over the attempt in section (3.1). we now consistently obtain somewhat of a bowl shape. in comparing the solution we wish to achieve and the reconstruction we have, one notices that the major problem areas are the outer regions, where the conductivity is nearly constant. as previously stated, equations ([eq:px]) and ([eq:py]) break down when the slopes are vanishing, so it is reasonable to assume that with this alone the reconstruction will not improve as substantially as we need it to. as before, we implement another prior to aid us. to help reconstruct the outermost regions of the well, we need a prior that will go to for regions that have vanishing slope. the most obvious choice is therefore to use what we computed for the smoothness prior and set , using in the mhmcmc algorithm. again, the worry that adding a new prior would undermine the generality of the algorithm can be eased by noting that we are simply accounting for a problematic case not treated by the slope prior, though we still test this prior on the tilted plane. the tilted plane is given by . running the mhmcmc algorithm with all three priors yields fairly accurate reconstructions that miss the solution by . one should again note the presence of the familiar (though no less troublesome) boundary points. (figure: the tilted plane, and a reconstruction using all three priors.) we now try once again to reconstruct the gaussian well. the results of the added prior are apparent, and regions with vanishing slope are treated much more accurately than before. perhaps the most successful reconstruction thus far is the following (though many more possible combinations of , and w must be explored). (figure: the gaussian well using all three priors, at and million iterations.)
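the slope-ratio idea can be realized compactly. the exact functional is garbled in this copy, so the sketch below is one way to implement it (the names, the squared penalty on ratio changes, and the guard `eps` are choices made here). it vanishes for any conductivity whose adjacent slopes scale consistently, and it degrades when tiny slopes abut large ones, exactly the failure mode noted above.

```python
import numpy as np

def slope_ratio_penalty(p, eps=1e-8):
    """penalty on changes in the ratios of adjacent slopes along the rows
    of the mesh p (apply to p.T for the other direction)."""
    dx = np.diff(p, axis=0)                  # adjacent slopes
    r = dx[1:] / (dx[:-1] + eps)             # ratios of adjacent slopes
    return np.sum((r[1:] - r[:-1]) ** 2)     # zero when ratios are constant
```

in the acceptance step, this term would be weighted and added to the log-acceptance exponent alongside the smoothness terms, as the combined prior is used above.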
an obvious flaw in these reconstructions happens to be the width of the well: the algorithm is still capable of reconstructing the center of the well and its depth, but the well is often much narrower than in the actual solution. it would seem the algorithm has trouble starting to drop off from the vanishing-slope region into the varying one. this exposes an inherent problem with the patchwork approach we have taken thus far: getting the seams to match up nicely. as we have seen, reconstructions of the heat conductivity greatly benefit from added priors. there is certainly much work left to be done, and a very careful analysis of the seams at which the various priors trade off is in order. however, we believe that in testing the algorithm against other complex nonconstant conductivities, which is the next step we plan to take, it is possible to complete the aforementioned analysis of the seams and reconstruct complex quantities via this patchwork method. i was introduced to this problem at a national science foundation reu program at george mason university, and i would like to thank both of those institutions for the opportunity they gave me. i would also like to thank professors timothy sauer and harbir lamba at gmu, who got me started on this project while i was there and helped me decipher the mhmcmc algorithm.
we consider the nonlinear inverse problem of reconstructing the heat conductivity of a cooling fin, modeled by a -dimensional steady-state equation with robin boundary conditions. the metropolis-hastings markov chain monte carlo algorithm is studied and implemented, as well as the notion of priors. by analyzing the results using certain trial conductivities, we formulate several distinct priors to aid in obtaining the solution. these priors are associated with different identifiable parts of the reconstruction, such as areas with vanishing, constant, or varying slopes. although more research is required for some non-constant conductivities, we believe that using several priors simultaneously could help in solving the problem. inverse problems, heat diffusion, monte carlo, prior. a. zambelli
in general, under some appropriate conditions, -estimators enjoy some nice properties, for example the asymptotic efficiency of the (quasi-)mle and the oracle properties of regularized estimators. however, they may suffer from a heavy computational load, since they are batch estimators in the sense that we have to optimize an appropriate objective function. it would be of great help to be able to carry out recursive estimation, where we update the estimator by fine tuning the previous one. in this way, it enables us to process enormous amounts of data effectively. recursive estimation is established as an application of stochastic approximation theory, which is mainly used in control systems and computer science (see e.g. robbins and monro, nevelson and khasminskii, dvoretzky, borkar). the case of independent data was developed by, among others, fabian and . we refer to sharia , , and for asymptotics of recursive estimators in time-series models. see kutoyants (section 2.6.6) and lazrieva et al. as well as the references therein for the case of continuous-time data from a diffusion model. the model we consider throughout this paper is the scaled wiener process with drift: where is a random and time-varying nuisance process. suppose that we observe only the discrete-time sample for with , where . we know that the quasi-mle of the true value with regarding is where , and it has asymptotic normality and asymptotic efficiency. the aim of this paper is to propose an update estimation method for the diffusion parameter under the nuisance drift element. specifically, we wish to construct a sequence of estimates , computed in a recursive manner, in such a way that for any , the difference is a function described by , , , and some (not all) data, and exhibits a suitable convergence property for an arbitrary initial value . usually this estimator does not require any numerically hard optimization, while its asymptotic behavior does require careful investigation. note the difference from recursive (online) estimation for the sample , where each successive estimator is obtained by using data observed one after another. we will establish asymptotic properties of that make it as efficient as the quasi-mle, i.e. we ensure the asymptotic equivalence of to : this paper is organized as follows. in section [sec:objective], we propose an update formula and give the main theorem with its proof. we consider an example and present simulation results illustrating the theory in section [sec:sim]. we consider the scaled wiener process with drift , where is a random and time-varying nuisance process and . we denote by the family of distributions of associated with : , . let , the underlying filtration. suppose that we observe only the discrete-time sample for with , where . we wish to estimate the true value based on . as is well known, the quasi-likelihood function, denoted by , with regarding is therefore the quasi-mle of is , and it has asymptotic normality and asymptotic efficiency. the aim of this paper is to propose an update estimation method for the diffusion parameter having the form where is an appropriate fine-tuning function. the upper index implies that we have the data set . under the nuisance drift element, we ensure the asymptotic equivalence of to . therefore, the asymptotic normality and asymptotic efficiency of hold. we now define some notation: denote by and the convergence rate and the asymptotic variance of , respectively, i.e.
, and .we also define the quasi - likelihood function as . then , we propose the following update formula : note that , in finite sample , this formula is more stable than the one which is derived by direct using of the newton - raphson method .we have hence the update formula is note that right - hand side of is positive a.s . for any initial value .then , for any and we define note that for .now , we set two assumptions : there exists such that .[ ass : hn - rate2 ] \xrightarrow{p } 0 ; \label{ass : m1}\end{aligned}\ ] ] <\infty .\label{ass : m2}\end{aligned}\ ] ] [ ass : moment ] the notation in assumption [ ass : hn - rate2 ] means that .the following theorem [ thm : mainthm ] is a main result in this paper .consider the model .assume that assumptions [ ass : hn - rate2 ] and [ ass : moment ] hold .then , for any the update formula generates an estimator , which is asymptotic equivalent to : therefore , it has the asymptotic normality : and the asymptotic efficiency .[ thm : mainthm ] to prove the theorem , we use the result of sharia ( * ? ?* lemma 1 ) .we change some notations from the original ones of sharia in terms of a triangular array of random variables .we drop the index to simplify some notations .we can rewrite to where moreover , we define the following notations : ; \nonumber\end{aligned}\ ] ] where and , and introduce the conditions of sharia ( * ? ?* lemma 1 ) applied to our model setting : 1 . + w.r.t . , where is a random variable with ; 2 . + in probability ; 3 . + in probability .now , let us show the above conditions and derive the _ local asymptotic linearity _ of : where is a linear statistic .clearly , ( i ) holds since we have . to check ( ii ) , we calculate \nn \\ & = -\frac{1}{2\hat{\beta}_{j-1}}+\frac{\beta_{0}}{2\hat{\beta}_{j-1}^{2}}\left(1 + 2\sqrt{\frac{h_{n}}{\beta_{0}}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]+\frac{h_{n}}{\beta_{0}}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right ) \nn \\ & = -\frac{d_{j-1}}{2\hat{\beta}_{j-1}^{2}}+\frac{\sqrt{h_{n}\beta_{0}}}{\hat{\beta}_{j-1}^{2}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]+\frac{h_{n}}{2\hat{\beta}_{j-1}^{2}}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right ] , \nonumber\end{aligned}\ ] ] +\frac{h_{n}}{2\hat{\beta}_{j-1}^{2}}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right\ } \nn \\ & = \frac{1}{\beta_{0}^{2}}\left(-\frac{d_{j-1}}{2}+\sqrt{h_{n}\beta_{0}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]+\frac{h_{n}}{2}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right ) .\nonumber\end{aligned}\ ] ] hence , we have +\frac{h_{n}}{2}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right ) \nn \\ & = \frac{1}{\beta_{0}^{2}}\left(\sqrt{h_{n}\beta_{0}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]+\frac{h_{n}}{2}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right ) , \nonumber\end{aligned}\ ] ] and therefore +\frac{h_{n}}{\sqrt{n}}\sum_{j=1}^{n}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right ] . 
\label{eq:(ii)}\end{aligned}\] the first term of converges to in probability since we assume . let us show \rightarrow 0 \nonumber\] in probability. then, by making use of markov's inequality, assumption [ass:hn-rate2] and , we obtain for any \lesssim h_{n}\sqrt{n}\rightarrow 0, \nonumber\end{aligned}\] hence (ii). to check the condition (iii), we also calculate \begin{aligned} &-\frac{h_{n}}{2\hat{\beta}_{j-1}^{2}}e_{\beta_{0}}\left[\overline{\mu}_{j}^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right\}-\left\{-\frac{1}{2\beta_{0}}+\frac{\beta_{0}}{2\beta_{0}^{2}}\left(y_{j}+\overline{\mu}_{j}\sqrt{\frac{h_{n}}{\beta_{0}}}\right)^{2}\right\} \\ &= \frac{1}{\beta_{0}^{2}}\left\{-\frac{\hat{\beta}_{j-1}}{2}+\frac{\beta_{0}}{2}\left(y_{j}+\overline{\mu}_{j}\sqrt{\frac{h_{n}}{\beta_{0}}}\right)^{2}+\frac{d_{j-1}}{2}-\sqrt{h_{n}\beta_{0}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]-\frac{h_{n}}{2}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right\} \\ &\quad+\frac{1}{2\beta_{0}}-\frac{1}{2\beta_{0}}\left(y_{j}+\overline{\mu}_{j}\sqrt{\frac{h_{n}}{\beta_{0}}}\right)^{2} \\ &= \frac{1}{\beta_{0}^{2}}\left(-\sqrt{h_{n}\beta_{0}}e_{\beta_{0}}\left[y_{j}\overline{\mu}_{j}\big|\mathcal{f}_{t_{j-1}}\right]-\frac{h_{n}}{2}e_{\beta_{0}}\left[|\overline{\mu}_{j}|^{2}\big|\mathcal{f}_{t_{j-1}}\right]\right). \end{aligned} assumptions [ass:hn-rate2] and [ass:moment] also ensure the condition (iii): in probability. consequently, we derive the local asymptotic linearity from sharia (lemma 1). in this case, we obtain and therefore we can conclude the claim. see e.g. jacod and shiryaev for a more detailed discussion of the asymptotic properties of asymptotically linear estimators. we performed a simulation study to validate our proposed update formula. we consider the model where we know the true value . note that this means , hence assumption [ass:moment] holds. we set , data size and monte carlo trial number . we also set to satisfy assumption [ass:hn-rate2]. we generate through the following steps: we repeat steps 1 and 2 times. the following figures present the simulation results. figures [fig:sam] and [fig:beta-trace] are traces of the data and of at , respectively ( denotes each step of the monte carlo trials). the -axis has update times . the dotted line in figure [fig:beta-trace] denotes the true value . in figure [fig:average-trace] we plot for , where and denotes the value of at . we give a qq-plot for in figure [fig:qqplot]. from figure [fig:average-trace], we expect that as . indeed, whenever we repeat this algorithm, is nearly equal to and the standard deviations are not so large (about ). we also expect that as from the qq-plot in figure [fig:qqplot], since the plots lie almost on the 45-degree line.
n. lazrieva, t. sharia, and t. toronjadze. semimartingale stochastic approximation procedure and recursive estimation. , 153(3):211-261, 2008.
m.b. nevelson and r.z. khasminskii. . providence, ri: american mathematical society, 1973.
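the monte carlo procedure of the simulation section can be sketched compactly. the displayed update formula is garbled in this copy, so the recursion below uses the running-mean form beta_j = beta_{j-1} + (y_j^2 - beta_{j-1})/j (with y_j the normalized increments), which reproduces the batch quasi-mle exactly; the drift, the true value, and the choice h_n = n^{-2/3} (which satisfies h_n sqrt(n) -> 0) are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, n = 2.0, 10_000                 # true value (assumed) and sample size
h = n ** (-2.0 / 3.0)                  # h_n * sqrt(n) -> 0, as required

def one_trial():
    # step 1: simulate increments of dx_t = mu_t dt + sqrt(beta0) dw_t
    t = h * np.arange(n)
    mu = np.sin(t)                                   # bounded nuisance drift
    dx = mu * h + np.sqrt(beta0 * h) * rng.standard_normal(n)
    # step 2: recursive update from an arbitrary initial value
    beta = 1.0
    for j, y in enumerate(dx / np.sqrt(h), start=1):
        beta += (y ** 2 - beta) / j                  # running-mean recursion
    return beta

estimates = [one_trial() for _ in range(100)]
print(np.mean(estimates), np.std(estimates))         # mean should be near beta0
```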
we propose an update estimation method for a diffusion parameter from high - frequency dependent data in the presence of a nuisance drift element . we establish the asymptotic equivalence of the estimator to the corresponding quasi - mle , which enjoys asymptotic normality and asymptotic efficiency . we give a simulation example to illustrate the theory .
let be a random vector and be a random variable .then , and are said to follow a _ normal - gamma distribution _ ( ng distribution ) , if their joint probability density function is given by where denotes a multivariate normal density with mean and covariance and denotes a gamma density with shape and rate . in full , the density function is given by ( koch , 2007 , p. 55 ) \sqrt{\frac{|y\lambda|}{(2\pi)^{k}}} \, \exp\left[-\frac{y}{2}(x-\mu)^{t}\lambda(x-\mu)\right ] \cdot \frac{{b}^{a}}{\gamma(a ) } \ , y^{a-1 } \ , \exp[-b y ] \ ; .\ ] ] the _ kullback - leibler divergence _ ( kl divergence ) is a non - symmetric distance measure for two probability distributions and and is defined as = \sum_{i \in \omega } p(i ) \ , \ln \frac{p(i)}{q(i ) } \ ; .\ ] ] for continuous probability distributions and with probability density functions and on the same domain , it is given by ( bishop , 2006 , p. 55 ) = \int_{x } p(x ) \, \ln \frac{p(x)}{q(x ) } \ , \mathrm{d}x \ ; .\ ] ] the kl divergence is important in information theory and statistical inference .here , we derive the kl divergence for two ng distributions with vector - valued and real - positive and provide two examples of its application .first , consider two multivariate normal distributions over the vector specified by according to equation ( [ eq : cont - kl ] ) , the kl divergence of from is defined as = \int_{\mathbb{r}^k } \mathrm{n}(x ; \mu_1 , \sigma_1 ) \ , \ln \frac{\mathrm{n}(x ; \mu_1 , \sigma_1)}{\mathrm{n}(x ; \mu_2 , \sigma_2 ) } \ , \mathrm{d}x \ ; .\ ] ] using the multivariate normal density function \ ; , \ ] ] it evaluates to ( duchi , 2014 ) = \frac{1}{2 } \left [ ( \mu_2 - \mu_1)^t \sigma_2^{-1 } ( \mu_2 - \mu_1 ) + \mathrm{tr}(\sigma_2^{-1 } \sigma_1 ) - \ln \frac{|\sigma_1|}{|\sigma_2| } - k \right ] \ ; .\ ] ] next , consider two univariate gamma distributions over the real - positive specified by according to equation ( [ eq : cont - kl ] ) , the kl divergence of from is defined as = \int_{0}^{\infty } \mathrm{gam}(y ; a_1 , b_1 ) \ , \ln \frac{\mathrm{gam}(y ; a_1 , b_1)}{\mathrm{gam}(y ; a_2 , b_2 ) } \ , \mathrm{d}y \ ; .\ ] ] using the univariate gamma density function \quad \text{for } \quad y > 0 \ ; , \ ] ] it evaluates to ( penny , 2001 ) = a_2 \ , \ln \frac{b_1}{b_2 } - \ln \frac{\gamma(a_1)}{\gamma(a_2 ) } + ( a_1 - a_2 ) \ , \psi(a_1 ) - ( b_1 - b_2 ) \ , \frac{a_1}{b_1}\ ] ] where is the digamma function .
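as an aside , the ng density defined above can be sampled directly by first drawing the precision factor from the gamma marginal and then the vector from the conditional normal . this is a minimal sketch ; the function name and the rate parameterization of the gamma are our conventions .

```python
import numpy as np

def sample_normal_gamma(mu, Lambda, a, b, size, rng=None):
    """Draw (x, y) pairs from the normal-gamma density defined above:
    y ~ Gam(a, b) with rate b, then x | y ~ N(mu, (y * Lambda)^(-1))."""
    rng = np.random.default_rng(rng)
    # numpy's gamma is parameterized by shape and scale = 1 / rate
    y = rng.gamma(shape=a, scale=1.0 / b, size=size)
    cov_base = np.linalg.inv(Lambda)  # (y*Lambda)^(-1) = cov_base / y
    x = np.array([rng.multivariate_normal(mu, cov_base / yi) for yi in y])
    return x, y

# example: the conditional precision of x scales with y, and E[y] = a / b
x, y = sample_normal_gamma(np.zeros(2), np.eye(2), a=3.0, b=2.0, size=5)
```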
now , consider two normal - gamma distributions over and specified by according to equation ( [ eq : cont - kl ] ) , the kl divergence of from is defined as = \int_{0}^{\infty } \int_{\mathbb{r}^k } p(x , y ) \, \ln \frac{p(x , y)}{q(x , y ) } \ , \mathrm{d}x \ , \mathrm{d}y \ ; .\ ] ] using the law of conditional probability , it can be evaluated as follows : & = \int_{0}^{\infty } \int_{\mathbb{r}^k } p(x|y ) p(y ) \ , \ln \frac{p(x|y ) p(y)}{q(x|y ) q(y ) } \ , \mathrm{d}x \ , \mathrm{d}y \\ & = \int_{0}^{\infty } p(y ) \int_{\mathbb{r}^k } p(x|y ) \ , \ln \frac{p(x|y)}{q(x|y ) } \ , \mathrm{d}x \ , \mathrm{d}y \\ & + \int_{0}^{\infty } p(y ) \ , \ln \frac{p(y)}{q(y ) } \int_{\mathbb{r}^k } p(x|y ) \ , \mathrm{d}x \ , \mathrm{d}y \\ & = \left\langle \mathrm{kl}[p(x|y)||q(x|y ) ] \right\rangle_{p(y ) } + \mathrm{kl}[p(y)||q(y ) ] \end{split}\ ] ] in other words , the kl divergence for two normal - gamma distributions over and is equal to the sum of a multivariate normal kl divergence regarding conditional on , expected over , and a univariate gamma kl divergence regarding .together with equation ( [ eq : mvn - kl ] ) , the first term becomes \right\rangle_{p(y ) } \\ & = \left\langle \frac{1}{2 } \left [ ( \mu_2 - \mu_1)^t ( y \lambda_2 ) ( \mu_2 - \mu_1 ) + \mathrm{tr}\left ( ( y \lambda_2 ) ( y \lambda_1)^{-1 } \right ) - \ln \frac{|(y \lambda_1)^{-1}|}{|(y \lambda_2)^{-1}| } - k \right ] \right\rangle_{p(y ) } \\ & = \left\langle \frac{y}{2 } ( \mu_2 - \mu_1)^t \lambda_2 ( \mu_2 - \mu_1 ) + \frac{1}{2 } \ , \mathrm{tr}(\lambda_2 \lambda_1^{-1 } ) - \frac{1}{2 } \ln \frac{|\lambda_2|}{|\lambda_1| } - \frac{k}{2 } \right\rangle_{p(y ) } \ ; .\end{split}\ ] ] using the relation , we have \right\rangle_{p(y ) } = \frac{1}{2 } \frac{a_1}{b_1 } ( \mu_2 - \mu_1)^t \lambda_2 ( \mu_2 - \mu_1 ) + \frac{1}{2 } \, \mathrm{tr}(\lambda_2 \lambda_1^{-1 } ) - \frac{1}{2 } \ln \frac{|\lambda_2|}{|\lambda_1| } - \frac{k}{2 } \ ; . \end{split}\ ] ] thus , from ( [ eq : exp - mvn - kl ] ) and ( [ eq : gam - kl ] ) , the kl divergence in ( [ eq : ng - kl1 ] ) becomes & = \frac{1}{2 } \frac{a_1}{b_1 } \left [ ( \mu_2 - \mu_1)^t \lambda_2 ( \mu_2 - \mu_1 ) \right ] + \frac{1}{2 } \ , \mathrm{tr}(\lambda_2 \lambda_1^{-1 } ) - \frac{1}{2 } \ln \frac{|\lambda_2|}{|\lambda_1| } - \frac{k}{2 } \\ & + a_2 \ , \ln \frac{b_1}{b_2 } - \ln \frac{\gamma(a_1)}{\gamma(a_2 ) } + ( a_1 - a_2 ) \ ,\psi(a_1 ) - ( b_1 - b_2 ) \ , \frac{a_1}{b_1 } \ ; . \end{split}\ ] ] consider bayesian inference on data using model with parameters . in this case ,bayes theorem is a statement about the posterior density : the denominator acts as a normalization constant on the posterior density and according to the law of marginal probability is given by this is the probability of the data given only the model , regardless of any particular parameter values .it is also called `` marginal likelihood '' or `` model evidence '' and can act as a model quality criterion in bayesian inference , because parameters are integrated out of the likelihood . for computational reasons ,only the logarithmized or log model evidence ( lme ) is of interest in most cases . 
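before turning to the model evidence , note that the combined divergence in equation ( [ eq : ng - kl2 ] ) is straightforward to implement ; the sketch below transcribes it term by term ( the function name is ours , and the self - divergence check merely exercises the formula ) .

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_normal_gamma(mu1, L1, a1, b1, mu2, L2, a2, b2):
    """KL divergence of NG(mu1, L1, a1, b1) from NG(mu2, L2, a2, b2),
    transcribing eq. (ng-kl2) term by term (L1, L2 are precision matrices)."""
    k = len(mu1)
    d = mu2 - mu1
    return (0.5 * (a1 / b1) * d @ L2 @ d
            + 0.5 * np.trace(L2 @ np.linalg.inv(L1))
            - 0.5 * np.log(np.linalg.det(L2) / np.linalg.det(L1)) - k / 2
            + a2 * np.log(b1 / b2) - (gammaln(a1) - gammaln(a2))
            + (a1 - a2) * digamma(a1) - (b1 - b2) * (a1 / b1))

# sanity check: the divergence of a distribution from itself is zero
mu, L = np.zeros(2), np.eye(2)
assert abs(kl_normal_gamma(mu, L, 2.0, 1.0, mu, L, 2.0, 1.0)) < 1e-12
```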
by rearranging equation ( [ eq : bt ] ) , the model evidence can be represented as logarithmizing both sides of the equation and taking the expectation with respect to the posterior density over model parameters gives the lme using this reformulation , the lme as a model quality measure can be naturally decomposed into an accuracy term , the posterior expected likelihood , and a complexity term , the kl divergence between the posterior and the prior distribution : \end{split}\ ] ] intuitively , the accuracy increases and the complexity decreases the log model evidence .this reflects the capability of the lme to select models that achieve the best balance between accuracy and complexity , i.e. models that explain the observations sufficiently well ( high accuracy ) without employing too many principles ( low complexity ) .the fact that the complexity term is a kl divergence between posterior and prior means that models whose prior assumptions are close to the posterior receive a low complexity penalty : one is not surprised very much when accepting such a model , which renders the bayesian complexity a measure of surprise . consider multiple linear regression using the univariate general linear model ( glm ) where is an vector of measured data , is an matrix called the design matrix , is a vector of weight parameters called regression coefficients and is an vector of errors or noise .these residuals are assumed to follow a multivariate normal distribution whose covariance matrix is the product of a variance factor and an correlation matrix .usually , and are known while and are unknown parameters to be inferred via model estimation . for mathematical convenience , we rewrite and so that equation ( [ eq : glm ] ) implies the following likelihood function : the conjugate prior relative to this likelihood function is a normal - gamma distribution on the model parameters and ( koch , 2007 , ch . 2.6.3 ) : due to the conjugacy of ( [ eq : glm - ng - prior ] ) to ( [ eq : glm - lf ] ) , the posterior is also a normal - gamma distribution where the posterior parameters in ( [ eq : glm - ng - post ] ) are given by ( koch , 2007 , ch . 4.3.2 ) from ( [ eq : lme2 ] ) , the complexity for the model defined by ( [ eq : glm - lf ] ) and ( [ eq : glm - ng - prior ] ) is given by \ ; .\ ] ] in other words , the complexity penalty for a general linear model with normal - gamma priors ( glm - ng ) is identical to a kl divergence between two ng distributions and , using ( [ eq : ng - kl2 ] ) , can be written in terms of the prior and posterior parameters as + \frac{1}{2 } \, \mathrm{tr}(\lambda_0 \lambda_n^{-1 } ) - \frac{1}{2 } \ln \frac{|\lambda_0|}{|\lambda_n| } - \frac{p}{2 } \\ & + a_0 \ , \ln \frac{b_n}{b_0 } - \ln \frac{\gamma(a_n)}{\gamma(a_0 ) } + ( a_n - a_0 ) \ , \psi(a_n ) - ( b_n - b_0 ) \, \frac{a_n}{b_n } \ ; . \end{split}\ ] ] consider a linear model with polynomial basis functions ( bishop , 2006 , p .
5 ) given by essentially , this model assumes that is an additive mixture of polynomial terms weighted with the coefficients with where the natural number is called the model order .this means that corresponds to a constant value ( plus noise ) ; corresponds to a linear function ; corresponds to a quadratic pattern ; corresponds to a 3rd degree polynomial etc .given that is an vector of real numbers between and , this model can be rewritten as a glm given in equation ( [ eq : glm ] ) with based on this reformulation , we simulate polynomial data .we perform simulations with data points in each simulation .we generate simulated data based on a true model order and analyze these data using a set of models ranging from to .the predictor is equally spaced between and and design matrices are created according to equation ( [ eq : pbf2 ] ) . in each simulation , six regression coefficients are drawn independently from the standard normal distribution .then , gaussian noise is sampled from the multivariate normal distribution with a residual variance of .finally , simulated data are generated as .then , for each , bayesian model estimation is performed using the design matrix , a correlation matrix and the prior distributions ( [ eq : glm - ng - prior ] ) with the prior parameters , invoking a standard multivariate normal distribution , and , invoking a relatively flat gamma prior .posterior parameters are calculated using equation ( [ eq : glm - ng - post - par ] ) and give rise to the model complexity via ( [ eq : glm - ng - com2 ] ) as well as model accuracy and the log model evidence via ( [ eq : lme2 ] ) .average lmes , accuracies and complexities are shown in figure 1 .one can see that the true model order is correctly identified by the maximal log model evidence .this is achieved by an increasing complexity penalty which outweighs the saturating accuracy gain for models with .this demonstrates that the kl divergence for the ng distribution can be used to select polynomial basis functions when basis sets can not be separated based on model accuracy alone .[ fig : figure_pbf ] * figure 1 . * bayesian model selection for polynomial basis functions .all displays have model order on the x - axis and average model quality measures ( across simulations ) on the y - axis .intuitively , the model accuracy ( middle panel ) increases with model order , but saturates at around with no major increase after .moreover , the model complexity ( lower panel ) , which is the kl divergence between prior and posterior distribution , also grows with model order , but switches to a linear increase at around , reaching a value of at .together , this has the consequence that the log model evidence ( upper panel ) is maximal for ( black cross ) where exact values are : . in neuroimaging , especially functional magnetic resonance imaging ( fmri ) , glms as given by equation ( [ eq : glm ] ) are applied to time series of neural data ( friston et al . , 1995 ) .the design matrix is specified by the temporal occurrence of experimental conditions and the covariance structure is estimated from residual auto - correlations .model estimation and statistical inference are performed `` voxel - wise '' , i.e. separately for each measurement location in the brain , usually referred to as the `` mass - univariate glm '' . here , we analyze data from a study on orientation pop - out processing ( bogler et al .
, 2013 ) .during the experimental paradigm , the screen showed an array of homogeneous bars oriented either 0 , 45 , 90 or 135 degrees relative to the vertical axis .this background stimulation changed every second and was interrupted by trials in which one target bar on the left and one target bar on the right were independently rotated either 0 , 30 , 60 or 90 degrees relative to the rest of the stimulus display .those trials of orientation contrast ( oc ) lasted 4 seconds and were alternated with inter - trial intervals of 7 , 10 or 13 seconds .each combination of oc on the left side and oc on the right side was presented three times , resulting in 48 trials in each of the 5 sessions lasting 672 seconds .after fmri data preprocessing ( slice - timing , realignment , normalization , smoothing ) , two different models of hemodynamic activation were applied to the fmri data . the first model ( glm i ) considers the experiment a factorial design with two factors ( left vs. right oc ) having four levels ( 0 , 30 , 60 , 90 ) .this results in possible combinations or experimental conditions modelled by onset regressors convolved with the canonical hemodynamic response function ( hrf ) . the second model ( glm ii ) puts all trials from all conditions into one hrf - convolved regressor and encodes orientation contrast using a parametric modulator ( pm ) that is given as with , resulting in , such that the parametric modulator is proportional to orientation contrast .there was one pm for each factor of the design , i.e. one pm for left oc and one pm for right oc . note that both models encode the same information and that every signal that can be identified using glm ii can also be characterized using glm i , but not vice versa , because the first model allows for a greater flexibility of activation patterns across experimental conditions than the second .for these two models , we performed bayesian model estimation . to overcome the challenge of having to specify prior distributions on the model parameters , we apply cross - validation across fmri sessions .this gives rise to a cross - validated log model evidence ( cvlme ) as well as cross - validated accuracies and complexities for each model in each subject .we then performed a paired t - test to find voxels where glm ii has a significantly higher cvlme than glm i. due to the specific assumptions in glm ii and the higher flexibility of glm i , we assumed that these differences might be primarily based on a complexity advantage of glm ii over glm i. we focus on visual area 4 ( v4 ) , which is known to be sensitive to orientation contrast . within left v4 , specified by a mask from a separate localizer paradigm ( bogler et al . , 2013 ) , we identified the peak voxel ( [ -15 , -73 , -5 ] mm ) defined by the maximal t - value ( ) and extracted log model evidence as well as model accuracy and model complexity from this voxel for each subject .differences in lme , accuracy and complexity are shown in figure 2 .again , the model complexity enables a model selection that would not be possible based on the model accuracy alone .[ fig : figure_nms ] * figure 2 .
* bayesian model selection for orientation pop - out processing .all displays have subject on the x - axis and difference in model qualities ( = glm i , = glm ii ) on the y - axis .interestingly , there is a slight disadvantage for glm ii regarding only the model accuracy ( middle panel ) , its mean difference across subjects being smaller than zero .however , model complexity ( lower panel ) , measured as the kl divergence between prior and posterior distribution , is consistently higher for glm i. together , this has the consequence that the log model evidence ( upper panel ) most often favors glm ii .average values are : .we have derived the kullback - leibler divergence of two normal - gamma distributions using earlier results on the kl divergence for multivariate normal and univariate gamma distributions . moreover , we have shown that the kl divergence for the ng distribution occurs as the complexity term in the univariate general linear model when using conjugate priors .analysis of simulated and empirical data demonstrates that the complexity penalty has the desired theoretical features , namely to quantify the relative informational content of two generative models and to detect model differences that cannot be detected by just relying on model accuracy , e.g. given by the maximum log - likelihood ( as in information criteria like aic or bic ) or the posterior log - likelihood ( as in the bayesian log model evidence ) .friston kj , holmes ap , worsley kj , poline jp , frith cd , frackowiak rsj ( 1995 ) : `` statistical parametric maps in functional imaging : a general linear approach '' . _ human brain mapping _ , vol . 2 , iss . 4 , pp . 189 - 210
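for concreteness , the polynomial simulation of the first application can be reproduced along the following lines . the posterior parameter equations of ( [ eq : glm - ng - post - par ] ) did not survive extraction , so the sketch uses the standard conjugate updates for the glm with ng prior ( e.g. koch , 2007 ) , which should be checked against the source ; prior values and the residual variance are illustrative , and the complexity then follows from the kl function sketched earlier .

```python
import numpy as np

def glm_ng_posterior(y, X, P, mu0, L0, a0, b0):
    """Standard conjugate posterior for the GLM with NG prior
    (P is the inverse of the correlation matrix V)."""
    n = len(y)
    Ln = X.T @ P @ X + L0
    mun = np.linalg.solve(Ln, L0 @ mu0 + X.T @ P @ y)
    an = a0 + n / 2
    bn = b0 + 0.5 * (y @ P @ y + mu0 @ L0 @ mu0 - mun @ Ln @ mun)
    return mun, Ln, an, bn

# polynomial design: predictor equally spaced in [-1, 1], true order K = 6
n, K = 1000, 6
x = np.linspace(-1, 1, n)
X = np.vander(x, K, increasing=True)   # columns 1, x, ..., x^(K-1)
rng = np.random.default_rng(1)
beta = rng.standard_normal(K)
y = X @ beta + np.sqrt(0.25) * rng.standard_normal(n)  # 0.25 is illustrative

P = np.eye(n)                                          # V = identity
mu0, L0, a0, b0 = np.zeros(K), np.eye(K), 1.0, 1.0     # illustrative priors
mun, Ln, an, bn = glm_ng_posterior(y, X, P, mu0, L0, a0, b0)
# complexity penalty: kl_normal_gamma(mun, Ln, an, bn, mu0, L0, a0, b0)
```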
we derive the kullback - leibler divergence for the normal - gamma distribution and show that it is identical to the bayesian complexity penalty for the univariate general linear model with conjugate priors . based on this finding , we provide two applications of the kl divergence , one in simulated and one in empirical data . kullback - leibler divergence + for the normal - gamma distribution + joram soch^{1,3} & carsten allefeld^{1,2} + ^1^ bernstein center for computational neuroscience , berlin , germany + ^2^ berlin center for advanced neuroimaging , berlin , germany + ^3^ department of psychology , humboldt - universität zu berlin , germany + corresponding author : joram.soch-berlin.de .
in the last decade , there have been a number of studies of systems in which the states of individuals and the connections between them coevolve .the systems considered include evolutionary games and epidemics , but here we will concentrate on the spread of opinions .unlike the models of cascades , which are also widely used in the study of opinion spread , the evolving voter model we study here allows an agent to switch between different opinions and the network topology to change accordingly , yet we assume that agents impose equal influence over each other ( cf . multi - state complex contagions ) .this model provides building blocks to quantitatively study collective behaviors in various social systems , e.g. , segregation of a population into two or more communities with different political opinions , religious beliefs , cultural traits , etc .we are particularly interested here in systems that generalize the model proposed by holme and newman . in their model there is a network of vertices and edges .the individual at vertex has an opinion from a set of possible opinions and the number of people per opinion stays bounded as gets large . on each step of the process , a vertex is picked at random .if its degree equals , nothing happens .if , then ( i ) with probability a random neighbor of is selected and we set ; ( ii ) otherwise ( i.e. , with probability ) an edge attached to vertex is selected and the other end of that edge is moved to a vertex chosen at random from those with opinion . this process continues until the ` consensus time ' , at which there are no longer any discordant edges , that is , no edges connecting individuals with different opinions . for , only rewiring steps occur , so once all of the edges have been touched , the graph has been disconnected into components , each consisting of individuals who share the same opinion . since none of the opinions have changed , the components of the final graph are all small ( i.e. , their sizes are poisson with mean ) . by classical results for the coupon collector's problem , this requires updates ; see , e.g. , page 57 in . in the case of sparse graphs we consider here ( i.e. , ) , the number of steps is , i.e. , when is large it will be . in contrast , for this system reduces to the voter model on a static graph .if we suppose that the initial graph is an erdős - rényi random graph in which each vertex has average degree , then ( see , e.g. , chapter 2 of ) there is a `` giant component '' that contains a positive fraction , , of the vertices and the second largest component is small , having only vertices . the voter model on the giant component will reach consensus in steps ( see , e.g. , section 6.9 of ) , so the end result is that one opinion has followers while all of the other groups are small . using simulation and finite size scaling , holme and newman showed that there is a critical value so that for all of the opinions have a small number of followers at the end of the process , while for `` a giant community of like - minded individuals forms . '' when the average degree and the number of individuals per opinion , this transition occurs at .see recent work on this model for further developments . in , we studied a two - opinion version of this model in which on each step an edge is chosen at random and is given a random orientation , .if the individuals at the two ends have the same opinion nothing happens . if they differ , then ( i ) with probability we set ; ( ii ) otherwise ( i.e .
, with probability ) breaks its edge to and reconnects to ( a ) a vertex chosen at random from those with opinion , a process we label ` rewire - to - same ' , or ( b ) at random from the graph , a process we label ` rewire - to - random ' . here , we will concentrate on the second rewiring option , rewire - to - random . while this process may be less intuitive than the rewire - to - same version , it has a more interesting phase transition , as documented in .the remainder of this paper is organized as follows . in section [ sec:2 ] , we recall the main results from that provide essential context for our observations of the multiple - opinion case , which we begin to explore in section [ sec : multi ] .we then continue in section [ sec : quant ] with further quantitative details about the phase transitions and their underlying quasi - stationary distributions , before concluding comments in section [ sec : conclusion ] .suppose , for concreteness , that the initial social network is an erdős - rényi random graph in which each individual has average degree , and that vertices are assigned opinions 1 and 0 independently with probabilities and .simulations suggest that the system behaves as follows . to help understand the last statement , the reader should consult the picture in figure [ fig : rewire_to_random_p1 ] .if the initial fraction of 1 s , then as decreases from 1 , the ending density stays constant at 1/2 until and then decreases to a value close to 0 at . for convenience , we call the graph of for the _ universal curve _ . if the initial density is , then the ending density stays constant at until the flat line hits the universal curve , and then for .the main aim of was to use simulations , heuristic arguments , and approximate models to explain the presence and properties of this universal curve describing the consensus states that result from the slow - consensus process . to make it easier to compare the results here with the previous paper , we rescale time so that times between updating steps are exponential with rate , where is the total number of edges .the final fractions of the minority below the phase transitions follow a universal curve independent of the initial fractions.,scaledwidth=45.0% ] to further explain the phrase `` quasi - stationary distributions '' in this context , we refer the reader to figure [ fig : n1n010v3 ] .let be the number of vertices in state 1 at time , be the number of - edges ( that is , the number of edges connecting nodes and with , ) .similarly , let be the number of connected triples -- with , , and .the top panel of figure [ fig : n1n010v3 ] plots versus for five different simulations ( with different initial densities , ) for . note that in each case the simulation rapidly approaches a curve and then diffuses along the curve until consensus is reached ( ) . at both of the possible consensus points on the curve , the fraction of the minority opinion is , in accordance with the simulation in figure [ fig : rewire_to_random_p1 ] .
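a minimal simulation of the rewire-to-random dynamic described above can be sketched as follows . the probability symbols were lost in extraction , so the code takes alpha to be the rewiring probability , with the voter step taken otherwise ; if the source's convention is the opposite , the two branches should be swapped .

```python
import numpy as np

rng = np.random.default_rng(2)

def evolving_voter(n, mean_deg, alpha, probs, max_steps=2_000_000):
    """Rewire-to-random evolving voter model on an Erdos-Renyi-like edge list.
    Self-loops and multi-edges are ignored for simplicity in this sketch."""
    m = n * mean_deg // 2
    edges = rng.integers(0, n, size=(m, 2))
    opinion = rng.choice(len(probs), size=n, p=probs)
    for step in range(max_steps):
        e = rng.integers(m)
        # pick a random edge and give it a random orientation
        x, y = edges[e] if rng.random() < 0.5 else edges[e][::-1]
        if opinion[x] == opinion[y]:
            continue                           # concordant edge: nothing happens
        if rng.random() < alpha:
            edges[e] = (x, rng.integers(n))    # rewire to a random vertex
        else:
            opinion[x] = opinion[y]            # voter step (direction is a convention)
        # periodically test for the consensus time (no discordant edges left)
        if step % (10 * m) == 0:
            if not np.any(opinion[edges[:, 0]] != opinion[edges[:, 1]]):
                break
    return opinion, edges

op, _ = evolving_voter(n=1000, mean_deg=4, alpha=0.5, probs=[0.35, 0.65])
print(np.bincount(op, minlength=2) / len(op))  # final opinion fractions
```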
the bottom panel of figure [ fig : n1n010v3 ] similarly plots versus for and .again the simulation rapidly approaches a curve ( approximately cubic ) and diffuses along it until consensus is reached .since if , and it is very unlikely that all - s only occur in -- triples , the zeros of the cubic curve for -- and the quadratic curve for - coincide . - edges , , versus the population of opinions when for the rewire - to - random dynamic .five simulations starting from =0.2 , 0.35 , 0.5 , 0.65 , and 0.8 are plotted in different colors .each simulation starts from an erdős - rényi graph with n=100,000 nodes and average degree .after initial transients , the fraction of discordant edges behaves as a function of the population of opinions .( bottom ) similarly , the number of -- connected triples behaves as a function of after an initial transient ( one simulation).,title="fig:",scaledwidth=50.0% ] one can repeat the simulations in figure [ fig : n1n010v3 ] for other network measurements , with the result that their values are similarly determined by the density .this is somewhat analogous to a stationary distribution from equilibrium statistical mechanics , e.g. , the maxwell - boltzmann distribution associating the velocity distribution with the temperature .we call our distributions quasi - stationary because our system is a finite - state markov chain , which will eventually reach one of its many absorbing states , and hence there is no true stationary distribution .nevertheless , an improved understanding of the system is obtained from these observations , displaying fast dynamics rapidly converging to a family of neutrally stable quasi - stationary distributions followed by slow , diffusive dynamics through the space local to the quasi - stationary distributions until consensus is reached . to begin to explain the behavior of given in ( [ wf ] ) , note that when an edge is picked with two endpoints that differ , a rewiring will not change the number of 1 s , while a voting event , which occurs with probability , will result in an increase or decrease of the number of 1 s with equal probability . when the rate at which - edges are chosen is equal to the expected fraction of - edges under , which is .as shown in , the behaviors for the rewire - to - same model in terms of quasi - stationary distributions are very similar , but with small differences from the rewire - to - random model that yield fundamentally different consensus states .
in rewire - to - same , there are quasi - stationary distributions under which the expected fraction of - edges is .again the simulation comes rapidly to this curve and diffuses along it until consensus is reached .that is , unlike figure [ fig : n1n010v3 ] ( top ) , the arches of quasi - stationary values versus maintain their zeros at .thus , for , the minority fraction obtained at the consensus time is always for rewire - to - same .böhme and gross have studied the three - opinion version of the evolving voter model with rewire - to - same dynamics . in this case , the limiting behavior is complicated : one may have partial fragmentation ( 1 s split off rapidly from the 2 s and 3 s ) in addition to full fragmentation and coexistence of the three opinions .see their figures 3 - 5 . as we will see in the present section , the behavior of the multi - opinion rewire - to - random model is much simpler because small groups of individuals with the same opinion will be drawn back into the giant component .we thus aim to extend the understanding of the two - opinion model behavior to larger numbers of opinions .consider now the -opinion model in which voters are assigned independent initial opinions that are equal to with probability .let and let be the number of edges at which the endpoint opinions differ .when , frequencies of the three types must lie in the triangle of possible values . to preserve symmetry , we draw as an equilateral triangle in barycentric coordinates by mapping .the top panel in figure [ fig : levelset2 ] plots as a function of the opinion densities as the system evolves , generalizing the one - dimensional arch observed for to a two - dimensional cap for .generalizing the parabolic form of the arch for , we conjecture \mathbb{e}_{u}\left[n_{d}/m\right ] = c_{2}(\alpha)\left(1-\sum_{i=1}^{k}u_{i}^{2}\right ) - c_{0}(\alpha ) . [ quadcap ] as in the two - opinion case , the simulated values come quickly to the surface and then diffuse along it . in some situations , one opinion is lost before consensus occurs and the evolution reduces to that for the two - opinion case .however , in one of the simulations shown , the realization ending with , all three opinions persist until the end . .multiple simulations corresponding to different initial densities are shown while each one starts from an erdős - rényi graph with n=10,000 nodes and average degree .similar to the two - opinion case , the simulations quickly converge to a parabolic cap of quasi - stationary distributions .bottom : top view of the parabolic caps of quasi - stationary distributions for =0.1,0.2, ... ,0.8 .we fit the parabolic cap eq .( [ quadcap ] ) to simulation data at various s and then plot the level sets , which are the intersections of the parabolic caps with the plane , as the large circles with colors indicating values of .,title="fig:",scaledwidth=50.0% ]
the picture is somewhat easier to understand if we look at the cap from a top view , where the level sets for different are observed to be circles . in the bottom panel of figure [ fig : levelset2 ] we plot the circles for different s fitted from simulation data using eq .( [ quadcap ] ) as well as the consensus opinion frequencies from the simulations ( indicated by small circle data points ) .the two agree with each other up to small stochastic fluctuations .the size of the level set then dictates different consensus state properties .for example , the circle corresponding to intersects in three disconnected arcs . as increases , the radius of the level set decreases .when , the critical value of the two - opinion model , the circle falls fully inside the triangle , so an initial condition including all three opinions will continue to demonstrate all three opinions at consensus .for example , the small circles around the innermost circle give the ending frequencies for several simulations for .if the initial frequencies fall within the circle , then the model will quickly relax to the quasi - stationary distributions above the circle and then diffuse along the cap until consensus is reached at some point . if instead the initial frequencies fall outside the circle , that is , for above the phase transition point , consensus time jumps from to , similar to for the two - opinion model , with the final opinion frequencies essentially the same as the initial ones .what is new in this case is that when starting with three opinions and , the system always ends up with three distinct opinions . for , our simulation results indicate the same type of behavior as the system evolves .we define to be the largest for which consensus takes updates when we start with opinions with density for each opinion . then as the multi - opinion model has infinitely many phase transitions .when , consensus occurs after steps if we start with opinions , while if we start with equally likely opinions the system quickly converges to a quasi - stationary distribution and diffuses until consensus occurs after updates , and there will always be opinions present at the end .the associated picture is the natural dimensional extension of the relationship between the and models : just as corresponds to the point at which the circle for is the inscribed circle within the triangle , corresponds to the point at which the circle reaches zero radius , that is , the point at which the sphere for has become the inscribed sphere within the corresponding barycentric tetrahedron .for each we simulate our multi - opinion rewire - to - random model starting from opinions with each opinion taking fraction of nodes at random for a wide range of s .
generalizing the picture of the one - dimensional arch for and the two - dimensional cap for , the number of discordant edges as a function of frequencies conjectured in eq .( [ quadcap ] ) is a co - dimension 1 hypersurface characterizing the quasi - stationary states , and the behavior of the equal - initial - populations case will allow us to describe this surface , thereby characterizing behaviors for general initial populations .first the critical s are identified as the values at which the slow diffusion of can no longer be observed as increases from to . then we fit to ( ) using eq .( [ quadcap ] ) at every up to , and plot the fitted coefficients and against in figure [ fig : c_2 ] . remarkably , the coefficients in ( [ quadcap ] ) appear to be well approximated by linear functions of .the graphs show some curvature near , which may be caused by the fact that ( ) corresponds to a voter model without evolution of the underlying network . in the rest of the paper , we will work with for simplicity .naturally , critical points translate to .the fitted coefficients from the 2-opinion model deviate slightly from those fitted from higher - order models , which implies that eq .( [ quadcap ] ) is not universal for the multi - opinion model and higher - order terms are possible . however , while the discrepancy between the fitted coefficients of the 2-opinion model and those of the 3-opinion one is apparent , the difference between fitted coefficients of higher - order models is negligible , which implies that the inclusion of higher - order terms beyond the 3rd would not make significant changes to the equation . to probe the effect of higher - order terms we introduce terms up to order for opinions . noting that , eq .( [ quadcap ] ) is equivalent to : \mathbb{e}_{u}\left[n_{d}/m\right ] = - c_{0}(\alpha ) + c_{2}(\alpha)\sum_{i , j=1 ; i \neq j}^{k} u_{i}u_{j} . given the symmetry of the system in s , the only possible choice in degree - k polynomials is : where is the collection of all -element subsets of . using the same simulation data as above , we refit to s ( ) according to the generalized formula eq .( [ kthcap ] ) and plot the fitted coefficients and against in figure [ fig : c_2_high ] .fitting diagnostics suggest that higher - order terms are significantly different from zero ( with ) and it can be seen that those terms explain the inconsistency between fitted coefficients of different models in figure [ fig : c_2 ] . however , the difference between the two fitted functions of eq .( [ quadcap ] ) and eq .( [ kthcap ] ) is actually small ( in -norm ) and thus higher - order terms are small corrections to the hyper - surface eq .( [ quadcap ] ) .+ values of the coefficients for the three - opinion model near its critical value show some scatter , but this is to be expected since the surface is very small at this point .values for the four - opinion model appear to become more difficult to fit prior to since is a three - dimensional hyper - surface in four - dimensional space , so much more data is required to get reliable estimates of coefficients .as is visually apparent in figure [ fig : c_2_high ] , the coefficients and for the first two terms in eq .( [ kthcap ] ) are well approximated by linear functions , with best fits and , while coefficients for higher - order terms are not linear in ( e.g. , see figure [ fig : c_3 ] for ) .
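the fitting step just described amounts to a linear least-squares problem , since eq . ( [ quadcap ] ) is linear in the coefficients . the sketch below shows the fit and the geometric criticality criterion implied by the level-set picture above (the sphere of consensus points shrinking to the barycenter , i.e. 1 - c_0/c_2 = 1/k) ; the input data are placeholders standing in for quasi-stationary measurements .

```python
import numpy as np

# Fit the parabolic cap eq. (quadcap), E_u[N_d / M] = c2 * (1 - sum(u^2)) - c0,
# by least squares. Each row pairs opinion fractions u with an observed
# discordant-edge fraction; the numbers below are placeholders only.
u = np.array([[0.2, 0.3, 0.5], [0.4, 0.3, 0.3], [0.6, 0.2, 0.2]])
nd_over_m = np.array([0.25, 0.28, 0.22])

A = np.column_stack([1.0 - np.sum(u**2, axis=1), -np.ones(len(u))])
(c2, c0), *_ = np.linalg.lstsq(A, nd_over_m, rcond=None)

# Zero level set of the cap: sum(u^2) = 1 - c0/c2. All k opinions persist as
# long as the barycenter (1/k, ..., 1/k) lies inside it, i.e. 1 - c0/c2 > 1/k.
k = 3
survives = 1 - c0 / c2 > 1 / k
print(f"c2={c2:.3f}, c0={c0:.3f}, all {k} opinions survive: {survives}")
```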
for comparison , the best fits for and in eq .( [ quadcap ] ) ( as in figure [ fig : c_2 ] ) are linear in the rescaled parameter , with coefficients approximately 1.3 and 0.5 for c_{2} and approximately 0.25 for c_{0} . [ coeff ] since eq .( [ quadcap ] ) well approximates the higher - order hyper - surface eq .( [ kthcap ] ) , its simple form can be used to estimate the critical points for the phase transitions . combining ( [ quadcap ] ) and ( [ coeff ] ) and then solving gives , which agrees with the critical s identified when the slow diffusion of can not be observed in simulations as increases . in eq .( [ kthcap ] ) for models with multiple opinions .each value of is obtained by fitting eq .( [ kthcap ] ) to the same data in figure [ fig : c_2].,title="fig:",scaledwidth=50.0% ] + our multi - opinion voter model has infinitely many phase transitions .when , consensus occurs rapidly when we start with opinions , while if we start with equally likely opinions there will always be opinions present at the end . to a good approximation , but the departures from linearity in the plots of and suggest that this result is not exact .however , formulas for various quantities associated with this model are close to polynomials , so an exact solution may be possible .more complicated rewiring rules might also be considered , particularly if they maintained high clustering or other global macroscopic properties .an even more complete understanding of the present rewiring system would help motivate similar investigations for other rewiring rules .
we consider an idealized model in which individuals' opinions and the social network connecting them coevolve , with disagreements between neighbors in the network resolved either through one imitating the opinion of the other or by reassignment of the discordant edge . specifically , an interaction between and one of its neighbors leads to imitating with probability and otherwise ( i.e. , with probability ) cutting its tie to in order to instead connect to a randomly chosen individual . building on previous work about the two - opinion case , we study the multiple - opinion situation , finding that the model has infinitely many phase transitions . moreover , the formulas describing the end states of these processes are remarkably simple when expressed as a function of .
here , we describe the iterative procedure aiming at constructing the set of stochastic matrices .let us define two matrices and whose entries are initially set to zero . at the iteration , the row of and the column of are filled by setting and , where the pivot entry is the negative of the escape probability from state and is the probability of a round - trip from the same state .given stochastic matrix , the probability of an indirect transition from to via is p_{\beta n}^{(n-1)}\sum_{f=0}^{\infty}\left[p_{nn}^{(n-1)}\right]^{f}p_{n\gamma}^{(n-1)}=\mathcal{l}_{\beta n}u_{n\gamma } , where the sum accounts for the probabilities of all possible round - trips from .since for , matrix remains upper triangular after the row addition and the equality remains satisfied for .the iteration is completed by constructing as follows : where for , as required .this property holds by induction on up to . after adding in , the probabilities of transitions from to ( ) subsume the cancelled probability of all possible transitions from to .this ensures that the transformed matrices remain stochastic .the present path factorization is connected to the gauss - jordan elimination method .summing from to when , or from to when , yields the relation . let be the lower triangular matrix defined by for and otherwise .let also denote the upper triangular matrix by . then substituting for and identifying yields the equivalent relation ( ) choosing in eq . entails and , while setting yields .factorizations of and result from transformations above and below the eliminated pivot , respectively .this corresponds to the gauss - jordan pivot elimination method .the conditional probabilities used in the randomization procedure write . by resorting to and , and by using the stored entries of and , the information necessary for evaluating the conditional probabilities used in the iterative reverse randomization ( see sec . [ algorithm ] below ) can be easily retrieved . from with : dashed and solid arrows point to eliminated and non - eliminated states , respectively .arrow labels indicate typical numbers of transitions .a transition out of an eliminated state means that an exiting path starts from this state : here , two paths start from and three from .,scaledwidth=80.0% ] to show how the space - time randomization procedure is implemented in practice , the following preliminary definitions are required .the binomial law of trial number and success probability is denoted by .the probability of successes is .the negative binomial law of success number and success ( escape ) probability is denoted by .the probability of failures before the -th success is , where is the failure or flicker probability ( flickers will correspond to round - trips from a given state ) .the gamma law of shape parameter and time - scale is denoted by . denotes the categorical laws whose probability vector is the -th line of if or of the stochastic matrix obtained from by eliminating the single state .the symbol means `` is a random variate distributed according to the law that follows '' .let and denote the set of absorbing states and the set of non - eliminated transient states , respectively . the absorbing boundary contains the states of that can be reached directly from .let be a vector and denote the current state of the system .the cyclic structure of the algorithm , referred to as kinetic path sampling ( kps ) , is as follows : a. compute by iterating on from to , and label the states connected to in ascending order from to through appropriate permutations ; define and ; set the entries of and to zero ; b .
[ item : run ] draw ; increment or by one depending on whether or ; move current state to ; if new go to ( [ item : randomize ] ) , otherwise repeat ( [ item : run ] ) ; c. [ item : randomize ] iterate in reverse order from to 1 : a. for and draw b. for count the new hops from to c. for count the hops from to d. compute , the number of hops from , and draw the flicker number e. store , the number of transitions from , and deallocate , and ; d. for , store where ; e. evaluate , the total number of flickers and hops associated with the path generated in ; increment the physical time by . [ item : gamma ] after this cycle , the system has moved to the absorbing state reached in . the gamma law in simulates the time elapsed after performing consecutive poisson processes of rate . indeed , after any hop or flicker performed with , the physical time must be incremented by a residence time drawn from the exponential distribution of decaying rate . the way and are constructed at each cycle is specific to the application . in simulations of anomalous diffusion on a disordered substrate , implying . any transition from reaches . in contrast , when is not empty , the generated path may return to set several times prior to reaching . with this more general set - up , several transitions exiting are typically recorded in the hopping matrix , as illustrated in fig . [ fig : diagram ] .this amounts to storing the path factorization and using it as many times as necessary . as a result , the elapsed physical time is generated for several escaping trajectories simultaneously .let denote the subset of states whose distances to along ( 1,0 ) and ( 0,1 ) directions are shorter than 48 , being the lattice parameter .then , the energy landscape is constructed using where and are independent random variables taking values -1 or 1 with equal probabilities .in fe - cu , a kps cycle starts by factoring the evolution operator when the vacancy binds to a cu atom and forms a new cluster shape .set contains the configurations corresponding to the initial vacancy position and to the possible vacancy positions inside the neighboring cu cluster ( wherein the vacancy can exchange without moving any fe atom ) .cu and fe atoms are unlabeled , hence the size of is the number of cu atoms in the cu cluster plus one .note that labeling the atoms would entail , making the simulations impractical for .the eliminated pivot in corresponds to the least connected entry of .the stochastic matrix is then used to evolve the system a first time and then each time the system returns to , as illustrated by the vacancy path of fig .[ figs2].a .a kmc algorithm is implemented when the vacancy is embedded in the iron bulk .the set of non - eliminated transient states , , encompasses all states corresponding to the vacancy embedded in the iron bulk . in practice , the vacancy tends to return to the cu cluster from which it just exited , meaning that the factorization is used many times during a kps cycle , up to an average of 60 for a cluster of size 40 at k. one stops generating the path whenever the vacancy returns to the initial cu cluster but with a different cluster shape ( see fig .[ figs2].b ) or reaches another cu cluster ( see fig . [ figs2].c ) .the possible ending states define the absorbing boundary .then , kps completes its cycle with space - time randomization . ) on a square lattice with cu - like atoms colored in apricot .dashed lines delimit the cluster shapes .
,scaledwidth=40.0% ] the copper solubility limits in iron for the three simulated temperatures ( k , k and k ) are , and , respectively .these limits are very small compared with , the cu concentration in the simulations .supersaturations in copper are thus very important , meaning that the solid solution is unstable at these temperatures and that the cu clusters that form during subsequent ageing are unlikely to dissolve during the initial incubation at .we in fact observe that the cross - over from the slow initial `` incubation '' to faster growth occurs concomitantly with a vacancy - containing copper cluster growing to 15 - 16 cu atoms , as shown in fig .[ figs3 ] .additional `` magic numbers '' around , 27 , 35 are evidenced at by visualizing the 41 simulated trajectories displayed in fig .[ figs4 ] .these numbers correspond to the numbers of sites in compact clusters with fully filled nearest - neighbor shells . of the migrating cu cluster : single ( non - averaged ) ageing kinetics at k.,scaledwidth=45.0% ] associated with the migrating cu clusters for the 41 trajectories simulated at k.,scaledwidth=45.0% ] we now prove that the algorithm generates the correct distribution of first - passage times to for , i.e . in step ( e ) of the kps algorithm , we resort to the distributivity of the gamma law with respect to its shape parameter and decompose the first - passage time as follows : where and are the generated numbers of hops and flickers from . for any state that is visited times , the probability of having flickers before the -th escape from is , which corresponds to the probability mass of the negative binomial law of success number and success probability . in the following , we introduce the effective rate k_{\ell } = p_{\ell}/\tau .the residence time associated with hops and flickers from state is distributed according to , the gamma law of shape parameter and time - scale .the probability mass of law at being , the overall probability to draw residence time after visits of state is obtained by summing the compound probabilities associated with all possible occurrences of the number of flickers as follows \sum_{f=0}^{\infty}\binom{h+f-1}{f}\,p_{\ell}^{h}\,(1-p_{\ell})^{f}\,\frac{t^{h+f-1}}{\tau^{h+f}(h+f-1 ) ! }\,\exp\left[-\frac{t}{\tau}\right ] = \frac{(k_{\ell}t)^{h-1}}{(h-1 ) ! }\,k_{\ell}\,\exp\left[-k_{\ell}t\right ] .remarkably , the summation removes the dependence on and yields the distribution of the gamma law of shape parameter and time - scale , which corresponds to the convolution of decaying exponentials of rate .this is the expected distribution for the time elapsed after performing consecutive poisson processes of rate .note that the standard kmc algorithm simply draws an escape time according to , the exponential law of rate , for each visit of , which is statistically equivalent .this amounts to prescribing a success probability of 1 , which results in and no failures .the proof that the algorithm is correct for follows by induction on , resorting to the iterative structure of the algorithm . finally , it is worth noticing that trajectories escaping a trapping basin can in principle be analysed to provide information about the kinetic pathways , since the numbers of transitions involving each eliminated state are randomly generated .
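the identity proven above is easy to verify numerically : compounding a negative binomial number of flickers with a gamma residence time reproduces a pure gamma law of rate k = p / tau . a short monte carlo check ( the parameter values are arbitrary ) :

```python
import numpy as np

rng = np.random.default_rng(3)

h, p, tau = 5, 0.3, 2.0   # hops, escape probability, exponential time-scale
n = 200_000

f = rng.negative_binomial(h, p, size=n)            # flickers before h-th escape
t_kps = rng.gamma(shape=h + f, scale=tau)          # kps time randomization
t_ref = rng.gamma(shape=h, scale=tau / p, size=n)  # Gamma(h) with rate k = p/tau

print(t_kps.mean(), t_ref.mean())   # both approx h * tau / p = 33.3
print(t_kps.var(),  t_ref.var())    # both approx h * (tau / p)**2 = 222.2
```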
obtaining information about time correlation functions and committor probabilities would require testing the involved bernoulli trials sequentially , instead of drawing binomial or negative binomial deviates given a number of transitions , in order to construct the path . however , the computational cost of these additional operations scales linearly with the mean first - passage time for escaping a trapping basin .
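for reference , one level of the iterative reverse randomization discussed in this supplement can be sketched as follows ; the indexing and normalizations are our reconstruction from the description above , since the exact symbols were lost in extraction .

```python
import numpy as np

rng = np.random.default_rng(4)

def unwind_state(N, P_prev, n, rng):
    """Given transition counts N observed with state n eliminated, draw
    statistically consistent counts for the chain P_prev in which n is
    still present (one reverse-randomization step)."""
    N_new = np.zeros_like(N)
    escape = 1.0 - P_prev[n, n]                # escape probability from n
    for b in range(len(P_prev)):
        for g in range(len(P_prev)):
            if b == n or g == n or N[b, g] == 0:
                continue
            # probability that a recorded b->g hop was direct rather than via n
            p_tot = P_prev[b, g] + P_prev[b, n] * P_prev[n, g] / escape
            direct = rng.binomial(N[b, g], P_prev[b, g] / p_tot)
            via = N[b, g] - direct
            N_new[b, g] += direct
            N_new[b, n] += via                 # entries into n
            N_new[n, g] += via                 # escapes from n to g
    h = int(N_new[n].sum())                    # total escapes from n
    N_new[n, n] = rng.negative_binomial(h, escape) if h > 0 else 0  # flickers
    return N_new

# toy chain: state 1 was eliminated from a 3-state chain (state 2 absorbing)
P_prev = np.array([[0.5, 0.3, 0.2],
                   [0.4, 0.4, 0.2],
                   [0.0, 0.0, 1.0]])
N = np.array([[7, 0, 3], [0, 0, 0], [0, 0, 0]])  # counts with state 1 removed
print(unwind_state(N, P_prev, n=1, rng=rng))
```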
the computational efficiency of stochastic simulation algorithms is notoriously limited by the kinetic trapping of the simulated trajectories within low energy basins . here we present a new method that overcomes kinetic trapping while still preserving the exact statistics of escape paths from the trapping basins . the method is based on path factorization of the evolution operator and requires no prior knowledge of the underlying energy landscape . the efficiency of the new method is demonstrated in simulations of anomalous diffusion and phase separation in a binary alloy , two stochastic models presenting severe kinetic trapping . time evolution of many natural and engineering systems is described by a master equation ( me ) , i.e. a set of ordinary differential equations for the time - dependent vector of state probabilities . for models with a large ( or infinite but countable ) number of states , direct solution of the me is prohibitive and kinetic monte carlo ( kmc ) is used instead to simulate the time evolution by generating sequences of stochastic transitions from one state to the next . statistically equivalent to the ( most often unknown ) solution of the me , kmc finds a growing number of applications in natural and engineering sciences . however , still wider applicability of kmc is severely limited by the notorious kinetic trapping , where the stochastic trajectory repeatedly visits a subset of states , a trapping basin , connected to each other by high - rate transitions while transitions out of the trapping basin are infrequent and take a great many kmc steps to observe . in this letter , we present an efficient method for sampling stochastic trajectories escaping from the trapping basins . unlike recent methods that focus on short portions of the full kinetic path directly leading to the escapes and/or require equilibration over a path ensemble , our method constructs an entire stochastic trajectory within the trapping basin , including the typically large numbers of repeated visits to each trapping state as well as the eventual escape . referred to hereafter as kinetic path sampling ( kps ) , the new algorithm is statistically equivalent to the standard kmc simulation and entails ( i ) iterative factorization of paths inside a trapping basin , ( ii ) sampling a single exit state within the basin's perimeter and ( iii ) generating a first - passage path and an exit time to the selected perimeter state through an exact randomization procedure . we demonstrate the accuracy and efficiency of kps on two models : ( 1 ) diffusion on a random energy landscape specifically designed to yield a wide and continuous spectrum of time scales and ( 2 ) kinetics of phase separation in super - saturated solid solutions of copper in iron . the proposed method is immune to kinetic trapping and performs well under simulation conditions where the standard kmc simulation slows down to a crawl . in particular , it reaches later stages of phase separation in the fe - cu system and captures a qualitatively new kinetics and mechanism of copper precipitation . the evolution operator , obtained formally from solutions of the me , can be expressed as an exponential of the time - independent transition rate matrix , where is the probability to find the system in state at given that it was in state at time , is the rate of transitions from state to state ( off - diagonal elements only ) and the standard convention is used to define the diagonal elements as .
as defined , the evolution operator belongs to the class of stochastic matrices such that and for any , , and . if known , the evolution operator can be used to sample transitions between any two states and over arbitrary time intervals . in particular , substantial simulation speed - ups can be achieved by sampling transitions to distant states on an absorbing perimeter of a trapping basin . two main deficiencies of the existing implementations of this idea are that states within the trapping basin are expected to be known _ a priori _ and that computing the evolution operator requires a partial eigenvalue decomposition of , entailing high computational cost . in contrast , the kps algorithm does not require any advance knowledge of the trapping basin nor does it entail matrix diagonalization . instead , kps detects kinetic trapping and charts the trapping basin iteratively , state by state , and achieves high computational efficiency by sequentially eliminating all the trapping states through path factorization . here , wales's formulation of path factorization is adopted for its clarity . consider the linearized evolution operator , where is the identity matrix . assuming that , is a proper stochastic matrix that can be used to generate stochastic sequences of states from the ensemble of paths defined by matrix . the diagonal elements of define the probabilities of round - trip transitions after which the system remains in the same state . to correct for the linearization of the evolution operator in , the time elapsed before any transition takes place is regarded as a stochastic variable and sampled from an exponential distribution . this simple time randomization obviates the need for exponentiating the transition rate matrix in . following , consider a bi - directional connectivity graph defined by in which states in the trapping basin are numbered in order of their elimination , . an iterative path factorization procedure then constructs a set of stochastic matrices such that , after the -th iteration , all states are eliminated in the sense that the probability of a transition from any state to state is zero . specifically , at the -th step of factorization the transition probability ( , ) is computed as the sum of the probability of a direct transition and the probabilities of all possible indirect paths involving round - trips in after having initially transitioned from to and before finally transitioning from to , e.g. , , , , and so on . with the round - trip probability being , it is a simple matter to sum the geometric series corresponding to the round - trip paths . although any intermediate can be used to generate stochastic escapes from any state , a trajectory generated using is the simplest , containing a single transition from that effectively subsumes all possible transitions involving the deleted states in the trapping basin . on the other end , a detailed escape trajectory can be generated using that accounts for all transitions within , reverting back to the standard kmc simulation . remarkably , it is possible to construct a detailed escape trajectory statistically equivalent to the standard kmc without ever performing a detailed ( and inefficient ) kmc simulation . consider matrix whose elements store the number of transitions observed in a stochastic simulation with . given , one can randomly generate , the matrix similarly used to count the transition numbers observed in a stochastic process based on , without actually performing the simulation using .
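the elimination step described above (direct transition plus the geometric series over round trips through the eliminated state) can be written compactly ; this is a sketch with our own function names , not the authors' implementation .

```python
import numpy as np

def factorize(P, order):
    """Forward path factorization: eliminate the states in `order` one at a
    time, folding all round trips through the eliminated state into direct
    transitions among the remaining states (geometric series sums to
    1 / (1 - P[n, n]))."""
    P = P.astype(float).copy()
    levels = [P.copy()]
    for n in order:
        gain = np.outer(P[:, n], P[n, :]) / (1.0 - P[n, n])
        P = P + gain
        P[:, n] = 0.0   # no transitions into the eliminated state
        P[n, :] = 0.0
        levels.append(P.copy())
    return levels        # levels[k] has the first k states of `order` eliminated

# tiny example: stochasticity is preserved for the non-eliminated states
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing
levels = factorize(P, order=[0])
print(levels[1][1].sum())         # row of state 1 still sums to 1
```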
the ratio of transition probabilities ( ) defines the conditional probability that a trajectory generated using contains a direct transition from to given that the trajectory generated with contains the same transition . for , is independent of and is equal to , the probability of escape from . it is thus possible to generate by performing a stochastic simulation with , harvesting and drawing random variates from ( standard and negative ) binomial distributions whose exponents and coefficients are given by the elements of and , respectively . this randomization procedure can be used iteratively on in the reverse order from to to generate containing a detailed count of transitions involving all states in . finally , the time of exit out of is sampled by drawing a random variate from the gamma distribution whose scale and shape parameters are defined by and the total number of transitions contained in , respectively . thus , in its simplest form the kps algorithm proceeds by first deleting all states in through iterative forward path factorization , then using to sample a single transition from and to generate , followed by a backward randomization to reconstruct a detailed stochastic path within and to sample an escape time to the selected exit state . a detailed description of the kps algorithm is given in the supplemental material . we first apply kps to simulations of a random walker on a disordered energy landscape ( substrate ) . the substrate is a periodically replicated 256 fragment of the square lattice on which the walker hops to its four nearest - neighbour ( nn ) sites with transition rates k_{i \to j } = \exp\left [ -(e^{s}_{ij}-e_{i})/t \right ] , where t is the temperature , e_{i} the site energy and e^{s}_{ij} the saddle energy between sites i and j . the energy landscape is purposefully constructed to contain trapping basins of widely distributed sizes and depths ( see the supplemental material for details ) and is centered around the walker's initial position next to the lowest energy saddle ( fig . [ fig : landscape ] ) . . artificial smoothing is used for better visualization.,scaledwidth=60.0% ] when performed at temperature t , standard kmc simulations ( with hops only to the nn sites ) are efficient , enabling the walker to explore the entire substrate . however at t , the walker remains trapped near its initial position , repeatedly visiting states within a trapping basin . to chart a basin set for subsequent kps simulations , the initial state 1 is eliminated at the very first iteration , followed in sequence by the `` most absorbing states '' for which is found to be largest at the -th iteration ( ) . the expanding contours shown in fig . [ fig : landscape ] depict the absorbing boundary ( perimeter of the basin ) obtained after eliminating , , , , and states . the perimeter contour consists of all states for which is nonzero . for the walker to remain within a trapping basin containing states ( solid line ) and the distribution of times of escape out of the same basin ( dashed lines ) using a log scale for the bins ; ( b ) the mean first - passage time as a function of the number of states included in the trapping basin ; ( c ) computational cost of kmc and kps simulations as a function of at two different temperatures ( in the units of a single kmc hop ) . , scaledwidth=45.0% ] to demonstrate correctness of kps , we generated paths starting from state and ending at the absorbing boundary of the basin containing states , using both kps and kmc at t=2.5 . the perfect match between the two estimated distributions of exit times is shown in fig .
[ fig2].a . the mean times of exit to are plotted as a function of the number of eliminated states at t and t=1.0 in fig . [ fig2].b , while the costs of both methods are compared in fig . [ fig2].c . at t=1.0 , kmc trajectories are trapped and never reach : in this case we plot the expectation value for the number of kmc hops required to exit which is always available after path factorization . we observe that the kps cost scales as , as expected for this factorization , and exceeds that of kmc for at t=2.5 . however , at t=1 trapping becomes severe rendering the standard kmc inefficient and the wall clock speedup achieved by kps is four orders of magnitude for . we observe that in kps the net cost of generating an exit trajectory is nearly independent of the temperature but grows exponentially with the decreasing temperature in kmc . at the same time , an accurate measure of the relative efficiency of kps and kmc is always available in path factorization , allowing one to revert to the standard kmc whenever it is relatively more efficient . thus , when performed correctly , a stochastic simulation combining kps and kmc should always be more efficient than kmc alone . as a second illustration , we apply kps to simulate the kinetics of copper precipitation in iron within a lattice model parameterized using electronic structure calculations . the simulation volume is a periodically replicated fragment of the body centered cubic lattice with 128 sites on which 28,163 cu atoms are initially randomly dispersed . fe atoms occupy all the remaining lattice sites except one that is left vacant allowing atom diffusion to occur by vacancy ( v ) hopping to one of its nn lattice sites . its formation energy being substantially lower in cu than in fe , the vacancy is readily trapped in cu precipitates rendering kmc grossly inefficient below k . whenever the vacancy is observed to attach to a cu cluster , we perform kps over a pre - charted set containing trapping states that correspond to all possible vacancy positions inside the cluster containing cu atoms : the shape of the trapping cluster is fixed at the instant when the vacancy first attaches . the fully factored matrix is then used to propagate the vacancy to a lattice site just outside the fixed cluster shape which is often followed by vacancy returning to the same cluster . if the newly formed trapping cluster has the same shape as before , the factorized matrix is used again to sample yet another escape . however a new path factorization ( kps cycle ) is performed whenever the vacancy re - attaches to the same cu cluster but in a different cluster shape or attaches to another cu cluster ( see the supplementary material for additional simulation details ) . we simulated copper precipitation in iron at three different temperatures t k , t k and t k for which the atomic fraction of cu atoms used in our simulations significantly exceeds copper solubility limits in iron . defined as the ratio of physical time simulated by kps to that reached in kmc simulations over the same wall clock time , the integrated speed - up is plotted in fig . [ fig3].a . as a function of the physical time simulated by kps ( averaged over 41 simulations for each method and at each temperature ) . the precipitation kinetics are monitored through the evolution of the volume - averaged warren - cowley short - range order ( sro ) parameter shown in fig . [ fig3].b both for kps and kmc simulations . 
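both benchmark problems above rest on the same elementary rejection - free kmc move with arrhenius rates . a minimal sketch follows ( python ; the attempt prefactor nu and the array layout for site and saddle energies are our assumptions , as neither is specified in the text ) :

import numpy as np

rng = np.random.default_rng(0)

def kmc_hop(site, E_site, E_saddle, neighbours, T, nu=1.0):
    """One rejection-free kmc move: hop rates follow the arrhenius form
    nu * exp(-(E_saddle - E_site) / T) over the nearest-neighbour sites;
    the waiting time is exponential with the total escape rate."""
    nbrs = neighbours[site]                       # nn site indices
    rates = nu * np.exp(-(E_saddle[site, nbrs] - E_site[site]) / T)
    total = rates.sum()
    target = nbrs[rng.choice(len(nbrs), p=rates / total)]
    dt = rng.exponential(1.0 / total)             # residence time
    return target, dt

at low temperature the loop over such hops stalls inside a basin , which is exactly the regime where the path - factorized escape sampling above replaces millions of these elementary moves by a single draw .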
at t and t the kinetics proceed through a distinct incubation stage reminiscent of the time lag associated with repeated re - dissolution of subcritical nuclei prior to reaching the critical size in the classical nucleation theory . however , the `` incubation '' observed here is of a distinctly different nature , since all our simulated solid solutions are thermodynamically unstable and even the smallest of cu clusters , once formed , never dissolve . at all three temperatures the growth of clusters is observed to proceed not through the attachment of mobile v - cu dimers but primarily through the cluster s own diffusion and sweeping of neighboring immobile cu monomers . this is consistent with an earlier study that also suggested that , rather counter - intuitively , the diffusivity of clusters should increase with increasing before tapering off at ( see fig . 9 of ref . ) . we further observe that at t the cross - over from the slow initial `` incubation '' to the faster `` agglomeration '' growth seen in fig . [ fig3].b occurs concomitantly with the largest cluster growing to 15 - 16 cu atoms . individual realizations of the stochastic precipitation kinetics reveal that , in addition to , cluster growth slows down again once the cluster reaches , 27 , 35 and so on ( see figure s4 in the supplementary materials ) . leaving the precise characterization of these transitions to future work , we speculate that the observed `` magic numbers '' correspond to compact clusters with fully filled nearest - neighbor shells in which vacancy trapping is particularly strong , reducing the rate of shape - modifying vacancy escapes required for cluster diffusion . numerically , as expected , the integrated speed - up rapidly increases with decreasing temperature as vacancy trapping becomes more severe . two line segments of unit slope and two pairs of vertical arrows are drawn in fig . [ fig3 ] to compare the evolution stages achievable within kps and kmc over the same wall clock time . as marked by the pair of two solid vertical arrows on the right , the integrated speed - up exceeds seven orders of magnitude at t . the subsequent reduction in the speed - up coincides with the transition into the agglomeration regime , where increasingly large vcu clusters repeatedly visit an increasingly large number of distinct shapes . unquestionably , the efficiency of kps simulations for this particular model can be improved by indexing distinct cluster shapes for each cluster size and storing the path factorizations to allow for their repeated use during the simulations . in any case , given its built - in awareness of the relative cost measured in kmc hops , kps is certain to enable more efficient simulations of diffusive phase transformations in various technologically important materials . in particular , it is tempting to relate an anomalously long incubation stage observed in aluminium alloys with mg , si and se additions to possible trapping of vacancies on se , similar to the retarding effect of cu on the ageing kinetics reported here for the fe - cu alloys . in summary , we developed a kinetic path sampling algorithm suitable for simulating the evolution of systems prone to kinetic trapping . unlike most other algorithms dealing with this numerical bottleneck , kps does not require any _ a priori _ knowledge of the properties of the trapping basin .
it relies on an iterative path factorization of the evolution operator to chart possible escapes , measures its own relative cost and reverts to standard kmc if the added efficiency no longer offsets its computational overhead . at the same time , the kps algorithm is exact and samples stochastic trajectories from the same statistical ensemble as the standard kmc algorithms . being immune to kinetic trapping , kps is well positioned to extend the range of applicability of stochastic simulations beyond their current limits . furthermore , kps can be combined with spatial protection and synchronous or asynchronous algorithms to enable efficient parallel simulations of a still wider class of large - scale stochastic models . this work was supported by defi needs ( project mathdef ) and lawrence livermore national laboratory s ldrd office ( project 09-erd-005 ) and utilized hpc resources from genci-[ccrt / cines ] ( grant x2013096973 ) . this work was performed under the auspices of the u.s . department of energy by lawrence livermore national laboratory under contract de - ac52 - 07na27344 . the authors wish to express their gratitude to t. opplestrup , f. soisson , e. clouet , j .- l . bocquet , g. adjanor and a. donev for fruitful discussions . 10 n. g. van kampen , stochastic processes in physics and chemistry ( elsevier science , 2007 ) . s. redner , a guide to first - passage processes , cambridge university press ( 2001 ) . j .- m . lanore , rad . effects * 22 * , 153 ( 1974 ) . a. bortz , m. kalos , j. lebowitz , j. comp . phys . * 17 * , 10 ( 1975 ) . d. gillespie , j. chem . phys . * 81 * , 2340 ( 1977 ) . p. bolhuis , d. chandler , c. dellago and p. geissler , ann . rev . phys . chem . * 53 * , 291 ( 2002 ) . s. x. sun , phys . rev . lett . * 96 * 210602 ( 2006 ) and * 97 * , 178902 ( 2006 ) . b. harland and x. sun , j. chem . phys . * 127 * 104103 ( 2007 ) . c. d. van siclen , j. phys . : condens . matter 19 , 072201 ( 2007 ) . n. eidelson and b. peters j. chem . phys . * 137 * , 094106 ( 2012 ) . t. mora , a. m. walczak and f. zamponi , phys . rev . e * 85 * , 036710 ( 2012 ) . m. manhart and a. v. morozov , phys . rev . lett . * 111 * 088102 ( 2013 ) . the evolution operator is obtained by integrating the me from to and identifying the formal solution with where is the state - probability ( row ) vector at . if the system is in at a given time , then the state - probability vector at a later time is where denotes the row vector whose entry is one and the other ones are zero . the entries of are and can be used as transition probabilities for kmc moves from . m. a. novotny , phys . rev . lett . * 74 * , 1 ( 1995 ) . g. boulougouris and d. theodorou , j. chem . phys . * 127 * , 084903 ( 2007 ) . m. barrio , a. leier and t. marquez - lago , j. chem phys . * 138 * , 104114 ( 2013 ) . g. nandipati , y. shim and j. g. amar , phys . rev . b * 81 * 235415 ( 2010 ) . c. moler and c. van loan , siam rev . * 45 * , 3 ( 2003 ) . m. athnes , p. bellon and g. martin , phil . mag . a , * 76 * , 565 ( 1997 ) . s. trygubenko and d. wales , j. chem . phys . * 124 * , 234110 ( 2006 ) . d. wales , j. chem . phys . * 130 * , 204111 ( 2009 ) . s. a. serebrinsky , phys . rev . e * 83 * , 037701 ( 2011 ) . see supplemental material below for the connection between path factorization and gauss - jordan elimination method and for a proof that the algorithm is correct for . y. limoge and j .- l . bocquet , phys . rev . lett . * 65 * , 60 ( 1990 ) . f. soisson , c. c. fu , phys . rev . 
b * 76 * , 214102 ( 2007 ) . m. athnes , p. bellon and g. martin , acta mat . * 48 * , 2675 , ( 2000 ) . e. clouet , asm handbook vol . 22a , fundamentals of modeling for metals processing d. u. furrer and s. l. semiatin ( eds . ) , pp . 203 - 219 ( 2010 ) . to understand the origin of the efficiency decrease , we have monitored the number of distinct shapes of the vacancy - copper cluster . for a v - cu cluster , we found that , over the last factorizations that have been performed , there are only 21 different cluster shapes and that the 5 most frequent shapes occur with a frequency of about 60% . l. k. bland , p. brommer , f. el - mellouhi , j. f. joly and n. mousseau , phys . rev . e * 84 * , 046704 ( 2011 ) . s. pogatscher , h. antrekowitsch , m. werinos , f. moszner , s. s. a. gerstl , m. f. francis , w. a. curtin , j. f. lffler and p. j. uggowitzer , phys . rev . lett . * 112 * , 225701 ( 2014 ) . d. mason , r. rudd and a. sutton , comp . . comm . * 160 * , 140 ( 2004 ) . b. puchala , m. falk and k. garikipati , j. chem . phys . * 132 * , 134104 ( 2010 ) . t. opplestrup , v. v. bulatov , g. h. gilmer , m. h. kalos , and b. sadigh , phys . rev . lett . * 97 * , 230602 ( 2006 ) . y. shim and j. g. amar , phys . rev . b * 71 * , 115436 ( 2005 ) . m. merrick and k. fichthorn , * 75 * , 011606 ( 2007 ) . e. martnez and p. r. monasterio , j. marian , j. comput . phys . * 230 * , 1359 ( 2011 ) . f. wieland , and d. jefferson , proc . 1989 intl conf . parallel processing , vol.iii , f. ris , and m. kogge , eds . , pp . 255 - 258 . * supplemental materials : path factorization approach to stochastic simulations *
entanglement is an essential ingredient in quantum information and the central feature of quantum mechanics which distinguishes a quantum system from its classical counterpart . in recent years , it has been regarded as an important physical resource , and widely applied to many quantum information processing ( qip ) tasks : quantum computation [ 1 ] , quantum cryptography [ 2 ] , quantum teleportation [ 3 ] , quantum dense coding [ 4 ] and so on . entanglement arises only if some subsystems have interacted with the others in the multipartite system , physically speaking , or only if the multipartite quantum state is not separable or factorable , mathematically speaking . the latter provides a direct way to tell whether or not a given quantum state is entangled . as to the separability of bipartite quantum states , the partial entropy introduced by bennett et al . [ 5 ] provides a good criterion of separability for pure states . later , wootters presented the remarkable concurrence for bipartite systems of qubits [ 6,7 ] . motivated by generalizing the definition of concurrence to higher dimensional systems , many attempts have been made [ 8,9,10,11 ] , which provide good separability criteria for bipartite qubit systems under corresponding conditions , whilst ref . [ 8 ] also presents an alternative method to minimize the convex hull for mixed states . as to multipartite quantum systems , several separability criteria have been proposed [ 12,13,14,15,16,17 ] . the most notable one is the 3-tangle for three qubits . recently , the result has been generalized to higher dimensional systems [ 18 ] . despite the enormous effort , the separability of quantum states , especially in higher dimensional systems , is still an open problem . in this paper we construct full separability criteria for an arbitrary tripartite qubit system by a novel method , i.e. a tripartite pure state can be defined by a three - order tensor . the definition provides an intuitive mathematical formulation for the full separability of pure states . analogous to ref . [ 8 ] , we extend the definition to mixed states . more importantly , our approach is easily generalized to higher dimensional systems . as applications , we discuss the separability of two bound entangled states introduced in [ 19,20 ] , respectively . we start with the separability definition : a tripartite qubit pure state is fully separable iff . for a general tripartite pure state written by , , , , the coefficients can be arranged as a three - order tensor ( tensor cube ) [ 21 ] as shown in figure 1 . note that the subscripts of correspond to the basis . every surface can be regarded as the tensor product of a single qubit and an unnormalized bipartite state . hence , if the two vectors ( edges ) of a surface are linearly dependent ( including the case where one of the vectors is the zero vector ) , then the bipartite state mentioned above can be factorized . the conclusion for a diagonal plane is analogous . considering all the planes , one can easily find that the tripartite state is fully separable iff all the vectors which are mutually parallel , as shown in the cube , are linearly dependent , according to fundamental linear algebra , i.e. the rank of every matrix composed of four coefficients on the corresponding surface and diagonal plane is . equivalently , we can obtain the following lemma . * * lemma 1.-**a tripartite pure state with the form of eq . ( 2 ) in dimensional hilbert space is fully separable iff the following six equations hold : and . * proof .
*( sufficient condition ) if eq . ( 3)-eq . ( 5 ) hold , then the rank of every matrix that a cubic surface corresponds to is _ one _ . if eq . ( 6)-eq . ( 8) hold , then the rank of every matrix that a cubic diagonal plane corresponds to is _ one _ . hence , that eq . ( 3)-eq . ( 8) hold simultaneously shows that the tripartite pure state can be fully factorized , i.e. it is fully separable . ( necessary condition ) if a given tripartite state is fully separable , one can easily see that the rank of every corresponding matrix is . namely , eq . ( 3)-eq . ( 8) hold . considering that can be denoted by a vector in dimensional hilbert space , with the superscript denoting the transpose , we can write the above equations ( 3 - 8 ) in matrix notation as , where the star denotes complex conjugation , and , , , , , , , , , with , , and . ( alternatively , can be replaced by for tripartite pure states of qubits with , and , where , by which complex optimal parameters can be reduced for the case of mixed states ) . define a new vector by with ; then the length of the vector is given by . therefore , the full separability criterion for a tripartite state can be expressed in a more rigorous form as follows . * * theorem 1.-**a tripartite pure state is fully separable iff . * proof . * that * * * * is equivalent to holding for any . according to lemma 1 , one can obtain that this is the sufficient and necessary condition . a tripartite mixed state is fully separable iff there exists a decomposition , such that is fully separable for every , or equivalently iff the infimum of the average vanishes , namely , over all possible decompositions . therefore , for any given decomposition , the minkowski inequality yields the matrix notation [ 8 ] of equation ( 10 ) as , where is a diagonal matrix with , and the columns of the matrix correspond to the vectors . using the eigenvalue decomposition , , where is a diagonal matrix whose diagonal elements are the eigenvalues of , and is a unitary matrix whose columns are the eigenvectors of , together with the relation , where is a right - unitary matrix , inequality ( 12 ) can be rewritten as . in terms of the cauchy - schwarz inequality , the inequality holds for any with and where . the infimum of equation ( 15 ) is given , analogously to ref . [ 8 ] , by , where are the singular values , in decreasing order , of the matrix . is as well expressed by . one can easily see that provides a necessary and even sufficient condition of full separability for tripartite mixed qubit systems , hence an effective separability criterion . however , it is unfortunate that can not serve as a good entanglement measure , but only as an effective criterion to detect whether a state is fully separable , because for pure states is not invariant under local unitary transformations . first , consider the complementary state to the shifts upb [ 19 ] . the shifts upb is the set of the following four product states . the corresponding bound entangled ( complementary ) state is given by , corresponding to the shifts upb . in ref . [ 19 ] , it is stated that this complementary state has the curious property that not only is it two - way ppt , it is also two - way separable . the numerical result based on our criterion shows a _ non - zero _ ( ) entanglement for , which is consistent with [ 19 ] . let us consider the second example , the dr - cirac - tarrach states [ 20 ] . we can also show ( ) for ; this conclusion is also implied in [ 20 ] . the above numerical tests are operated as follows .
in order to show the nonzero with or , we choose random vectors generated by _ matlab 6.5 _ for a given , then substitute these vectors into and obtain matrices . we can get the ( ) s by singular value decomposition of the matrices . the maximal among the matrices is assigned to . from the whole process , it is obvious that our numerical approach is more effective for testing the _ nonzero _ case ; it can only provide a reference for the _ zero _ case . if a standard numerical process is needed , we suggest that the approach introduced in ref . [ 17 ] be preferred . as a summary , we have shown effective criteria for tripartite qubit systems by the pioneering application of the approach of defining a tripartite pure state as a three - order tensor . however , although our criteria can be reduced to wootters concurrence [ 7 ] for bipartite systems , as mentioned above , the criteria can not serve as a good entanglement measure . therefore , it is not necessary to find out the concrete value of , but only whether it is greater than , as can be found in our examples . based on the tensor treatment of a tripartite pure state , if a more suitable quantity that can serve as a good entanglement measure can be found , it will be interesting . it deserves our attention that our approach can be easily extended to test the full separability of multipartite systems in arbitrary dimension , which will be given in forthcoming work . we would like to thank x. x. yi for extensive and valuable advice . we are grateful to the referees for their useful suggestions and comments . this work was supported by the ministry of science and technology , china , under grant no . 2100cca00700 . different from the previous definition of tensors , all the quantities with indices , such as , are called three - order tensors here . therefore , the set of all one - order tensors is the set of vectors , and the set of all two - order tensors is the set of matrices . three - order tensors are matrices corresponding to the planes in the cube ( vectors corresponding to the edges ) when any one ( two ) of their three indices is ( are ) fixed .
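the randomized test just described is straightforward to implement . a sketch follows ( python ; build_R is a hypothetical placeholder for the matrix construction , whose explicit form was lost above , and the vector dimension is likewise an assumption ) :

import numpy as np

rng = np.random.default_rng(1)

def lambda_max_estimate(build_R, rho, dim=8, trials=10000):
    """Randomized search for the maximal singular value over random
    complex unit vectors c, mirroring the procedure in the text."""
    best = 0.0
    for _ in range(trials):
        c = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
        c /= np.linalg.norm(c)
        s = np.linalg.svd(build_R(rho, c), compute_uv=False)
        best = max(best, s[0])  # singular values come sorted descending
    return best

as noted above , such a maximum over random trials can certify a nonzero value but only suggests , never proves , a zero one .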
in this paper , we present a method to construct full separability criteria for tripartite systems of qubits . the spirit of our approach is that a tripartite pure state can be regarded as a three - order tensor , which provides an intuitive mathematical formulation for the full separability of pure states . we extend the definition to mixed states and give the corresponding full separability criterion . as applications , we discuss the separability of several bound entangled states , which shows that our criterion is feasible .
the use of computer - based tests in which questions are randomly generated in some way provides a means whereby a large number of different tests can be generated ; many universities currently use such tests as part of the student assessment process . in this paper we present findings that illustrate that , although the number of different possible tests is high and grows very rapidly as the number of alternatives for each question increases , the average number of tests that need to be generated before all possible questions have appeared at least once is surprisingly low . we presented preliminary findings along these lines in . a computer - based test consists of questions , each ( independently ) selected at random from a separate bank of alternatives . let be the number of tests one needs to generate in order to see all the questions in the question banks at least once . we are interested in how , for fixed , the random variable grows with the number of questions in the test . typically , might be 10 , i.e. each question might have a bank of 10 alternatives , but we shall allow any value of , and give numerical results for and as well as for . in the case , i.e. a one - question test , we re - notate as , and observe that we have an equivalent to the classic coupon - collector problem : your favourite cereal has a coupon in each packet , and there are alternative types of coupon . is the number of packets you have to buy in order to get at least one coupon of each of the types . the coupon - collector problem has been much studied ; see e.g. . we can write as where each is the number of cereal packets you must buy in order to acquire a new type of coupon , when you already have types in your collection . thus , is the number of further packets you find you need to gain a second type , and so on . the random variables , , are mutually independent . for the distribution of , clearly we say that , or has a geometric distribution with parameter , if for , 2 , . thus . as the distribution has expectation , it follows that for different values of we therefore have the following . the amount of variability can be better appreciated through the standard deviation . the asymptotic bounds on the standard deviation of are and some values for these are in table [ ta : sd ] . the lower bound is non - trivial , i.e. positive , in each case . we are grateful to dave pidcock , a colleague in the mathematics education centre at loughborough university , for raising the query in the first place . as a member of staff using computer - based tests to assess students , he was concerned about this issue from a practical viewpoint . that led rc to post a query on allstat . cmg was not the only person to respond to the query , and we also acknowledge the others who responded , particularly simon bond . cornish , r. , goldie , c. m. , and robinson , c. l. 2006 . computer - assisted assessment : how many questions are enough ? _ computer - aided assessment in mathematics _ , 9pp . ; http://mathstore.ac.uk/articles/maths-caa-series/feb2006 .
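as an illustration of the computation above , a minimal sketch in python ( the variable names are ours , since the inline mathematics was lost in extraction ) evaluates the exact mean via the geometric decomposition and checks it by simulation :

import random

def expected_tests(s):
    """Exact coupon-collector mean: sum of geometric means s/(s-i),
    i.e. s times the s-th harmonic number."""
    return sum(s / (s - i) for i in range(s))

def simulate_tests(s, rng=random.Random(2)):
    """Draws until all s alternatives of a one-question bank have appeared."""
    seen, draws = set(), 0
    while len(seen) < s:
        seen.add(rng.randrange(s))
        draws += 1
    return draws

print(expected_tests(10))  # about 29.29 for a bank of 10 alternatives

the simulated average over many runs converges to the exact mean , confirming how few tests are needed relative to the number of distinct possible tests .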
computer - based tests with randomly generated questions allow a large number of different tests to be generated . given a fixed number of alternatives for each question , the number of tests that need to be generated before all possible questions have appeared is surprisingly low .
any theory based on classical concepts , such as locality and realism , predicts bounds on the correlations between measurement outcomes obtained in space - separation .these bounds are known as bell inequalities ( see for reviews ) .profoundly , the correlations measured on certain quantum states violate bell inequalities , implying incompatibility between the quantum and classical worldviews . which are these non - classical states of quantum mechanics ?here , we present a tool which allows one to extend the class of non - classical states , and gives further evidence that there may exist many - particle entangled states whose correlations admit a local realistic description . despite their fundamental role , with the emergence of quantum information , bell inequalities have found practical applications .quantum advantages of certain protocols , like quantum cryptography or quantum communication complexity , are linked with bell inequalities .thus , new inequalities lead to new schemes . as an example , we present communication complexity problem associated with the new multisetting inequality .specifically , based on a geometrical argument by ukowski , a bell inequality for many observers , each choosing between arbitrary number of dichotomic observables , is derived .many previously known inequalities are special cases of this new inequality , e.g. clauser - horne - shimony - holt inequality or tight two - setting inequalities . the new inequalities are maximally violated by the greenberger - horne - zeilinger ( ghz ) states .many other states violate them , including the states which satisfy two - settings inequalities and bound entangled states .this is shown using the necessary and sufficient condition for the violation of the inequalities . finally , it is proven that the bell operator has only two non - vanishing eigenvalues which correspond to the ghz states , and thus has a very simple form .this form is utilized to show that quantum states with positive partial transposes with respect to all subsystems ( in general the necessary but not sufficient condition for entanglement ) do not violate the new inequalities .this is further supporting evidence for a conjecture by peres that positivity of partial transposes could lead us to the existence of a local realistic model .the paper is organized as follows . in sectionii we present the multisetting inequality . in sectioniii the necessary and sufficient condition for a violation of the inequality is derived , and examples of non - classical states are given .next , we support the conjecture by peres in section iv , and follow the link with communication complexity problems in section v. 
section vi summarizes this paper .consider separated parties making measurements on two - level systems .each party can choose one of dichotomic , of values , observables .in this scenario parties can measure correlations , where the index denotes the setting of the observer .a general bell expression , which involves these correlations with some coefficients , can be written as : in what follows we assume certain form of coefficients , and compute local realistic bound as a maximum of a scalar product .the components of vector have the usual form : where denotes a set of hidden variables , their distribution , and the predetermined result of observer under setting .the quantum prediction for the bell expression ( [ general_bell ] ) is given by a scalar product of .the components of , according to quantum theory , are given by : where is a density operator ( general quantum state ) , is a vector of local pauli operators for observer , and denotes a normalized vector which parameterizes observable for the party .assume that local settings are parameterized by a single angle : .in the quantum picture we restrict observable vectors to lie in the equatorial plane : take the coefficients in a form : with the angles given by : the number is fixed for a given experimental situation , i.e. and , and equals : [ n]_2 + 1 , \label{eta}\ ] ] where ] , which reduces to .finally , the maximal length reads : where the modulus is no longer needed since the argument of sine is small .moreover , since the local results for each party can be chosen independently , the maximal length does not depend on particular , i.e. .since is a positive real number its power can be put to multiply the real part in ( [ re_angles ] ) , and one finds to be bounded by : ^{-n } \ ! \ ! \ ! \ !\cos \left ( \frac{\pi}{2 m } \eta + \phi_1 + ... + \phi_n \right),\ ] ] where the cosine comes from the phases of the sums in ( [ re_angles ] ) .these phases can be found from the definition ( [ vector ] ) . as only vectors rotated by a multiple of are summed ( or subtracted ) in ( [ vector ] ) , each phase can acquire only a restricted set of values .namely : with , i.e. for even , is an odd multiple of ; and for odd , is an even multiple of .thus , the sum is an even multiple of , except for even and odd . keeping in mind the definition of , given in ( [ eta ] ) ,one finds the argument of is always odd multiple of , which implies the maximum value of the cosine is equal to .finally , the multisetting bell inequality reads : ^{-n }\cos \left(\frac{\pi}{2 m } \right ) .\label{inequality}\ ] ] this inequality , when reduced to two parties choosing between two settings each , recovers the famous clauser - horne - shimony - holt inequality . for higher number of parties ,still choosing between two observables , it reduces to tight two - setting inequalities .when observers choose between three observables the inequalities of ukowski and kaszlikowski are obtained , and for continuous range of settings ( ) it recovers the inequality of ukowski .in this section we present a bell operator associated with the inequality ( [ inequality ] ) .next , it is used to derive the necessary and sufficient condition for the violation of the inequality . 
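the quantum side of the expression is driven by the ghz correlation function with equatorial settings . as a quick numerical illustration ( python ; this checks the standard identity that the n - qubit ghz correlation of equatorial observables equals the cosine of the summed phases , which is the structure the chosen coefficients mirror ) :

import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)

def ghz_correlation(phis):
    """<GHZ| (x)_j (cos p_j * sx + sin p_j * sy) |GHZ> for n qubits;
    equals cos(p_1 + ... + p_n)."""
    n = len(phis)
    ghz = np.zeros(2 ** n, complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)
    ops = [np.cos(p) * sx + np.sin(p) * sy for p in phis]
    O = reduce(np.kron, ops)
    return np.real(ghz.conj() @ O @ ghz)

phis = [0.3, 1.1, -0.4]
assert abs(ghz_correlation(phis) - np.cos(sum(phis))) < 1e-12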
using this condition we recover already known results and present some new ones . the form of the coefficients we have chosen is exactly the same as the quantum correlation function for the greenberger - horne - zeilinger state : , where the vectors and are the eigenstates of the local operator of the party . for this state the two vectors and are equal ( thus parallel ) , which means that the state maximally violates inequality ( [ inequality ] ) . the value of the left hand side of ( [ inequality ] ) is given by the scalar product of with itself : using the trigonometric identity , one can rewrite this expression in the form : .\ ] ] as before , the second term can be written as the real part of a complex number . putting in the values of the angles ( [ angles ] ) one arrives at : note that is a primitive complex root of unity . since all complex roots of unity sum up to zero , the above expression vanishes , and the maximal quantum value of the left hand side of ( [ inequality ] ) equals : if instead of one chooses the state ] forms the so - called correlation tensor . the correlation tensors of the projectors are denoted by . using the linearity of the trace operation and the fact that the trace of a tensor product is given by the product of local traces , one can write in terms of correlation tensors : since each of the local traces , the global trace is given by : the nonvanishing correlation tensor components of the ghz states are the same in the plane : for even ; and are exactly opposite in the plane : with indices equal to and all remaining equal to . inserting the traces ( [ tracing ] ) into the averaged bell operator ( [ averaged_bell_op ] ) one finds that the components in the plane cancel out , and the components in the plane double themselves . finally , the necessary and sufficient condition to satisfy the inequality is given by : where the maximization is performed over the choice of local coordinate systems , includes all sets of indices with 2 indices equal to and the rest equal to , and ^{-n }\cos \left(\frac{\pi}{2 m } \right)\ ] ] denotes the local realistic bound . we now present examples of states which violate the new inequality . as a measure of violation , , we take the average ( quantum ) value of the bell operator in a given state , divided by the local realistic bound : _ ghz state _ . first , let us simply consider . for the case of two settings per side one recovers previously known results : for three settings per side the result of ukowski and kaszlikowski is obtained : for the continuous range of settings one recovers : in the intermediate ( previously unexplored ) regime one has : for a fixed number of parties the violation increases with the number of local settings . surprisingly , the inequality implies for the cases of and that the violation decreases when the number of local settings grows . this behaviour is shown in fig . [ m_plot ] . [ fig . [ m_plot ] caption : , for the -qubit ghz state . ] the violation of local realism always grows with increasing number of parties . _ generalized ghz state . _ consider the ghz state with free real coefficients : its correlation tensor in the plane has the following nonvanishing components : , and the components with 2 indices equal to and the rest equal to take the value of ( there are such components ) . thus , all terms contribute to the violation condition ( [ ns ] ) . the violation factor is equal to .
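the violation condition involves only correlation - tensor components with local indices restricted to the x - y plane , so they are cheap to enumerate . a direct sketch of their computation ( python ; names are ours ) :

import numpy as np
from itertools import product
from functools import reduce

pauli = {"x": np.array([[0, 1], [1, 0]], complex),
         "y": np.array([[0, -1j], [1j, 0]], complex)}

def xy_correlation_tensor(rho, n):
    """All components T_{i1..in} = Tr[rho (sigma_{i1} x ... x sigma_{in})]
    with every local index restricted to the x-y plane."""
    T = {}
    for idx in product("xy", repeat=n):
        op = reduce(np.kron, [pauli[i] for i in idx])
        T[idx] = np.real(np.trace(rho @ op))
    return T

summing the appropriate squared components and comparing against the local realistic bound then gives the violation factor for any given density matrix .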
for and violation is bigger than the violation of standard two - setting inequalities .moreover , some of the states , for small and odd , do not violate _ any _ two - settings correlation function bell inequality , and violate the multisetting inequality . _ bound entangled state ._ interestingly , the inequality can reveal non - classical correlations of a bound entangled state introduced by dr : with ] .since is independent of , and for given inputs it is constant , one has ] , and it is in one - to - one correspondence with a `` weighted '' scalar product ( average success ) : using the definitions ( [ ww ] ) for and ( [ ff ] ) for one gets : with angles given by ( [ angles ] ) .we focus our attention on maximization of this quantity ._ classical scenario . _ in the _ best _ classical protocol each party locally computes a bit function , with , where denotes some previously shared classical resources .next , the bit is sent to alice , who puts as an answer the product .the same answer can be reached in the chain strategy , simply the party sends . for the given inputs the procedure is always the same , i.e. . to prove the optimality of this protocol ,one follows the proof of ref . , with the only difference that is a -valued variable now .this , however , does not invalidate any of the steps of , and we will not repeat that proof . inserting the product form of into the average success ( [ success ] ) , using the fact that , and summing over all s one obtains : which has the same structure as local realistic expression ( [ ineq_deter ] ) .thus , the highest classically achievable average success is given by a local realistic bound : ._ quantum scenario ._ in the quantum case participants share a -party entangled state . after receiving inputs each party measures observable on the state , where the observables are enumerated as in the bell inequality ( [ inequality ] ) .this results in a measurement outcome , .each party sends to alice , who then puts as an answer a product . for the given inputs the average answer reads , and the maximal average successis given by a quantum bound of : the average advantage of quantum versus classical protocol can be quantified by a factor which is equal to a violation factor , , introduced before . thus , all the states which violate the bell inequality ( including bound entangled state ) are a useful resource for the communication complexity task .optimally one should use the ghz states , as they maximally violate the inequality .alternatively , one can compare the probabilities of success , , in quantum and classical case .clearly , one outperforms classical protocols for every and every . as an example , in table [ table_adv ] we gather the ratios between quantum and classical success probabilities for small number of participants ..the ration between probabilities of success in quantum and classical case for the communication complexity problem with observers and settings .quantum protocol uses ghz state . [ cols="^,^,^,^,^,^",options="header " , ] one can ask about a ccp with no random inputs . since the numbers already represent bits of information , and only one bit can be communicated , this looks like a plausible candidate for a quantum advantage .however , in such a case a classical answer can not be put as a product of outcomes of local computations ( compare ) , and thus there is no bell inequality which would describe the best classical protocol . 
since classical performance of _all _ ccps which can lead to quantum advantage is given by some bell inequality , the task without s can not lead to quantum advantage .we presented a multisetting bell inequality , which unifies and generalizes many previous results .examples of quantum states which violate the inequality were given .it was also proven that all the states with positive partial transposes with respect to all subsystems can not violate the inequality . finally , the states which violate it were shown to reduce the communication complexity of computation of certain globally defined function .the bell inequality presented is the only inequality which incorporates arbitrary number of settings for arbitrary number of observers making measurements on two - level systems , to date .we thank m. ukowski for valuable discussions . w.l .and t.p . are supported by foundation for polish science and mnii grant no. 1 p03b 049 27 .the work is part of the vi - th eu framework programme qap ( qubit applications ) contract no . 015848 .j. f. clauser and a. shimony , rep .. phys . * 41 * , 1881 ( 1978 ) ; d. m. greenberger , m. a. horne , a. shimony , and a. zeilinger , am . j. phys . * 58 * , 1131 ( 1990 ) ; t. paterek , w. laskowski , and m. ukowski , mod .a * 21 * , 111 ( 2006 ) .
based on a geometrical argument introduced by ukowski , a new multisetting bell inequality is derived , for the scenario in which many parties make measurements on two - level systems . this generalizes and unifies some previous results . moreover , a necessary and sufficient condition for the violation of this inequality is presented . it turns out that the class of non - separable states which do not admit local realistic description is extended when compared to the two - setting inequalities . however , supporting the conjecture of peres , quantum states with positive partial transposes with respect to all subsystems do not violate the inequality . additionally , we follow a general link between bell inequalities and communication complexity problems , and present a quantum protocol linked with the inequality , which outperforms the best classical protocol .
type - i and type - ii censoring schemes are the two most popular censoring schemes used in practice . they can be briefly described as follows . suppose units are put on a life test . in type - i censoring , the test is terminated when a pre - determined time , , on test has been reached , and failures after time are not observed . in type - ii censoring , the test is terminated when a pre - chosen number , , out of items has failed . it is also assumed that the failed items are not replaced . so , in the type - i censoring scheme the number of failures is random , and in the type - ii censoring scheme the experimental time is random . a hybrid censoring scheme is a mixture of type - i and type - ii censoring schemes and it can be described as follows . suppose identical units are put to test . the test is finished when a pre - selected number out of items have failed , or when a pre - determined time on the test has been reached . from now on , we call this the type - i hybrid censoring scheme ; this scheme has been used as a reliability acceptance test in . this censoring scheme was introduced by epstein , who also studied life testing data under the assumption of an exponential distribution with mean life . epstein proposed two - sided confidence intervals for without any formal proof . fairbanks et al . partly modified the proposition of epstein and suggested a simple set of confidence intervals . chen and bhattacharya obtained the exact distribution of the conditional maximum likelihood estimator ( mle ) of and derived a one - sided confidence interval . childs et al . proposed some simplifications of the exact distribution . from the bayesian point of view , drapper and guttmann studied the same problem , and reached a two - sided credible interval for the mean lifetime based on the gamma prior . a comparison of the different methods using monte carlo simulations can be found in gupta and kundu . for some related work , one may refer to ebrahimi , jeong et al . , childs et al . , kundu , banerjee and kundu , kundu and pradhan , dube et al . and the references cited there . one of the disadvantages of the type - i hybrid censoring scheme is that there may be very few failures occurring up to the pre - fixed time . because of this , childs et al . proposed a new hybrid censoring scheme known as the type - ii hybrid censoring scheme , which can be described as follows . put identical items on test , and then stop the experiment at the random time , where , and are prefixed numbers and indicates the time of the failure in a sample of size . under the type - ii hybrid censoring scheme , we have one of the following three types of observations : + case i : + case ii : if and + case iii : + where denote the observed ordered failure times of the experimental units .
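for concreteness , the scheme is easy to simulate ( the schematic figure follows below ) . the sketch here ( python ) draws a weighted exponential sample , assuming the gupta - kundu density f(x) = ((a+1)/a) * l * exp(-l x) * (1 - exp(-a l x)) , since the pdf displayed later in the text is garbled , and then applies the type - ii hybrid stopping rule :

import numpy as np

rng = np.random.default_rng(3)

def we_sample(alpha, lam, n):
    """WE(alpha, lam) variates via the representation X = U + V with
    U ~ Exp(lam) and V ~ Exp((1 + alpha) * lam); the convolution of the
    two exponentials reproduces the assumed density above."""
    return (rng.exponential(1.0 / lam, n)
            + rng.exponential(1.0 / ((1.0 + alpha) * lam), n))

def type2_hybrid_censor(x, r, T):
    """Type-II hybrid censoring: observe the order statistics up to the
    random stopping time max(x_(r), T)."""
    xs = np.sort(x)
    stop = max(xs[r - 1], T)
    return xs[xs <= stop], stop

depending on whether the stopping time is reached before , between , or after the relevant failures , the returned sample corresponds to cases i , ii and iii described above .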
a schematic illustration of the hybrid censoring scheme is presented in figure [ fig1 ] . [ figure 1 : a schematic presentation of the type - ii hybrid censoring scheme , showing the stopping rule for cases i , ii and iii . ] in this article , we consider the analysis of type - ii hybrid censored lifetime data when the lifetime of each experimental unit follows a two - parameter weighted exponential ( we ) distribution . this distribution was originally proposed by gupta and kundu . the two - parameter we distribution with shape and scale parameters and , respectively , has the probability density function ( pdf ) : we denote a two - parameter we distribution with the pdf ( [ we ] ) by and the corresponding cumulative distribution function ( cdf ) by . the aim of this article is two - fold . first , we obtain the mles of the unknown parameters . it is observed that the maximum likelihood estimators can be obtained implicitly by solving two nonlinear equations , but they can not be obtained in closed form , so the mles of the parameters are derived numerically . the newton - raphson algorithm is one of the standard methods to determine the mles of the parameters ; to employ it , second derivatives of the log - likelihood are required at every iteration . the em algorithm is a very powerful tool for handling incomplete data problems ; see dempster et al . and mclachlan and krishnan . we therefore use the em algorithm to compute the mles . we also evaluate the observed fisher information matrix using the missing information principle , which is then used to obtain asymptotic confidence intervals for the unknown parameters . the second aim of this article is to provide bayes inference for the unknown parameters for type - ii hybrid censored data . it is observed that the bayes estimators can not be obtained explicitly ; we provide two approximations , namely lindley s approximation and the gibbs sampling procedure , and we use the gibbs sampling procedure to compute the bayes estimators and the hpd confidence intervals . we compare the performances of the different methods by monte carlo simulations , and for illustrative purposes we analyze one real data set . the rest of the article is arranged as follows . in section 2 , we provide the mles of the unknown parameters . the fisher information matrix is evaluated in section 3 . using lindley s approximation and gibbs sampling , we obtain the bayes estimators and hpd confidence intervals for the parameters in section 4 . simulation results are presented in section 5 . we verify our theoretical results via the analysis of a data set in section 6 . in this section , we study the mles of the model parameters and for the distribution with density function : for simplicity , we apply the re - parametrization and .
by this , the distribution can be written as : the likelihood function in case i is given by , for case ii , and for case iii , where is given by ( [ bl ] ) , so we can write the likelihood functions ( [ a ] ) , ( [ b ] ) and ( [ iii ] ) as : where taking the logarithm of equation [ c ] , we obtain the normal equations : the maximum likelihood estimators can be obtained by solving these equations , but they can not be expressed explicitly . so we use the em algorithm to compute them . the advantage of this method is that it converges quickly for any initial value . + the em algorithm , originally proposed by dempster et al . , is a very powerful tool for handling the incomplete data problem . + let us denote the observed and the censored data by and , respectively . here , for a given r , are not observable . the censored data vector can be thought of as missing data . the combination of forms the whole data set . in what follows we use the method of kundu and pradhan for missing data , introducing . + if we denote the log - likelihood function of the uncensored data set by , then for the e - step of the em algorithm one needs to compute the pseudo log - likelihood function as therefore , where =e(z_i|z_i > c ) ~~\mbox{and}~~ b(c;\alpha,\beta)=e[\ln(1-e^{-\beta z_i})|z_i > c],\ ] ] and they are obtained in appendix a. now the m - step involves the maximization of the pseudo log - likelihood function [ 2 ] . therefore , if at the k - th stage the estimate of is , then can be obtained by maximizing note that the maximization of [ 3 ] can be carried out quite effectively by a method similar to that proposed by gupta and kundu . first , can be obtained by solving a fixed - point type equation the function is defined ^{-1}\ ] ] where and one can follow the iteration method . once is determined , can be evaluated as . for the estimation of , we can use the invariance property of maximum likelihood estimators and obtain as follows : one of the advantages of using the em algorithm is that it provides a measure of the information in censored data through the missing information principle . louis developed a procedure for extracting the observed information matrix . in this section , we present the observed fisher information matrix using the missing value principle of louis . the observed fisher information matrix can be used to construct asymptotic confidence intervals . + using the notation : , x = observed data , w = complete data , = observed information , = complete information and = missing information , one has the relation , which we use to evaluate . + the complete information and the missing information are given respectively as : \ ] ] .\ ] ] as the dimension of is 2 , and are both of order . the elements of the matrix for the complete data set are presented in gupta and kundu . they re - parametrized the distribution as and . + we reproduce here , as evaluated by them : \ ] ] where in which + on the other hand , with the above re - parametrization and by using ( [ iwx ] ) , one can easily verify ,\ ] ] where in which now , can be computed by ( [ ix ] ) . the asymptotic variance - covariance matrix of can be obtained by inverting . we use this matrix to obtain the asymptotic confidence intervals for and . to obtain the asymptotic confidence interval for , we use the non - parametric bootstrap method .
here denotes some function of .bayes estimators , say , is evaluated by the posterior mean of .let be an observed sample from the hybrid censoring scheme , drawn from a distribution .we apply re - parametrization as and .so the likelihood function becomes and -likelihood function : it is assumed that and have the following independent gamma priors : so , the joint prior distribution of and is of the form then the posterior distribution and can be written as where now the bayes estimators of and under the squared error loss function l are respectively obtained as : =\frac{1}{k}\int_0^\infty\int_0^\infty\alpha^{w_4-n - r}\beta^{w_2+r-1}(\alpha+1)^re^{-\alpha w_3}e^{-\beta w_1}\]] and =\frac{1}{k}\int_0^\infty\int_0^\infty\alpha^{w_4-n - r-1}\beta^{w_2+r}(\alpha+1)^re^{-\alpha w_3}e^{-\beta w_1}\]] since is a function of and , then one can obtain the posterior density function of and so the bayes estimator of under the squared error loss function as : =\frac{1}{k}\int_0^\infty\int_0^\infty u^{w_4+w_2-n-1}\lambda^{w_2+r-1}(1+u)^{r}e^{-u(w_3+\lambda w_1)}\]] as these estimators can not be evaluated explicitly , so we adopt two different procedures to approximate them : * lindley approximation , * mcmc method . in previous section , based on type - ii hybrid censored scheme we obtained the bayes estimators of , and against squared error loss function .it is easily observed that theses estimators have not explicit closed forms . for these evaluation ,numerical techniques are required .one of the most numerical techniques is lindley s method ( see ) , that for these estimators can be describe as follows . in general ,bayes estimator of as a function of and is identified : where is -likelihood function ( defined by [ log ] ) and .+ by the lindley s method can be approximated as : +\frac{1}{2}[(\hat{u}_{\alpha}\hat{\sigma}_{\alpha\alpha}+ \hat{u}_{\beta}\hat{\sigma}_{\alpha\beta})(\hat{l}_{\alpha\alpha\alpha}\hat{\sigma}_{\alpha\alpha}+ \hat{l}_{\alpha\beta\alpha}\hat{\sigma}_{\alpha\beta}+\hat{l}_{\beta\alpha\alpha}\hat{\sigma}_{\beta\alpha}\]],\ ] ] where and are the mle s of and respectively .also , is the second derivative of the function with the respect to and valued of at other expressions can be calculated with following definitions : where and we have : with the above defined expressions , we obtain the approximation bayes estimators .+ also we have : the bayes estimator of under the squared error loss function becomes .\ ] ] proceeding similarly , the bayes estimator of under is given by .\ ] ] finally the bayes estimator of under is given by +\frac{1}{2}[(\hat{u}_{\alpha}\hat{\sigma}_{\alpha\alpha}+ \hat{u}_{\beta}\hat{\sigma}_{\alpha\beta})(\hat{l}_{\alpha\alpha\alpha}\hat{\sigma}_{\alpha\alpha}+ \hat{l}_{\alpha\beta\alpha}\hat{\sigma}_{\alpha\beta}+\hat{l}_{\beta\alpha\alpha}\hat{\sigma}_{\beta\alpha}\]].\ ] ] the approximate bayes estimators of , and can be obtained using lindley approximation , but it is not possible to construct highest posterior density ( hpd ) confidence intervals using this method .therefore , we suggest the following markov chain monte carlo ( mcmc ) method to generate samples from the posterior density function , and in turn to obtain the bayes estimators , and hpd confidence intervals . 
here we study the gibbs sampling method to draw samples from the posterior density function and then compute the bayes estimators and hpd confidence intervals of , and under the squared error loss function . let be an observed sample under the hybrid censoring scheme , drawn from a distribution . we apply the re - parametrization and . from ( [ pos ] ) , we can write the joint posterior density function of and given as : by this , the posterior density function of given and is [ the1 ] the conditional distribution of given and is log - concave . see appendix , part b. + by ( [ pos2 ] ) , the posterior density function of given and is [ th2 ] the conditional distribution of given and has a finite maximum point . see appendix , part c. with the help of the acceptance - rejection principle ( see devroye for details ) and the previous theorem , generation from ( [ albe ] ) can be performed using the we generator . + now we use theorems [ the1 ] and [ th2 ] and , following the idea of geman and geman , suggest the following scheme . + step 1 ) take some initial values of and , say and . step 2 ) generate and from and . step 3 ) repeat step 2 , times . step 4 ) obtain the bayes estimators of and with respect to the squared error loss function : where and are the burn - in periods in the generation of and respectively . step 5 ) obtain the hpd confidence interval of : order as and construct all the confidence intervals of , as : )}),\cdots,(\alpha_{([m_1\eta])},\alpha_{(m_1)}),\ ] ] where ] , is the number of operands of , $ ] and is the number of operands of . the generalized hypergeometric function is quickly evaluated and readily available in standard software such as maple . the conditional distribution of given and is in this function , we have and ; now it is enough to prove that is bounded . with a simple calculation we see that this function is less than the gamma function , and the gamma function is a bounded function , so this function is bounded . therefore has a finite maximum point . childs , a. , chandrasekhar , b. , balakrishnan , n. , kundu , d. ( 2003 ) exact likelihood inference based on type - i and type - ii hybrid censored samples from the exponential distribution , _ ann . _ , 55 , 319 - 330 .
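since the gamma full conditional of one parameter and the rejection step for the other depend on statistics whose symbols were lost above , the sketch below substitutes a generic metropolis - within - gibbs update for the two - block sampler of steps 1 - 5 ( python ; logpost stands for the unnormalised log joint posterior and is a hypothetical user - supplied function ) :

import numpy as np

rng = np.random.default_rng(4)

def gibbs_mwg(logpost, n_iter=11000, burn=1000, step=0.2):
    """Metropolis-within-Gibbs: update alpha and beta in turn with a
    random-walk proposal on the log scale (Jacobian term included)."""
    theta = np.array([1.0, 1.0])
    lp = logpost(*theta)
    chain = []
    for _ in range(n_iter):
        for k in range(2):
            prop = theta.copy()
            prop[k] *= np.exp(step * rng.standard_normal())
            lp_new = logpost(*prop)
            # accept with the MH ratio, corrected for the asymmetric
            # log-scale proposal via log(prop/theta)
            if np.log(rng.random()) < lp_new - lp + np.log(prop[k] / theta[k]):
                theta, lp = prop, lp_new
        chain.append(theta.copy())
    chain = np.array(chain[burn:])
    return chain.mean(axis=0), np.quantile(chain, [0.025, 0.975], axis=0)

the posterior means approximate the squared - error bayes estimators , and the shortest interval among those of a given coverage , as in step 5 , gives the hpd interval .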
a hybrid censoring scheme is a mixture of type - i and type - ii censoring schemes . we study the estimation of the parameters of the weighted exponential distribution based on type - ii hybrid censored data . applying the em algorithm , maximum likelihood estimators are evaluated . also , using the fisher information matrix , asymptotic confidence intervals are provided . applying markov chain monte carlo techniques , bayes estimators and the corresponding highest posterior density confidence intervals of the parameters are obtained . monte carlo simulations are performed to compare the performances of the different methods , and one data set is analyzed for illustrative purposes . + + _ keywords _ : asymptotic distribution , em algorithm , markov chain monte carlo , hybrid censoring , bayes estimators , type - i censoring , type - ii censoring , maximum likelihood estimators + + _ mathematics subject classification : _ 62f10 , 62f15 , 62n02
the need to communicate secretly has always been an important issue for military strategists during wartime . the one - time pad , first proposed by vernam , has been shown to be one of the most secure means of encrypting a message , provided the key is truly random and the key is as long as the message . however , a major problem with the one - time pad is the establishment of a secure key between two physically separated parties without the services of a courier . recently , there has been a major proposal to apply the laws of quantum mechanics to establish this crucial key . this new class of proposals , called quantum key distribution ( qkd ) protocols , involves the use of quantum features such as the uncertainty principle or quantum correlations to establish the necessary key and hence provides unconditionally secure communication . the first quantum key distribution protocol was proposed by bennett and brassard ( bb84 ) in 1984 , based on the fact that any measurement on an unknown state of a polarized photon by a third party will always disturb the state and hence be detectable . an extension of the scheme to three - dimensional quantum states has recently been made , and it was shown to be more secure than the two - dimensional case . another well - known variation of qkd is based on the idea of an entangled pair , detecting the presence of an eavesdropper using violations of the bell - clauser - horne - shimony - holt ( bell - chsh ) inequality . this protocol ( the ekert protocol ) is fundamentally interesting as it provides an example of how a fundamental problem in quantum mechanics , namely the bell - chsh inequality and the violation of local realism , can be applied to a physical problem . naturally , one asks if it is possible to extend this latter protocol involving the bell - chsh inequality to higher dimensional systems . the extension of the bell - chsh inequality to higher dimensions is a non - trivial and interesting problem . as higher dimensional quantum systems require much less entanglement to be non - separable than two - dimensional systems ( qubits ) , it was suspected that higher dimensional entangled systems may lead to stronger violations of local realism . these results have been shown numerically using a linear optimization method , by searching for an underlying local realistic joint probability distribution that could reproduce the quantum predictions , and confirmed analytically . the quantum channel we consider consists of a source producing two qutrits , which we denote by and , in the maximally entangled state , where and are the -th basis states of the qutrits and respectively ( these basis states can represent , for instance , spatial degrees of freedom of photons ) . qutrit flies towards alice whereas qutrit flies towards bob . each observer has at his or her disposal a symmetric unbiased six - port beamsplitter . an unbiased symmetric six - port beamsplitter performs a unitary transformation between `` mutually unbiased '' bases in the hilbert space . such devices have been tested in several quantum optical experiments , and various aspects of such devices have been analyzed theoretically . this quantum optical device has three input and three output ports .
in front of each input port there is a phase shifter . when all the phase shifters are set to zero , an incoming photon through one of the input ports has an equal chance to leave the device through any of the output ports . the elements of the unitary transformation , which describes its action , are given by where and the indices , ( ) denote the input and exit ports respectively ; are the phase shifters . these phase shifters can be changed by an observer . for convenience , we will denote the values of the three phase shifts in the form of a three dimensional vector . in our protocol both observers perform three distinct unitary transformations on their qutrits . the transformations at alice 's side are defined by the following vectors of phases , , whereas the transformations at bob 's side are defined by , , . the observers choose their transformations randomly and independently for each pair of incoming qutrits . after performing the transformation defined by the vectors of phases , the state reads . the observers perform the measurement of the state of the qutrit in the basis in which is defined , that is , ( ) . we have adopted an uncommon but useful complex value assignment to the results of the measurements , first used in : namely , for the result of the measurement of the ket we ascribe the value . this value assignment naturally leads to the following definition of the correlation function ( for short ) between the values of alice 's and bob 's results of measurements , where denotes the probability of obtaining the result by alice and the result by bob for the respective values of the phase shifts they have used . it can be shown that the above correlation function takes the form given in eq . ( [ corr ] ) , where , for instance , denotes the second component of the -th vector of phases for alice . note that . this means that the results of the measurement obtained by alice and bob are strictly correlated . when alice obtains the results , bob must register the results respectively . thus , only the following pairs of the results are possible ( denoted subsequently by ) , and each pair of correlations occurs with the same probability equal to . let us also define the following quantity . it can be shown , using the recently discovered bell inequality for two qutrits , that according to local realistic theory can not exceed . however , when using the quantum mechanical correlation function ( [ corr ] ) , acquires the value . therefore , to violate the above bell inequality in this case one must reduce the correlation function by the factor ( such a reduction is possible by adding symmetric noise to the system ) . it has been proved that the above bell inequality gives necessary and sufficient conditions for local realism in this case . after the transmission has taken place , alice and bob publicly announce the vectors of phase shifts that they have chosen for each particular measurement and divide the measurements into two separate groups : a first group for which they have used the vectors , and , , and a second group for which they have used . subsequently , alice and bob announce in public the results of the measurements they have obtained , but only within the first group . in this way they can compute the value of . if this value is not equal to , it means that the qutrits have somehow been disturbed . the source of this disturbance can be either an eavesdropper or noise .
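the structure of these correlations can be made concrete with a small numerical sketch . the code below is our own illustration : it assumes the common convention that the unbiased six - port device acts as the 3 - point fourier `` tritter '' matrix with elements proportional to omega^{jk} , omega = exp ( 2 pi i / 3 ) , preceded by the input phase shifters , and that the complex value ascribed to outcome k is omega^k , so that the correlation function is the expectation of omega^{k+l} over the joint outcome probabilities :

import numpy as np

w = np.exp(2j * np.pi / 3)   # complex value assigned to measurement outcomes

def tritter(phases):
    # assumed form : 3 - point fourier matrix acting after the input phase shifters
    F = np.array([[w ** (j * k) for j in range(3)] for k in range(3)]) / np.sqrt(3)
    return F @ np.diag(np.exp(1j * np.asarray(phases)))

# maximally entangled two - qutrit state ( |00> + |11> + |22> ) / sqrt(3)
psi = np.eye(3).reshape(9) / np.sqrt(3)

def correlation(phases_a, phases_b):
    out = np.kron(tritter(phases_a), tritter(phases_b)) @ psi
    P = (np.abs(out) ** 2).reshape(3, 3)          # joint probabilities P(k , l)
    ks, ls = np.meshgrid(range(3), range(3), indexing="ij")
    return np.sum(P * w ** (ks + ls))

print(correlation([0, 0, 0], [0, 0, 0]))   # prints ( 1 + 0j ) up to rounding

with all phases set to zero the only possible outcomes satisfy k + l = 0 ( mod 3 ) , for which omega^{k+l} = 1 , so the printed correlation is 1 ; this is the strict correlation of results noted above .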
in case of no disturbancethe results from the second group allow them , due to the mentioned correlations , to generate a ternary cryptographic key .for instance when alice gets the sequence of values , say then bob must get the following sequence of results , .let us consider a symmetric incoherent attack in which the eavesdropper ( eve ) controls the source that produces pairs of qutrits used by alice and bob to generate the cryptographic key .naturally , if eve wants to acquire any information about the key , she must introduce some disturbance to the state of the qutrits .her only chance of being undetected is to hide herself behind what , to alice and bob , may look like an environmental noise in the channel .we assume that the noise is symmetrical in the sense that the correlation function in the presence of it reads where .this requirement can only be fulfilled if the reduced state for alice and bob ( after tracing out eve s degrees of freedom ) is of the form where the real ( not necessarily all positive ) numbers , and where the maximally entangled orthogonal states ( ) read this choice of states stems from the fact that only the above states generate correlation functions that are proportional to . to be more specific , the state gives the correlation function whereas the state gives the correlation function .thus , if we compute the correlation function on the state , we arrive at the following formula from eq.([noise ] ) , we obtain the condition , which is only possible if ( is real ) .eve can prepare the reduced density operator ( [ state ] ) by preparing an entangled state of the form , are the computational basis states of the two qutrits , and are states of ancilla . without loss of generality , we can assume that they are normalized ( which implies that ) . note that the most general state of the joint system of alice s and bob s qutrits and eve s ancilla reads . however , eq . ( [ state ] ) and the requirement that imposes the following conditions on the states of the ancilla denoting by we arrive at the following set of conditions eve s strategy is the following .she prepares the state ( [ ancilla ] ) , sends the qutrits to alice and bob and keeps her ancilla .she then waits for public communication between alice and bob .when the settings of alice s and bob s apparatus ( phase shifts ) are revealed , eve adopts the following algorithm : ( i ) if the chosen settings are not the ones used for the key generation she ignores the ancilla ; ( ii ) if the settings are the ones for which the key is generated , i.e. , , she identifies the ancilla state .let us first find the transformed state in case ( ii ) , i.e. , the state .a straightforward computation yields where the un - normalized states read note that ( [ eq : transstate ] ) can also be written more conveniently as where we have grouped the terms into three orthogonal subspaces associated with alice and bob generating the correct key , and the two incorrect keys , or . note also that the ancilla states of one subspace are orthogonal to the ancilla states of the other subspaces .the probability that eve projects into the subspaces spanned by the states and are respectively .we have considered the fact that the states within each bracket in eq.([groupeq ] ) have the same norms with the same mutual scalar products . 
moreover , these scalar products are all real .eve now has to determine the state of her ancilla , given that alice and bob have projected the whole state into one of three subspaces associated with the three cases .these subspaces are orthogonal so that eve can , in principle , determine without error , which of these cases alice and bob have .the three ancilla vectors in each subspace corresponding to the result obtained by alice and bob are symmetric and equiprobable .this makes eve s task of discrimination easier as this case has an analytic optimal solution using the so - called `` square - root measurement '' .we define the operator , where are the ancilla states spanning the subspace associated with alice and bob s measurement outcomes .since we are discriminating 3 vectors in a 3-dimensional space , the optimum measurement directions , are orthogonal , hence eve simply performs a projective measurement on her ancilla ( fig .[ fig : discrim ] ) .thus , eve s error rate is given by where is the probability of correctly identifying the three states of the ancilla in the i - th subspace .these probabilities are given by where due to the symmetry of the noise introduced by eve , the error rate between alice and bob determined using eq.([state ] ) and the conditions in eq.([condition ] ) is we also note that whenever eve eavesdrops , the correlation function obtained by alice and bob is reduced by .therefore , if this factor is less than , the bell inequality is not violated and so alice and bob will abort the protocol .this implies that eve must keep this factor above this value .fig.[fig : error ] shows the three dimensional plots of the error rates of eve as a function of the parameters and ( labeled by surface i ) as well as the error rate between alice and bob ( labeled by surface ii ) .the region in which the factor is greater than the threshold value ( ) is demarcated by the `` wall '' labeled . in the region bounded by , the error rate of eve is always greater than the error rate between alice and bob .an alternative approach to test the security of the protocol against such incoherent symmetric attack is to consider the mutual information between alice and eve and compare it with the mutual information between alice and bob .the mutual information between alice and eve is given by the following expression fig .[ mutual ] shows the plan elevation of the 3-dimensional plots of the mutual information as a function of the parameters and .the line of intersection between and clearly lies behind the wall separating the region in which the bell inequality is violated from the region ( ) in which local realistic description is possible ( ) . in the region , . from numerical calculation ,the maximum value of v for which eve s mutual information equals alice and bob s is 0.6629 .thus , alice and bob have a buffer region in which to operate securely from this kind of attack by eve . to summarize ,we have presented a cryptographic protocol using qutrits which is resistant to a form of symmetric , incoherent attacks .the qutrit bell inequality provides a sufficient condition for secure communication .however , this attack may not be optimal so the bell inequality may prove to be necessary .shannon , bell syst . tech .j. , * 28 * 656 ( 1949 ) .d. bruand c. macchiavello , phys .88 * , 127901 ( 2001 ) .ekert , phys .lett . * 67 * , 661 ( 1991 ) .d. kaszlikowski , p. gnaciski , m. ukowski , w. miklaszewski and a. zeilinger , phys .* 85 * , 4418 ( 2000 ) .d. kaszlikowski , l. c. 
kwek , j .- l .chen , m. ukowski and c. h. oh , quant - ph//0106010 .d. collins , n. gisin , n. linden , s. massar , s. popescu , quant - ph//0106024 .t. durt , d. kaszlikowski , and m. ukowski , private communication ( 2000 ) .j. schwinger , proc .acad . sc . *46 * , 570 ( 1960 ) .i. d. ivanovic , j. phys .a * 14 * , 3241 ( 1981 ) .w. k. wooters , found .* 16 * , 391 ( 1986 ) . c. mattle , m. michler , h. weinfurter , a. zeilinger and m. ukowski , appl .b * 60 * , s111 ( 1995 ) .m. reck , phd thesis ( supervisor : a. zeilinger ) ( university of innsbruck , 1996 , unpublished ) .m. reck , a. zeilinger , h. j. bernstein and p. bertani , phys .lett . * 73 * , 58 ( 1994 ) .i. jex , s. stenholm and a. zeilinger , opt . comm . * 117 * , 95 ( 1995 ) .jing - ling chen , d. kaszlikowski , l. c. kwek and c. h. oh .d. kaszlikowski , l. c. kwek , jing ling chen , m. ukowski , and c. h. oh , phys . rev .a * 65 * , 032118 ( 2002 ) .chen , d. kaszlikowski , l.c .kwek , c.h .oh and m. zukowski , phys .a , * 64 * , 052109 ( 2001 ) .a. chefles , _ contemporary physics _ * 41 * , 401 ( 2000 ) .
we present a cryptographic protocol based upon entangled qutrit pairs . we analyse the scheme under a symmetric incoherent attack , plot the region for which the protocol is secure , and compare this with the region of violations of certain bell inequalities .
a new form of generalized nonextensive entropy recently proposed by us has been shown in to give small but interesting departures from the shannon case in terms of thermodynamic properties of a system in a manner similar to but also somewhat different from tsallis entropy .conceptually , the new entropy appears from a novel definition of entropy in terms of the rescaled phase cells due to correlated clusters , and in a limit similar to the tsallis case approaches shannon s classical extensive entropy .kaniadakis has also suggested the use of deformed functions leading to unusual forms of entropy , from a kinetic principle related to phase space , which gives excellent results for cosmic ray spectra . in our casethe definition of entropy is particularly simple , as it can be expressed simply as the divergence of a vector representing the modified probabilities for the different possible states taking into account a rescaling due to correlations or clustering due to interactions between the microsystems . in microscopic systemsquantum entanglement of states is also a relevant issue .some authors have studied the problem of quantum entanglement of two states in the picture of tsallis type nonextensive entropy .the generalization of shannon entropy to the very similar von neumann entropy using density operators in place of probability distributions reveals common features of the stochastic and the quantum forms of uncertainties and this treatment can be extended to tsallis form too .our purpose here is to present a combined study of stochasticity and quantum entanglement , so that the former emerges from the quantum picture in a natural way , and then we intend to show that our new approach of defining entropy also allows us to obtain a measure of mutual information that involves stochasticity and entanglement together in a clear comprehensible way .the fact that our new definition of entropy , which is conceptually very simple , also gives the probability distribution function in a closed form in terms of lambert functions allows one to carry out many calculations with the same ease as for tsallis entropy . in this work , however , the probability distribution will not be needed for explicit use .entropy is intuitively associated with randomness , because it is a measure of the loss of information about a system , or the indeterminacy of its exact state , which in turn depends on the probability distribution for various states .a uniform probability distribution function ( pdf ) among all states indicates maximal uncertainty in state space and gives the maximal entropy , whereas dirac / kronecker delta ( continuous or discrete states ) pdf with no uncertainty has zero entropy .combinatorics gives the boltzmann form \ ] ] because in equilibrium the are simply , where is the total number of subsystems , and is the number of subsystems in the i - th state . 
in terms of the themselvesone gets the shannon form given below .it is well - known that maximizing the entropy with the constraints and ( with the energy of the i - th state , and u the total energy , which is fixed ) gives the exponential probability distribution where the lagrange multiplier constant can be identified as the inverse of the temperature .let us now consider shannon coding theorem : when the letters of the alphabet used in a code have the probabilities , then it can be shown fairly easily that a stream of random letters coming out of the source with the given probabilities in the long run will relate the entropy to the probability of the given sequence : \ ] ] where is the entropy per unit and is the large number of letters in the sequence .we shall now define entropy from a somewhat different viewpoint which takes into account interaction among the units , producing clusters of size units for the i - th state .this effective size may in general be a fraction , and if the interaction is weak , the average cluster size is just over unity . if we think of liquid clusters , the typical subsystem in state may bean assembly of molecules , but this may change due to environmental factors , such as ph value , to , so that we have a rescaling value of , which may be greater or less than 1 .in general we allow to be different for each .since is the probability of a single occurrence of the i - th state , i.e. for a cluster of size unity ( which may consist of a typical number of subunits ) , the probability for the formation of a cluster of size is . let us now consider the vector this is -dimensional , where is the number of single unit states available to the system components .let us now consider the phase space " defined by the co - ordinates . as we have said above ,the deviations of these parameters from unity give the effective ( which may be fractional when an average is taken ) cluster sizes in each of the states . a value smaller than unity indicates a degeneration of the micro - system to a smaller one in a hierarchical fashion , partially if it is a fraction . in other wordswe are considering a scenario where clusters may form superclusters or be composed of subclusters , with a corresponding change of scale in terms of the most basic unit obtainable .we have dealt elsewhere with the interesting question an oligo - parametric hierarchical structure of complex systems , but here , we restrict ourselves to cluster hierarchy changes that do no qualitatively change the description of the system . hence , if we take the divergence of the vector in the space , it is a measure of the escape of systems from a given configuration of correlated clusterings . and , inversely , the negative of the divergence shows the net influx of systems into an infinitesimal cell with cluster sizes . if all the are unity , then we have unfragmented and also non - clustered , i.e. uncorrelated units at that hierarchy level. we can argue first from the point of view of statistical mechanics that this negative divergence or influx of probability , may be interpreted as entropy . we know that the free energy is defined by where is the free energy , is the internal energy , and is the entropy . 
is the temperature , or a measure of the average random thermal energy per unit with boltzmann constant chosen to be unity , and hence is a measure of the random influx of energy into the system due to the breaking / making of correlated clusters due to random interactions in a large system .this allows the subtracted quantity to be the free energy or the useful " energy .there are usual thermodynamic phase space factors in dealing with a macroscopic system , which we drop as common factors in what follows . in terms of the shannon coding theoremalso we arrive at the same expression .since , with average clustering the i - th state occurs with probability , a stream of units ( clustered ) emitted will correspond to the probability = \prod_i p_i^{p_i^{q_i}}\ ] ] which too gives us eqn .[ ent ] . in have developed the statistical , mechanics of this entropy in detail , by first obtaining its probability distribution function in terms of the lambert w function .however , we used , for simplicity an isotropic ( in state space ) correlation and rescaling , i.e. we had a single common , as in tsallis entropy . in the rest of this work we shall use the same simpler expression .when the state vector in the combined hilbert space of two particles ( or subsystems in and ) can not be expressed as the factorizable product of vectors in the hilbert spaces of the two subsystems , it is by definition entangled .hence entanglement is actually a property related to projection in the subspaces , and can not be expected to be measurable by properties in the bigger space alone . given the state which gives density matrix for the space for the trace over the part where the are now the coefficient matrices . in terms of density matricesan entangled state ( for an explicit example ) of two qubits ( a qubit " , or quantum bit , being a quantum superposition of two possible states ) may be expressed by the reduced matrix from the basis sub - set : \ ] ] with for the pure quantum ( entangled ) state , and .this entanglement occurs in the subspace of the product hilbert space involving only the two basis vectors and .other entangled combinations are equivalent to this form and may be obtained from it simply by relabeling the basis vectors , and hence we shall use this as the prototype . 
for , we have an impure state with a classical stochastic component in the probability distribution , although we still have probability conservation as , which remains unchanged under any unitary transformation . a possible measure of the factorizability ( `` purity '' ) of a quantum state , or its quantum non - entanglement , which remains invariant under changes of , is = c^4 + s^4 . so , for the maximum entanglement when , and the minimal entanglement corresponds to ( pure factorizable states ) when . quantum impurity represented by classical stochasticity attains the maximum value when , and is nonexistent when , corresponding to a pure entangled state . we note that does not involve the stochasticity - related parameter at all , but remains the quantifier of the quantum entanglement . another equivalent but conceptually possibly more interesting way of quantifying entanglement may be the parameter ( - tr[\rho_a ] tr[\rho_b ] ) = \sin^2 ( 2\theta ) , which is more symmetric in the two subspaces and is similar to a correlation function . it gives 0 for no entanglement when , and maximal entanglement 1 for , as desired . this definition of entanglement is in the spirit of mutual information , though we have not used the entropy at this stage , but only the probabilities directly . it too does not involve the stochasticity in terms of the purity parameter . in the relation above we have used , and similarly for . in our specific case , for or for , with = 1 ensured . it is possible to formulate the stochasticity by coupling the entangled state to the environment state quantum mechanically and then taking the trace over the environment states . the trace over the environment yields for the entangled mixture of and in , and for the couplings with and , the trace over states gives the density matrix , so classical stochasticity has been introduced by taking the trace over the environment space with . let us now consider a single system interacting with the environment . the product space contains entanglement between the measured system and the environment , and hence the density operator for the combined system - environmental space is as given in eqn . [ rhogam ] with to indicate a pure entangled state , and the environment - traced density is given by eqn . [ rhoabtr ] . here and are equal . mutual information may be identified as the entanglement , and defined by ( - ( tr_{a}[\rho_a ] )^2 ) = \sin^2 ( 2\theta ' ) , as before , with the angle of entanglement . hence , measurements on the system reflect the coupling of the system to the environment , and the mutual information is contained in the parameters of the system itself . in terms of the von neumann entropy , which , with mixing in an orthogonal quantum basis , becomes similar to the shannon entropy , we have . we know from the araki - lieb relation that , with in a pure quantum state , we must have , and hence , which too confirms the view that the system itself contains in its parameters the mutual information in such a case , as we found above .
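these relations are simple to verify numerically . the sketch below is our own illustration , using the prototype block form above with the off - diagonal coherence damped by the purity parameter gamma : it checks that the purity tr[\rho_a^2] = c^4 + s^4 of the reduced state does not depend on gamma , that 2 ( 1 - tr[\rho_a^2] ) = sin^2 ( 2 theta ) , and that the von neumann entropy of the joint state vanishes in the pure entangled case , as the araki - lieb argument requires :

import numpy as np

def measures(theta, gamma=1.0):
    c, s = np.cos(theta), np.sin(theta)
    # density matrix in the { |00> , |11> } block , coherence damped by gamma
    rho_ab = np.array([[c ** 2, gamma * c * s],
                       [gamma * c * s, s ** 2]])
    rho_a = np.diag([c ** 2, s ** 2])      # tracing over b leaves a diagonal state
    purity = np.trace(rho_a @ rho_a)       # = c**4 + s**4 , independent of gamma
    entanglement = 2 * (1 - purity)        # = sin(2 * theta)**2
    lam = np.linalg.eigvalsh(rho_ab)
    S_ab = -sum(l * np.log(l) for l in lam if l > 1e-12)   # joint von neumann entropy
    return purity, entanglement, S_ab

theta = 0.6
for gamma in (1.0, 0.5, 0.0):   # pure entangled -> partly stochastic -> classical mixture
    print(measures(theta, gamma))

for gamma = 1 the joint entropy is zero while the reduced entropies are not , so the mutual information reduces to twice the single - system entropy , exactly as stated above .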
if we use our new form of entropy with the hypothesis that the mutual information is still given by the same form with the parameter not equal to , which is the case for shannon entropy , then we get = - 2 c'^{2q} \log ( c'^{2q} ) - 2 s'^{2q} \log ( s'^{2q} ) . with the 3-system entanglement shown in eqn . [ psiabe ] and the relatively simple choice of couplings in eqn . [ carr ] , we have already shown in eqn . [ rhoc ] . similar construction of , and , and defining the 3-system mutual information as , and with for any for a single 3-system pure state , we may find the 3-system mutual information . tracing over the space gives , which , using as basis , , and in the product space , yields , and an identical matrix for . finally we get , using the relevant eigenvalues , where the eigenvalues and are for the matrix obtained after tracing over -space , with given by eqn . [ gamma ] . had we started with a stochastic picture of an entangled impure a - b system , with representing the stochasticity , as we have suggested above , then the mutual information would be . in fig . [ fig1 ] we first show the mutual information ( mi ) calculated according to the shannon form of the entropy , which is equivalent to our form at , as a function of the entanglement angle and the entanglement angle of the -system with the environment , which is related to the stochasticity as explained above . we note that the mutual information is virtually independent of the angle of entanglement with the environment . hence , it seems that traditional entropy in this case is insensitive to details of coupling with the environment when the mutual information between two systems is measured . [ fig1 : mi as a function of the entanglement angle in a - b space and the entanglement angle with the environment , which is related to the stochasticity . ] in fig . [ fig2 ] and in fig . [ fig3 ] we show the deviations of our mi from the shannon mi , as a function of the entanglement angles and respectively , keeping the other angle at in each case . there is symmetry around . the variation is fairly smooth for fixed , i.e. fixed entanglement with the environment . however , if the entanglement between and is kept fixed at near , then the mutual information using our form of entropy changes sharply with near the symmetry value . one can see that this comes from one of the eigenvalues of the density matrix approaching zero for this mixing value , and with , there is either a sharp peak or dip compared to the shannon entropy case , which has fixed . [ fig2 : difference of mi from our entropy with that from shannon entropy at . ] [ fig3 : same as fig . [ fig2 ] with . ] fig . [ fig2 ] shows little variation with changing for almost any . fig . [ fig3 ] shows pronounced changes at small q ( ) for different . [ fig4 : mi difference between our entropy form and shannon for . ] [ fig5 : same as fig . [ fig4 ] but for . ] in fig . [ fig4 ] and fig . [ fig5 ] we show the difference between our mi and the shannon mi as a function of and simultaneously , keeping fixed at and at . of course at , we get no difference , as our entropy then coincides with the shannon form . here too we notice that the mixing angle between and shows fairly smooth variation , but , or equivalently the stochasticity , causes a pronounced peak ( for ) or dip ( for ) .
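the qualitative behaviour of these comparisons can be reproduced with a short computation . the sketch below is our own illustration : it assumes the isotropic generalized form s_q = - sum_i lambda_i^q log lambda_i evaluated on the eigenvalues of the prototype two - unit density matrix , and defines the mutual information as s_q ( a ) + s_q ( b ) - s_q ( ab ) :

import numpy as np

def S_q(eigs, q=1.0):
    lam = np.array([l for l in eigs if l > 1e-12])
    return -np.sum(lam ** q * np.log(lam))   # q = 1 recovers shannon / von neumann

def mutual_information(theta, gamma, q=1.0):
    c, s = np.cos(theta), np.sin(theta)
    rho_ab = np.array([[c ** 2, gamma * c * s],
                       [gamma * c * s, s ** 2]])
    lam_ab = np.linalg.eigvalsh(rho_ab)
    lam_a = [c ** 2, s ** 2]                 # spectrum of both reduced states
    return 2 * S_q(lam_a, q) - S_q(lam_ab, q)

theta, gamma = np.pi / 4, 0.7
for q in (0.8, 1.0, 1.2):
    print(q, mutual_information(theta, gamma, q) - mutual_information(theta, gamma, 1.0))

at q = 1 the difference from the shannon value vanishes by construction , and it grows as q moves away from unity , consistent with the deviations discussed above .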
here too we can conclude that our method of entropy calculation can indicate a greater role of the entanglement with the environment when this mixing is nearly equal for the entangled states . [ fig6 : difference between mi from our entropy and that from tsallis entropy with . ] [ fig7 : same as fig . [ fig6 ] but with . ] in view of the prevalent familiarity with the tsallis form of nonextensive entropy , we have previously compared our form with results from tsallis entropy in formulating a general thermodynamics . there we showed that despite the conceptual and functional differences between tsallis entropy and our new form , the results are very similar if we take the tsallis to be twice as far from unity ( the shannon equivalent value ) as our value of . in fig . [ fig6 ] and fig . [ fig7 ] we show the difference of our mi from that derived from tsallis entropy . we again note that among and , differences are both relatively more significant for values different from , for both angles near , with peaks and dips similar to the comparison with the mutual information calculated with shannon entropy . it is interesting to note here that recently , in a study of the entropy of a chain of spins in a magnetic field , it has been found that both the usual von neumann and renyi forms of entropy yield nonzero and surprisingly simple closed expressions . though this work does not mention entanglement explicitly , the correlation functions presented there , which determine the density matrix and hence its diagonalized form needed for the entropy calculation , actually are manifestations of the entanglement among the spins and between the spins and the magnetic field . the chain has been split into two parts , similar to our a and b subsystems , and the external magnetic field acts like the environment we have introduced in this work . though they carry out their extensive calculations at zero temperature , unlike our finite temperature treatment , the fact that they obtain a nonzero for the first spins is apparently due to the segmentation of the pure state of the fully entangled quantum system and the consideration of part only for the entropy calculation , which effectively is equivalent to summing the states of part and the entanglement with the environment , and produces entropy due to the corresponding loss of information about the state of the whole system . hence , their results for this explicit model are consistent with our general result that classical stochasticity and entropy may be a reflection of segmented consideration of bigger complete systems . the values of the entropy of different types , such as the canonical shannon form or generalized forms such as the renyi form , which goes to the shannon form in the usual limit , like that of the related parameter we have mentioned for tsallis entropy and for our new form of entropy in this work , reflect the extent of entanglement or interaction or , equivalently , correlation .
in their worka length scale comes out of this segmentation , which appears to be similar to the angle of entanglement in our case .we do not get a phase transition as they do , because we have considered a simplified general finite system of only two or three component subsystems , not an infinite chain , and finite systems can not show any phase transitions .we have shown how a simple definition of the entropy in terms of influx of states into cells of any given cluster sizes in various states can give us a new nonextensive form of entropy with a closed form of the probability distribution function , and which coincides with the usual form for uncorrelated microsystems .we have then seen that classical stochasticity can be derived from quantum entanglement with the environment , and it influences the mutual information between quantum states .the functional form of the mi differs as per the definition of the entropy , but the numerical differences in mi resulting from various forms of entropy usually differ rather subtly according the parameters of entanglement , within the system , and with the environment , as well as the scaling type parameter introduced in the tsallis form and in the new form introduced by us .however , for the angle of entanglement with the environment our entropy differs from both the shannon entropy and tsallis entropy when both angles of entanglement are near the symmetry point .the differences between the forms become more pronounced as varies from unity .entanglement and mutual information are such fundamental concepts that experimental tests need to be designed to distinguish from minute quantitative differences the appropriateness of various theoretical forms of entropy .theoretical works like the study of large systems such as spin chains may also help differentiate the appropriateness of various forms of entropy such as shannon , tsallis , renyi or our suggested new form , and the quantification of mutual information . the notion of clusters changing constantly into various sizes , which is the basis of our definition of the new form of generalized entropy may be the most relevant concept for the treatment of liquids and other material , where such phenomena form an integral part of the dynamics .the author would like to thank andrew tan and ignacio sola for discussions and g. kaniadakis and j.f .collet for useful feedback .00 1999 quantum entanglement inferred by the principle of maximum tsallis entropy .a _ * 60 * , 34613466 . 2002 nonadditive entropies and quantum entanglement . _ physica a _ * 306 * , 316 1970 entropy inequalities .phys . _ * 18 * , 160170 .2001 classical and quantum complexity and non - extensive thermodynamics ._ chaos , fractals and solitons _ * 13 * , 367370 .2004 quantum spin chains , toeplitz determinants and the fisher - harwig conjecture ._ j. stat ._ * 116 * , 7995 .2001 nonlinear kinetics underlying generalized statistics . _physica a _ * 296 * , 405425 .2002 statistical mechanics in the context of special relativity .e _ * 66 * , 056125 .( john wiley , ny , 1998 ) p. 368 .( cambridge u.p . ,ny , 2000 ) 1994 the classical n - body problem within a generalized statistical mechanics._j .* 27 * , 57075757 . 2007the lambert function and a new nonextensive entropy _i m a j. appl .( in press ) 2007 oligo - parametric hierarchical structure of complex systems ._ neuroquantology journal _ * 5 * , 8599 1988 possible generalization of boltzmann - gibbs statistics . _ j. stat .phys . 
_ * 52 * , 479 - 487 . 2002 entanglement versus bell violations and their behaviour under local filtering operations . _ phys . rev . lett . _ * 89 * , 170401 . 1999 entanglement and nonextensive statistics . _ phys . lett . a _ * 260 * , 335 - 339 .
we first show how a new definition of entropy as a divergence in cluster - size space , which is intuitively very simple , leads to a generalized form that is nonextensive for correlated units but coincides exactly with the conventional one for completely independent units . we comment on the relevance of such an approach for variable - size microsystems such as in a liquid . we then indicate how the entanglement and purity of a two - unit compound state can depend on their entanglement with the environment . we consider the entropy of tsallis , which is used in many different real - life contexts , and also our new generalization , which takes into account correlated clustering in a more transparent way and is just as amenable mathematically as that of tsallis , and show how both purity and entanglement can appear naturally together in a measure of mutual information in such a generalized picture of the entropy , with values differing from the shannon type of entropy . this opens up the possibility of using such an entropy in a quantum context for relevant systems , where interactions between microsystems make clustering and correlations a non - ignorable characteristic .
max - convolution occurs frequently in signal processing and bayesian inference : it is used in image analysis , in network calculus , in economic equilibrium analysis , and in a probabilistic variant of combinatoric generating functions , wherein information on a sum of values into their most probable constituent parts ( _ e.g. _ identifying proteins from mass spectrometry ) .max - convolution operates on the semi - ring , meaning that it behaves identically to a standard convolution , except it employs a operation in lieu of the operation in standard convolution ( max - convolution is also equivalent to min - convolution , also called infimal convolution , which operates on the tropical semi - ring ) . due to the importance and ubiquity of max - convolution ,substantial effort has been invested into highly optimized implementations ( _ e.g. _ , implementations of the quadratic method on gpus ; ) .max - convolution can be defined using vectors ( or discrete random variables , whose probability mass functions are analogous to nonnegative vectors ) with the relationship .given the target sum , the max - convolution finds the largest values ] for which . & = & \max_{\ell , r : \,m = \ell+r } l[\ell ] r[r ] \\ & = & \max_\ell l[\ell ] r[{m-\ell}]\\ & = & \left ( l ~*_{\max}~ r \right)[m ] \\\end{aligned}\ ] ] where denotes the max - convolution operator . in probabilistic terms, this is equivalent to finding the highest probability of the joint events that would produce each possible value of the sum ( note that in the probabilistic version , the vector would subsequently need to be normalized so that its sum is 1 ) .although applications of max - convolution are numerous , only a small number of methods exist for solving it .these methods fall into two main categories , each with their own drawbacks : the first category consists of very accurate methods that are have worst - case runtimes either quadratic or slightly more efficient than quadratic in the worst - case .conversely , the second type of method computes a numerical approximation to the desired result , but in steps ; however , no bound for the numerical accuracy of this method has been derived .while the two approaches from the first category of methods for solving max - convolution do so by either using complicated sorting routines or by creating a bijection to an optimization problem , the numerical approach solves max - convolution by showing an equivalence between and the process of first generating a vector for each index of the result ( where = l[\ell ] r[{m-\ell}] ] .when and are nonnegative , the maximization over the vector can be computed exactly via the chebyshev norm & = & \max_\ell u^{(m)}[\ell ] \\ & = & \lim_{p \to \infty } \| u^{(m ) } \|_p\\\end{aligned}\ ] ] but requires steps ( where is the length of vectors and ) .however , once a fixed -norm is chosen , the approximation corresponding to that can be computed by expanding the -norm to yield \right)}^{p } \right)}^{\frac{1}{p } } \\ & \approx & { \left ( \sum_\ell { \left ( u^{(m)}[\ell ] \right)}^{p^ * } \right)}^{\frac{1}{p^*}}\\ & = & { \left ( \sum_\ell { l[\ell]}^{p^ * } ~ { r[{m-\ell}]}^{p^ * } \right)}^{\frac{1}{p^*}}\\ & = & { \left ( \sum_\ell { \left(l^{p^*}\right)}[\ell ] ~ { \left(r^{p^*}\right)}[{m-\ell } ] \right)}^{\frac{1}{p^*}}\\ & = & { \left ( l^{p^ * } ~*~ r^{p^ * } \right)}^{\frac{1}{p^*}}[m]\end{aligned}\ ] ] where \right)}^{p^ * } , { \left ( l[1 ] \right)}^{p^ * } , ~\ldots,~{\left ( l[{k-1 } ] \right)}^{p^ * } ~\rangle ] \gets { r[r ] } 
^{p^*} ] the effects of underflow will be minimal ( as it is not very far from standard fft convolution , an operation with high numerical stability ) , but it can still be imprecise due to numerical `` bleed - in '' ( _ i.e. _ error due to contributions from non - maximal terms for a given because the -norm is not identical to the chebyshev norm ) .overall , this will perform well on indices where the exact value of the result is small , but perform poorly when the exact value of the result is large . as noted above , will offer the converse pros and cons compared to using a low : numerical artifacts due to bleed - in will be smaller ( thus achieving greater performance on indices where the exact values of the result are larger ) , but underflow may be significant ( and therefore , indices where the exact results of the max - convolution are small will be inaccurate ) . the higher - order piecewise method formalizes the empirical cutoff values found in serang 2015 ; previously , numerical stability boundaries were found for each by computing both the exact max - convolution ( via the naive method ) and via the numerical method using the ascribed value of , and finding the value below which the numerical values experienced a high increase in relative absolute error .those previously observed empirical numerical stability boundaries can be formalized by using the fact that the employed numpy implementation of fft convolution has high accuracy on indices where the result has a value relative to the maximum value ; therefore , if the arguments and are both normalized so that each has a maximum value of 1 , the fast max - convolution approximation is numerically stable for any index where the result of the fft convolution , _i.e. _ ] will be the result of underflow from repeated addition and subtraction ( neglecting the non - influencing multiplication with twiddle factors , which each have magnitude ) .the numerically imprecise routines are thus limited to ; when ( _ i.e. _ , , the machine precision ) , then will return instead of . to recover at least one bit of the significand , the intermediate results of the fft must surpass machine precision ( since the worst case addition initially happens with the maximum ) .the maximum sum of any values from a list of such elements can never exceed ; for this reason , a conservative estimate of the numerical tolerance of an fft ( with regard to underflow ) will be the smallest value of for which ; thus , .this yields a conservative estimate of the minimum value in one index at the result of an fft convolution : when the result at some index is , then the result should be numerically stable . for this reason, we use a numerical tolerance , thereby ensuring that the vast majority of numerical error for the numerical max - convolution algorithm is due to the -norm approximation ( _ i.e. _ , employing instead of ) and not due to the long - used and numerically performant fft result .furthermore , in practice the mean squared error due to fft will be much smaller than the conservative worst - case outlined here , because it is difficult for the largest intermediate summed value ( in this case ) to be consistently large when many such very small values ( in this case ) are encountered in the same list .although could be chosen specifically for a problem of size , note that this simple derivation is very conservative and thus it would be better to use a tighter bound for choosing .regardless , for an fft implementation that is nt as performant ( _ e.g. 
_ , because it uses float types instead of double ) , increasing slightly would suffice . therefore , from this point forward we consider that the dominant cause of error to come from the max - convolution approximation .using larger values will provide a closer approximation ; however , using a larger value of may also drive values to zero ( because the inputs and will be normalized within * algorithm [ algorithm : numericalmaxconvolvegivenpstar ] * so that the maximum of each is 1 when convolved via fft ) , limiting the applicability of large to indices for which \geq \tau ] ] } ] \gets ] ) \gets \max \ { i:~ { \left ( resforallpstar[i][m ] \right)}^{\text{allpstar[ ] } } \geq \tau ) \} ] \gets resforallpstar[i][m] ]this section derives theoretical error bounds as well as a practical comparison on an example for the standard piecewise method .furthermore the development of an improvement with affine scaling is shown .eventually , an evaluation of the latter is performed on a larger problem .therefore we applied our technique to compute the viterbi path for a hidden markov model ( hmm ) to assess runtime and the level of error propagation .we first analyze the error for a particular underflow - stable and then use that to generalize to the piecewise method , which seeks to use the highest underflow - stable .we first scale and into and respectively , where the maximum elements of both and are ; the absolute error can be found by unscaling the absolute error of the scaled problem : - numeric(l',r')[m ] |\\ = \max_\ell l[\ell ] ~ \max_r r[r ] \ ; | exact(l',r')[m ] - numeric(l',r')[m ] |.\end{gathered}\ ] ] we first derive an error bound for the scaled problem on ( any mention of a vector refers to the scaled problem ) , and then reverse the scaling to demonstrate the error bound on the original problem on .for any particular `` underflow - stable '' ( _ i.e. _ , any value of for which ) , the absolute error for the numerical method for fast max - convolution can be bound fairly easily by factoring out the maximum element of ( this maximum element is equivalent to the chebyshev norm ) from the -norm : - numeric(l',r')[m ] |\ ] ] where is a nonnegative vector of the same length as ( this length is denoted ) where contains one element equal to ( because the maximum element of must , by definition , be contained within ) and where no element of is greater than 1 ( also provided by the definition of the maximum ) . thus , since , the error is bound : - numeric(l',r')[m]\\ & = & \| u^{(m ) } \|_\infty \left(\| v^{(m ) } \|_{p^ * } - 1 \right)\\ & \leq & \| v^{(m ) } \|_{p^ * } - 1\\ & \leq & k_m^\frac{1}{p^ * } - 1,\\\end{aligned}\ ] ] because for a scaled problem on . however , the bounds derived above are only applicable for where .the piecewise method is slightly more complicated , and can be partitioned into two cases : in the first case , the top contour is used ( _ i.e. _ , when is underflow - stable ) .conversely , in the second case , a middle contour is used ( _ i.e. _ , when is not underflow - stable ) . in this context ,in general a contour comprises of a set of indices with the same maximum stable . in the first case ,when we use the top contour , we know that must be underflow - stable , and thus we can reuse the bound given an underflow - stable . 
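before analyzing its error , it is useful to see the piecewise procedure sketched above in runnable form . the following is a minimal numpy illustration of our own ( not the reference implementation ) , in which tau is a placeholder for the underflow tolerance derived above , and the candidate p* values are swept in ascending order so that every index of the result keeps the largest underflow - stable p* :

import numpy as np

def piecewise_numerical_max_convolve(L, R, p_stars=(1, 2, 4, 8, 16, 32, 64), tau=1e-12):
    # scale both arguments so that their maxima are 1 ; undone at the end
    lmax, rmax = L.max(), R.max()
    Ls, Rs = L / lmax, R / rmax
    n = len(L) + len(R) - 1
    result = np.zeros(n)
    for p in p_stars:
        # standard fft convolution of the elementwise p - th powers
        conv = np.fft.irfft(np.fft.rfft(Ls ** p, n) * np.fft.rfft(Rs ** p, n), n)
        stable = conv >= tau                          # indices where this p is underflow - stable
        result[stable] = conv[stable] ** (1.0 / p)    # p - norm approximation of the max
    return lmax * rmax * result    # each index ends up using its largest stable p

# comparison against the exact quadratic method :
def exact_max_convolve(L, R):
    n = len(L) + len(R) - 1
    return np.array([max(L[l] * R[m - l] for l in range(len(L)) if 0 <= m - l < len(R))
                     for m in range(n)])

L, R = np.random.rand(64), np.random.rand(64)
print(np.max(np.abs(piecewise_numerical_max_convolve(L, R) - exact_max_convolve(L, R))))

because the loop runs over a logarithmic ladder of p* values , the total work is a small constant number of ffts , in line with the runtime discussed above .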
in the second case , because the used is , it follows that the next higher contour ( using ) must not be underflow - stable ( because the highest underflow - stable is used and because the are searched in log - space ) .the bound derived above that demonstrated can be combined with the property that for any to show that .\ ] ] thus the absolute error can be bound again using the fact that we are in a middle contour : the absolute error from middle contours will be quite small when is the maximum underflow - stable value of at index , because , the first factor in the error bound , will become , and ( qualitatively , this indicates that a small is only used when the result is very close to zero , leaving little room for absolute error ) .likewise , when a very large is used , then becomes very small , while ( qualitatively , this indicates that when a large is used , the , and thus there is little absolute error ) .thus for the extreme values of , middle contours will produce fairly small absolute errors .the unique mode can be found by finding the value that solves which yields an appropriate choice of should be so that the error for any contour ( both middle contours and the top contour ) is smaller than the error achieved at , allowing us to use a single bound for both .choosing would guarantee that all contours are no worse than the middle - contour error at ; however , using is still quite liberal , because it would mean that for indices in the highest contour ( there must be a nonempty set of such indices , because the scaling on and guarantees that the maximum index will have an exact value of , meaning that the approximation endures no underflow and is underflow - stable for every ) , a better error _ could _ be achieved by increasing .for this reason , we choose so that the top - contour error produced at is not substantially larger than all errors produced for before the mode ( _ i.e. _ , for ) . choosing any value of guarantees the worst - case absolute error bound derived here ; however , increasing further over may possibly improve the mean squared error in practice ( because it is possible that many indices in the result would be numerically stable with values substantially larger than ) .however , increasing will produce diminishing returns and generally benefit only a very small number of indices in the result , which have exact values very close to . in order to balance these two aims ( increasing enough over but not excessively so ), we make a qualitative assumption that a non - trivial number of indices require us to use a below ; therefore , increasing to produce an error significantly smaller than the lowest worst - case error for contours below the mode ( _ i.e. _ ) will increase the runtime without significantly decreasing the mean squared error ( which will become dominated by the errors from indices that use ) .the lowest worst - case error contour below the mode is ( because the absolute error function is unimodal , and thus must be increasing until and decreasing afterward ) ; therefore , we heuristically specify that should produce a worst - case error on a similar order of magnitude to the worst - case error produced with . 
in practice , specifying the errors at and should be equal is very conservative ( it produces very large estimates of , which sometimes benefit only one or two indices in the result ) ; for this reason , we heuristically choose that the worst - case error at should be no worse than square root of the worst case error at ( this makes the choice of less conservative because the errors at are very close to zero , and thus their square root is larger ) . the square root was chosen because it produced , for the applications described in this paper , the smallest value of for which the mean squared error was significantly lower than using ( the lowest value of guaranteed to produce the absolute error bound ) .this heuristic does satisfy the worst - case bound outlined here ( because , again , ) , but it could be substantially improved if an expected distribution of magnitudes in the result vector were known ahead of time : prior knowledge regarding the number of points stable at each considered would enable a well - motivated choice of that truly optimizes the expected mean squared error . from this heuristic choice of , solving ( with the square root of the worst - case at on the left and the worst - case error at on the right ) yields for any non - trivial problem ( _ i.e. _ , when ) , and thus indicating that the absolute error at the top contour will be roughly equal to the fourth root of . by setting in this manner ,we guarantee that the absolute error at any index of any unscaled problem on is less than ~ \max_r r[r ] ~ \tau^\frac{1}{2 p^*_{mode } } \left ( 1 - k_m^\frac{-1}{p^*_{mode } } \right)\ ] ] where is defined above . the full formula for the middle - contour error at this value of does not simplify and is therefore quite large ; for this reason , it is not reported here , but this gives a numeric bound of the worst case middle - contour error that is bound in terms of the variable ( and with no other free variables ) .the piecewise method clearly performs ffts ( each requiring steps ) ; therefore , since is chosen to be ( to achieve the desired error bound ) , the total runtime is thus for any practically sized problem , the factor is essentially a constant ; even when is chosen to be the number of particles in the observable universe ( ; ) , the is , meaning that for any problem of practical size , the full piecewise method is no more expensive than computing between and ffts .we first use an example max - convolution problem to compare the results from the low - value , the high - value and piecewise methods . at every index ,these various approximation results are compared to the exact values , as computed by the naive quadratic method ( * figure [ figure : doublefigmethodsbimodalexa ] * ) ..48 .48 * figure [ figure : doublefigmethodsbimodalexb ] * depicts a scatter plot of the exact result vs. the piecewise approximation at every index ( using the same problem from * figure [ figure : doublefigmethodsbimodalexa ] * ) .it shows a clear banding pattern : the exact and approximate results are clearly correlated , but each contour ( _ i.e. 
_ , each collection of indices that use a specific ) has a different average slope between the exact and approximate values , with higher contours showing a generally larger slope and smaller contours showing greater spread and lower slopes .this intuitively makes sense , because the bounds on ] ] } ] \gets ] ) \gets \max \ { i:~ { \left ( resforallpstar[i][m ] \right)}^{\text{allpstar[ ] } } \geq \tau ) \} ] \gets 1 ] = i \} ] [m] ] [mmax] ] \gets ymin - slope[i ] \times xmin ] ] by exploiting the convex combination used to define , the absolute error of the affine piecewise method can also be bound .qualitatively , this is because , by fitting on the extrema in the contour , we are now interpolating .if the two points used to determine the parameters of the affine function were not chosen in this manner to fit the affine function , then it would be possible to choose two points with very close x - values ( _ i.e. _ , similar approximate values ) and disparate y - values ( _ i.e. _ , different exact values ) , and extrapolating to other points could propagate a large slope over a large distance ; using the extreme points forces the affine function to be a convex combination of the extrema , thereby avoiding this problem .\\ \shoveleft{\subseteq \left [ \lambda_m \frac{\| u^{(m_{\max } ) } \|_{p^*}}{k^\frac{1}{p^ * } } + \left ( 1 - \lambda_m \right ) \frac{\| u^{(m_{\min } ) } \|_{p^*}}{k^\frac{1}{p^ * } } , \right.}\\ \left .\lambda_m \| u^{(m_{\max } ) } \|_{p^ * } + \left ( 1 - \lambda_m \right ) \| u^{(m_{\min } ) } \|_{p^ * } \right]\\ \shoveleft { = \left [ k^\frac{-1}{p^ * } \left ( \lambda_m \| u^{(m_{\max } ) } \|_{p^ * } + \left ( 1 - \lambda_m \right ) \| u^{(m_{\min } ) } \|_{p^ * } \right ) , \right.}\\ \left .\lambda_m \| u^{(m_{\max } ) } \|_{p^ * } + \left ( 1 - \lambda_m \right ) \| u^{(m_{\min } ) } \|_{p^ * } \right]\\ \shoveleft{= \left [ k^\frac{-1}{p^ * } \| u^{(m ) } \|_{p^ * } , \| u^{(m ) } \|_{p^ * } \right]}\end{gathered}\ ] ] the worst - case absolute error of the scaled problem on can be defined because the function is affine , it s derivative can never be zero , and thus lagrangian theory states that the maximum must occur at a boundary point .therefore , the worst - case absolute error is which is identical to the worst - case error bound before applying the affine transformation . thus applyingthe affine transformation can dramatically improve error , but will not make it worse than the original worst - case .one example that profits from fast max - convolution of non - negative vectors is computing the viterbi path using a hidden markov model ( hmm ) ( _ i.e. _ , the _ maximum a posteriori _ states ) with an additive transition function satisfying for some arbitrary function ( can be represented as a table , because we are considering all possible discrete functions ) .this additivity constraint is equivalent to the transition matrix being a `` toeplitz matrix '' : the transition matrix is a toeplitz matrix when all cells diagonal from each other ( to the upper left and lower right ) have identical values ( _ i.e. _ , ) . 
because of the markov property of the chain , we only need to max - marginalize out the latent variable at time to compute the distribution for the next latent variable and all observed values of the data variables .this procedure , called the viterbi algorithm , is continued inductively : and continuing by exploiting the self - similarity on a smaller problem to proceed inductively , revealing a max - convolution ( for this specialized hmm with additive transitions ) : \pr(d_{i-1 } | x_{i-1}=x_{i-1 } ) \delta[x_i - x_{i-1 } ] = } \\ \shoveright{\left(fromleft[i-1]~likelihood[d_{i-1 } ] \right ) ~*_{\max}~ \delta[x_i - x_{i-1}].}\\\end{gathered}\ ] ] after computing this left - to - right pass ( which consisted of max - convolutions and vector multiplications ) , we can find the _ maximum a posteriori _ configuration of the latent variables backtracking right - to - left , which can be done by finding the variable value that maximizes [x_i ] \times \delta[x_{i+1}^ * - x_i] ] \gets fromleft[i ] \times likelihood[data[i]] ] \gets fromleft[n ] \times likelihood[data[n]] ] \times \delta[l - path[i+1]] ] we apply this hmm with additive transition probabilities to a data analysis problem from economics .it is known for example , that the current figures of unemployment in a country have ( among others ) impact on prices of commodities like oil .if one could predict unemployment figures before the usual weekly or monthly release by the responsible government bureaus , this would lead to an information advantage and an opportunity for short - term arbitrage .the close relation of economic indicators like market prices and stock market indices ( especially of indices combining several stocks of different industries ) to unemployment statistics can be used to tackle this problem . in the following demonstration of our method, we create a simple hmm with additive transitions and use it to infer the _ maximum a posteriori _ unemployment statistics given past history ( _ i.e. _ how often unemployment is low and high , as well as how often unemployment goes down or up in a short amount of time ) and current stock market prices ( the observed data ) .we discretized random variables for the observed data ( s&p 500 , adjusted closing prices ; retrieved from yahoo !historical stock prices : http://data.bls.gov/cgi-bin/surveymost?bls series cuur0000sa0[http://data.bls.gov / cgi - bin / surveymost?bls series cuur0000sa0 ] ) , and `` latent '' variables ( unemployment insurance claims , seasonally adjusted , were retrieved from the u.s .department of labor : https://www.oui.doleta.gov/unemploy/claims.asp ) .stock prices were additionally inflation adjusted by ( _ i.e. _ divided by ) the consumer price index ( cpi ) ( retrieved from the u.s .bureau of labor statistics : https://finance.yahoo.com/q?s=^gspc[https://finance.yahoo.com/q?s=^gspc ] ) . the intersection of both `` latent '' and observed data was available weekly from week 4 in 1967 to week 52 in 2014 , resulting in 2500 data points for each type of variable . 
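the left - to - right pass and the backtracking described above can be summarized in a short sketch . this is our own illustration with hypothetical argument names ; the naive inner maximization marks exactly the max - convolution that the fast numerical method replaces , and the per - step renormalization is a standard guard against underflow that does not affect the maximizing path :

import numpy as np

def viterbi_toeplitz(prior, liks, delta):
    # prior : ( k , ) initial distribution over the latent bins
    # liks : ( n , k ) likelihood of each observation given each latent bin
    # delta : ( 2k - 1 , ) transition weight indexed by x_i - x_{i-1} + k - 1
    n, k = liks.shape
    fromleft = np.zeros((n, k))
    fromleft[0] = prior
    for i in range(1, n):
        v = fromleft[i - 1] * liks[i - 1]
        for x in range(k):   # naive quadratic max - convolution with delta
            fromleft[i, x] = max(v[xp] * delta[x - xp + k - 1] for xp in range(k))
        fromleft[i] /= fromleft[i].max()
    path = np.empty(n, dtype=int)
    path[-1] = np.argmax(fromleft[-1] * liks[-1])
    for i in range(n - 2, -1, -1):   # right - to - left backtracking
        v = fromleft[i] * liks[i]
        path[i] = np.argmax([v[xp] * delta[path[i + 1] - xp + k - 1] for xp in range(k)])
    return path

# toy usage with synthetic , roughly gaussian transition weights
k = 16
prior = np.ones(k) / k
delta = np.exp(-0.5 * (np.arange(-(k - 1), k) / 2.0) ** 2)
liks = np.random.rand(50, k)
print(viterbi_toeplitz(prior, liks, delta))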
to investigate the influence of overfitting , we partition the data into two parts , before june 2005 and after june 2005 , so that we are effectively training on of the data points , and then demonstrate the viterbi path on the entirety of the data ( both the training data and the of the data withheld from empirical parameter estimation ) . unemployment insurance claims were discretized into and stock prices were discretized into bins . simple empirical models of the prior distribution for unemployment , the likelihood of unemployment given stock prices , and the transition probability of unemployment were built as follows : the initial or prior distribution for unemployment claims at was calculated by marginalizing the time series of training data for the claims ( _ i.e. _ , counting the number of times any particular unemployment value was reached over all possible bins ) . our transition function ( the conditional probability ) similarly counts the number of times each possible change occurred over all available time points . interestingly , the resulting transition distribution roughly resembles a gaussian ( but is not an exact gaussian ) . this underscores a great strength of working with discrete distributions : while continuous distributions may have closed forms for max - convolution ( which can be computed quickly ) , discrete distributions have the distinct advantage that they can accurately approximate any smooth distribution . lastly , the likelihoods of observing a stock price given the unemployment at the same time were trained using an empirical joint distribution ( essentially a heatmap ) , which is displayed in * figure [ figure : likelihoodheatmap ] * . we compute the viterbi path two times : first with naive , exact max - convolution , which requires a total of steps ; second with fast numerical max - convolution , which requires steps . despite the simplicity of the model , the exact viterbi path ( computed via exact max - convolution ) is highly informative for predicting the value of unemployment , even for the of the data that were not used to estimate the empirical prior , likelihood , and transition distributions . also , the numerical max - convolution method is nearly identical to the exact max - convolution method at every index ( * figure [ figure : viterbi ] * ) . even with a fairly rough discretization ( _ i.e. _ , ) , the fast numerical method used seconds compared to the seconds required by the naive approach . this speedup will increase dramatically as is increased , because the term in the runtime of the numerical max - convolution method is essentially bounded above . although the -norm provides a good approximation of the chebyshev norm , it discards significant information ; specifically , the curve for various could be used to identify and correct the worst - case scenario where . using only two points , the exact value of can be computed for those worst - case vectors by computing the norms at two different values and solving the following equations for : where the proportionality constant is and where the computed value yields the exact chebyshev norm ( a numeric sketch of this two - point recovery is given below ) . more generally , when there are unique values ( ) in , we can model the norms perfectly with where is an integer that indicates the number of times occurs in ( and where ) .
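to make the two - point recovery concrete , consider the worst case in which the nonzero entries of the vector share a single value m with multiplicity k ; then every norm satisfies \| u \|_{p}^{p} = k \, m^{p} , so the norms at p^* and 2 p^* determine m and k exactly . a minimal numpy sketch ( the function and variable names are ours ) :

```python
import numpy as np

def exact_max_from_two_norms(u, p_star):
    """Exact maximum for a vector whose nonzero entries share one value m:
    sum(u**p) = k * m**p for every p, so two norms pin down m and k."""
    s1 = np.sum(u ** p_star)           # k * m**p_star
    s2 = np.sum(u ** (2 * p_star))     # k * m**(2 * p_star)
    m = (s2 / s1) ** (1.0 / p_star)    # the shared value = exact Chebyshev norm
    k = s1 / m ** p_star               # recovered multiplicity
    return m, k

u = np.array([0.0, 0.5, 0.5, 0.0, 0.5])
print(exact_max_from_two_norms(u, p_star=8))   # ~ (0.5, 3.0)
```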
this multi - set view of the vector can be used to project it down to a dimension :

\left[ \begin{array}{cccc} \alpha_1^{p^*} & \alpha_2^{p^*} & \cdots & \alpha_r^{p^*} \\ \alpha_1^{2 p^*} & \alpha_2^{2 p^*} & \cdots & \alpha_r^{2 p^*} \\ \vdots & & & \vdots \\ \alpha_1^{\ell p^*} & \alpha_2^{\ell p^*} & \cdots & \alpha_r^{\ell p^*} \end{array} \right] \cdot \left[ \begin{array}{c} n_1 \\ n_2 \\ \vdots \\ n_r \end{array} \right] = \left[ \begin{array}{c} \| u^{(m)} \|_{p^*}^{p^*} \\ \| u^{(m)} \|_{2 p^*}^{2 p^*} \\ \vdots \\ \| u^{(m)} \|_{\ell p^*}^{\ell p^*} \end{array} \right] .

by solving the above system of equations for all , the maximum can be used to approximate the true maximum . this projection can be thought of as querying distinct moments of the distribution that corresponds to some unknown vector , and then assembling the moments into a model in order to predict the unknown maximum value in . of course , when , the number of terms in our model , is sufficiently large , then computing norms of will result in an exact result , but it could result in execution time , meaning that our numerical max - convolution algorithm becomes quadratic ; therefore , we must consider that a small number of distinct moments are queried in order to estimate the maximum value in . regardless , the system of equations above is quite difficult to solve directly via elimination for even very small values of , because the symbolic expressions become quite large and because symbolic polynomial roots can not be reliably computed when the degree of the polynomial is . even in cases when it can be solved directly , it will be far too inefficient . for this reason , we solve for the values using an exact , alternative approach : if we define a polynomial , then . we can expand , and then write

\left[ \begin{array}{cccc} \gamma_0 & \gamma_1 & \cdots & \gamma_r \end{array} \right] \cdot \left[ \begin{array}{cccc} \alpha_1^{p^*} & \alpha_2^{p^*} & \cdots & \alpha_r^{p^*} \\ \alpha_1^{2 p^*} & \alpha_2^{2 p^*} & \cdots & \alpha_r^{2 p^*} \\ \vdots & & & \vdots \\ \alpha_1^{(r+1) p^*} & \alpha_2^{(r+1) p^*} & \cdots & \alpha_r^{(r+1) p^*} \end{array} \right] \cdot \left[ \begin{array}{c} n_1 \\ n_2 \\ \vdots \\ n_r \end{array} \right] = \left[ \begin{array}{cccc} \alpha_1^{p^*} \gamma( \alpha_1^{p^*} ) & \alpha_2^{p^*} \gamma( \alpha_2^{p^*} ) & \cdots & \alpha_r^{p^*} \gamma( \alpha_r^{p^*} ) \end{array} \right] \cdot \left[ \begin{array}{c} n_1 \\ n_2 \\ \vdots \\ n_r \end{array} \right] = 0 ,

since every \gamma( \alpha_i^{p^*} ) = 0 . this indicates that

\left[ \begin{array}{cccc} \gamma_0 & \gamma_1 & \cdots & \gamma_r \end{array} \right] \cdot \left[ \begin{array}{c} \| u^{(m)} \|_{p^*}^{p^*} \\ \| u^{(m)} \|_{2 p^*}^{2 p^*} \\ \vdots \\ \| u^{(m)} \|_{(r+1) p^*}^{(r+1) p^*} \end{array} \right] = 0 .

furthermore , the same identity holds for every shifted window of the norm sequence ; therefore we can write

\left[ \begin{array}{cccc} \| u^{(m)} \|_{p^*}^{p^*} & \| u^{(m)} \|_{2 p^*}^{2 p^*} & \cdots & \| u^{(m)} \|_{(r+1) p^*}^{(r+1) p^*} \\ \| u^{(m)} \|_{2 p^*}^{2 p^*} & \| u^{(m)} \|_{3 p^*}^{3 p^*} & \cdots & \| u^{(m)} \|_{(r+2) p^*}^{(r+2) p^*} \\ \vdots & & & \vdots \\ \| u^{(m)} \|_{(\ell - r) p^*}^{(\ell - r) p^*} & \| u^{(m)} \|_{(\ell - r + 1) p^*}^{(\ell - r + 1) p^*} & \cdots & \| u^{(m)} \|_{\ell p^*}^{\ell p^*} \end{array} \right] \cdot \left[ \begin{array}{c} \gamma_0 \\ \gamma_1 \\ \vdots \\ \gamma_r \end{array} \right] = 0 ,

so that ( \gamma_0 , \gamma_1 , \ldots , \gamma_r ) lies in the null space of the hankel matrix of norms above . because the columns of this matrix must be linearly independent when are distinct ( which is the case by the definition of our multiset formulation of the norm ) , will determine a unique solution ; thus the null space above is computed from a matrix with columns and rows , yielding a single spanning vector for . this vector can then be used to compute the roots of the polynomial , which will determine the values , which can each be taken to the power to compute ; the largest of those values is used as the estimate of the maximum element in . when contains at least distinct values ( _ i.e. _ , ) , then the problem will be well - defined ; thus , if the roots of the null space spanning vector are not well - defined , then a smaller can be used ( and should be able to compute an exact estimate of the maximum , since can be projected exactly when is the precise number of unique elements found in ) . note that this projection method is valid for any sequence of norms with even spacing : . in general , the computation of both the null space spanning vector and of machine - precision approximations for the roots of the polynomial ( which can be approximated by constructing a matrix with that characteristic polynomial and performing eigendecomposition ) are both in for each index in the result ; however , by using a small , we can compute a closed - form solution of both the null space spanning vector and of the resulting quadratic roots . this enables faster exploitation of the curve of norms for estimating the maximum value of ( although it does not achieve the high accuracy possible with a much larger ) . this is equivalent to approximating , where .
in this case , the single spanning vector of the null space of \ ] ] will be = \left [ \begin{array}{c } \| u^{(m ) } \|_{2 p^*}^{2 p^ * } \| u^{(m ) } \|_{4 p^*}^{4 p^ * } - { \left ( \| u^{(m ) } \|_{3 p^*}^{3 p^ * } \right)}^2\\ \| u^{(m ) } \|_{p^*}^{p^ * } \| u^{(m ) } \|_{4 p^*}^{4 p^ * } - \| u^{(m ) } \|_{2 p^*}^{2 p^ * } \| u^{(m ) } \|_{3 p^*}^{3 p^*}\\ \| u^{(m ) } \|_{p^*}^{p^ * } \| u^{(m ) } \|_{3 p^*}^{3 p^ * } - { \left ( \| u^{(m ) } \|_{2 p^*}^{2 p^ * } \right)}^2\\ \end{array } \right]\ ] ] and thus can be computed by using the quadratic formula to solve for , and computing using the maximum of those zeros : .when the quadratic is not well defined , then this indicates that the number of unique elements in is less than 2 , and thus can not be projected uniquely ( _ i.e. _ , ) ; in this case , the closed - form linear solution can be used rather than a closed - form quadratic solution : when the closed - form linear solution is not numerically stable ( due to division by a value close to zero ) , then the -norm approximation can likewise be used . because the norms must have evenly spaced values in order to use the projection method described above, the exponential sequence of values used in the original piecewise algorithm will not contain four evenly spaced points ( which are necessary to solve the quadratic formulation , _i.e. _ ) .one possible solution would be to take the maximal stable value of for any index ( which will be a power of two found using the original piecewise method ) , and then also computing norms ( via the fft , as before ) for ; however , this will result in a slowdown in the algorithm , because for every -norm computed via fft before , now four must be computed .an alternative approach reuses existing values in the sequence of : for sufficiently large , then the exponential sequence is guaranteed to include these stable values : . by considering in candidates , then we can be guaranteed to have four evenly spaced and stable values .this can be achieved easily by noting that meaning that we can insert all possible necessary values for evenly spaced sequences of length four by first computing the exponential sequence of values and then inserting the averages between every pair of adjacent powers of two ( and inserting them in a way that maintains the sorted order ) : becomes .thus , if ( for some index ) 16 is the highest stable that is a power of two ( _ i.e. _ , the value that would be used by the original piecewise algorithm ) , then we are guaranteed to use the evenly spaced sequence . by interleaving the powers of two with the averages from the following powers of two , we reduce the number of ffts to that used by the original piecewise algorithm . for small values of ( such as the used here ) , the estimation of the maximum from each sequence of four norms is in , meaning the total time will still be , which is the same as before . because the spacing in this formulation is , and given the maximal root of the quadratic polynomial , then ( taking the maximal root to the power instead of , which had been the spacing used in the description of the projection method ) .the null space projection method is shown in * algorithm [ algorithm : piecewisewithprojection]*. 
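before the pseudocode listing , the core of the projection step can be sketched numerically . this is our own code , not the listing from the paper ; it uses svd and ` np.roots ` for a generic small r rather than the closed - form quadratic ( r = 2 ) just described , and it assumes a nonnegative input with well - separated values so that the moment matrix is numerically benign :

```python
import numpy as np

def max_via_null_space(u, p_star, r=2, ell=6):
    """Estimate max(u) from the evenly spaced moments s_j = ||u||_{j p*}^{j p*}.

    r is the assumed number of distinct nonzero values in u; ell moments
    are queried (ell - r rows are needed for a well-determined null space).
    """
    s = np.array([np.sum(u ** (j * p_star)) for j in range(1, ell + 1)])
    # Hankel system H @ gamma = 0 with rows (s_t, ..., s_{t+r})
    H = np.array([s[t:t + r + 1] for t in range(ell - r)])
    gamma = np.linalg.svd(H)[2][-1]        # null space spanning vector
    roots = np.roots(gamma[::-1])          # zeros of gamma_0 + ... + gamma_r z^r
    roots = roots.real[(np.abs(roots.imag) < 1e-8) & (roots.real > 0)]
    return roots.max() ** (1.0 / p_star)   # largest alpha_i recovered

u = np.array([0.5, 0.25, 0.5, 0.25, 0.5])  # two distinct nonzero values
print(max_via_null_space(u, p_star=4), u.max())
```

for r = 2 the null space vector could instead be written in closed form ( the cross product of the two hankel rows ) and the roots obtained from the quadratic formula , exactly as described in the text .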
[ algorithm [ algorithm : piecewisewithprojection ] : the pseudocode listing was garbled in extraction ; the visible fragments build the candidate sequence up to {\log_2( p^*_{\max} )} , insert the midpoints 0.5 \times ( allpstar[i] + allpstar[i+1] ) to guarantee four evenly spaced stable values , call ` fftnonnegmaxconvolvegivenpstar ` for each candidate , align maxstablepstarindex to the even spacing , and finish with ` maxquad ` followed by ` affinecorrect ` , scaling by r[r_{\max}] \times result . ] the final constraint keeping in ( 0 , 1 ) is needed because any containing only one unique value ( which must be in this case , since dividing by the maximum element in to compute has divided the value at that index by itself ) will lead to instabilities . when values in are identical to one another , using yields an exact solution , and thus solving with is not well - defined because . because all elements lie in ] , the corresponding extremum is ( computed again with mathematica ) . overall , assuming our conjecture regarding the forms of the vectors achieving the minima and maxima , it follows that the stated error bound holds . the methods were benchmarked on input vectors of varying composition and length ( ) . the first and second input vectors were generated separately but are always of the same length . * table [ table : runtimesinclbussieck ] * shows the result of this experiment . all methods were implemented in python , using numpy where applicable ( _ e.g. _ , to vectorize ) . a non - vectorized version of naive max - convolution was included to estimate the effects of vectorization . the approach from bussieck et al . was run as a reimplementation based on the pseudocode in their manuscript . from their variants of proposed methods , fill1 was chosen because of its use in their corresponding benchmark and its recommendation by the authors for having a lower runtime constant in practice compared to the other methods they proposed . the method is based on sorting the input vectors and traversing the ( implicitly ) resulting partially ordered matrix of products in a way that not all entries need to be evaluated , while only keeping track of the so - called cover of maximal elements . fill1 already includes some more sophisticated checks to keep the cover small and thereby reduce the overhead per iteration . unfortunately , although we observed that the fill1 method requires between and iterations in practice , its per - iteration overhead results in a worst - case cost of per iteration , yielding an overall runtime in practice between and . as the authors state , this overhead is due to the expense of storing the cover , which can be implemented _ e.g. _ using a binary heap ( recommended by the authors and used in this reimplementation ) . additionally , due to the fairly sophisticated data structures needed for this algorithm , it had a higher runtime constant than the other methods presented here , and furthermore we saw no means to vectorize it to improve the efficiency . for this reason , it is not truly fair to compare the raw runtimes to the other vectorized algorithms ( and it is not likely that this python reimplementation is as efficient as the original version , which was implemented in ansi - c ) ; however , comparing a non - vectorized implementation of the naive approach with its vectorized counterpart gives an estimated speedup from vectorization , suggesting that fill1 is not substantially faster than the naive approach on these problems ( it should be noted that whereas the methods presented here have tight runtime bounds but produce approximate results , the fill1 algorithm is exact , but its runtime depends on the data processed ) .
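for reference , the vectorized naive baseline used in such a benchmark fits in a few lines ; this is our own sketch ( the inner maximum is vectorized with numpy , the outer loop is not ) :

```python
import numpy as np

def naive_max_convolution(x, y):
    """Exact max-convolution: out[k] = max over i + j = k of x[i] * y[j].
    Quadratic time; numpy fancy indexing vectorizes the inner maximum."""
    nx, ny = len(x), len(y)
    out = np.empty(nx + ny - 1)
    for k in range(nx + ny - 1):
        i = np.arange(max(0, k - ny + 1), min(k, nx - 1) + 1)
        out[k] = np.max(x[i] * y[k - i])
    return out

x, y = np.random.rand(256), np.random.rand(256)
print(naive_max_convolution(x, y)[:5])
```

replacing the numpy indexing with a pure - python inner loop gives the non - vectorized variant used to estimate the effect of vectorization .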
during investigation of these runtimes , we found that , on the given problems , the proposed average case of iterations was rarely reached . a reason might be an unrecognized violation , in how the input vectors were generated , of the assumptions of the theory behind this theoretical average runtime . in contrast to the exact method from , the approximate procedures proposed here are faster whenever the input vectors are at least elements long ( shorter vectors are most efficiently processed with the naive approach ) . the null space projection method is the fastest method presented here ( because it can use a lower ) , although the higher density of values it uses ( and thus , additional ffts ) makes the runtimes nearly identical for both approximation methods . both piecewise numerical max - convolution methods are highly accurate in practice and achieve a substantial speedup over both the naive approach and the approach proposed by . this is particularly true for large problems : for the original piecewise method presented here , the multiplier may never be small , but it grows so slowly with that it will be even when is on the same order of magnitude as the number of particles in the observable universe . this means that , for all practical purposes , the method behaves asymptotically as a slightly slower method , which means the speedup relative to the naive method becomes more pronounced as becomes large . for the second method presented ( the null space projection ) , the runtime for a given relative error bound will be in . in practice , both methods have similar runtime on large problems . the basic motivation of the first approach described ( _ i.e. _ , the idea of approximating the chebyshev norm with the largest -norm that can be computed accurately , and then convolving according to this norm using fft ) also suggests further possible avenues of research . for instance , it may be possible to compute a single fft ( rather than an fft at each of several contours ) on a more precise implementation of complex numbers . such an implementation of complex values could store not only the real and imaginary components , but also other much smaller real and imaginary components that have been accumulated through operations , even those which have small enough magnitudes that they are dwarfed by other summands . with such an approach it would be possible to numerically approximate the max - convolution result in the same overall runtime as long as only a bounded `` history '' of such summands was recorded ( _ i.e. _ , if only the top few summands by magnitude , whether that be the top 7 or the top , were stored and operated on ) . in a similar vein , it would be interesting to investigate the utility of complex values that use rational numbers ( rather than fixed - precision floating point values ) , which will be highly precise , but will increase in precision ( and therefore , in the computational complexity of each arithmetic operation ) as the dynamic range between the smallest and largest nonzero values in and increases ( because taking to a large power may produce a very small value ) . other simpler improvements could include optimizing the error vs.
runtime trade - off between the log - base of the contour search : the method currently searches contours , but a smaller or larger log - base could be used in order to optimize the trade - off between error and runtime .it is likely that the best trade - off will occur by performing the fast -norm convolution with a number type that sums values over vast dynamic ranges by appending them in a short ( _ i.e. _ , bounded or constant size ) list or tree and sums values within the same dynamic range by querying the list or tree and then summing in at the appropriate magnitude .this is reminiscent of the fast multipole algorithm .this would permit the method to use a single large rather than a piecewise approach , by moving the complexity into operations on a single number rather than by performing multiple ffts with simple floating - point numbers .the basic motivation of the second approach described _ i.e. _ , using the _ sequence _ of -norms ( each computed via fft ) to estimate the maximum value generalizes the -norm fast convolution numerical approach into an interesting theoretical problem in its own right : given an oracle that delivers a small number of norms ( the number of norms retrieved must be to significantly outperform the naive quadratic approach ) about each vector , amalgamate these norms in an efficient manner to estimate the maximum value in each .this method may be applicable to other problems , such as databases where the maximum values of some combinatorial operation ( in this case the _ maximum a posteriori _ distribution of the sum of two random variables ) is desired but where caching all possible queries and their maxima would be time or space prohibitive . in a manner reminiscent of how we employ fft , it may be possible to retrieve moments of the result of some combinatoric combination between distributions on the fly , and then use these moments to approximate true maximum ( or , in general , other sought quantities describing the distribution of interest ) . in practice, the worst - case relative error of our quadratic approximation is quite low .for example , when is stable , then the relative error is less than , regardless of the lengths of the vectors being max - convolved .in contrast , the worst - case relative error using the original piecewise method would be , where is the length of the max - convolution result ( when , the relative error of the original piecewise method would be ) . of course, the use of the null space projection method is predicated on the existence of at least four sequential points , but it would be possible to use finer spacing between values ( _ e.g. _ , to guarantee that this will essentially be the case as long as fft ( _ i.e. _ ) is stable . 
but more generally , the problem of estimating extrema from -norms ( or , equivalently , from the -th roots of the -th moments of a distribution with bounded support ) will undoubtedly permit many more possible approaches that we have not yet considered . one that would be compelling is to relate the fourier transform of the sequential moments to the maximum value in the distribution ; such an approach could permit all stable at any index to be used to efficiently approximate the maximum value ( by computing the fft of the sequence of norms ) . such new adaptations of the method could permit low worst - case error without any noticeable runtime increase . the fast numerical piecewise method for max - convolution ( and the affine piecewise modification ) are both applicable to matrices as well as vectors ( and , most generally , to tensors of any dimension ) . this is because the -norm ( as well as the derived error bounds as an approximation of the chebyshev norm ) can likewise approximate the maximum element in the tensor generated to find the max - convolution result at index of a multidimensional problem : the sum computed by convolution corresponds to the frobenius norm ( _ i.e. _ , the `` entrywise norm '' ) of the tensor and , after taking the result of the sum to the power , will converge to the maximum value in the tensor ( if is large enough ) . this means that the fast numerical approximation , including the affine piecewise modification , can be used without modification by invoking standard multidimensional convolution ( _ i.e. _ , ) . matrix ( and , in general , tensor ) convolution is likewise possible for any dimension via the row - column algorithm , which transforms the fft of a matrix into sequential ffts on each row and column . the accompanying python code demonstrates the fast numerical max - convolution method on matrices , and the code can be run on tensors of any dimension ( without requiring any modification ) . the speedup of fft tensor convolution ( relative to naive convolution ) becomes considerably higher as the dimension of the tensors increases ; for this reason , the speedup of fast numerical max - convolution becomes even more pronounced as the dimension increases . for a tensor of dimension and width ( _ i.e. _ , where the index bounds of every dimension are ) , the cost of naive max - convolution will be in , whereas the cost of numerical max - convolution is ( ignoring the multiplier ) , meaning that there is an speedup from the numerical approach . examples of such tensor problems include graph theory , where the adjacency matrix representation can be used to describe pairwise distances between nodes in a network . as a concrete example , the demonstration python code computes the max - convolution between two matrices . the naive method required seconds , but the numerical result with the original piecewise method was computed in seconds ( yielding a maximum absolute error of and a maximum relative error of ) , and the numerical result with the null space projection method was computed in seconds ( using , which corresponds to a relative error of in the top contour , yielding a maximum absolute error of and a maximum relative error of ) and in seconds ( using , which corresponds to a relative error of in the top contour , yielding a maximum absolute error of and a maximum relative error of ) .
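the tensor case requires no new machinery ; here is a single - p^* numpy sketch for matrices ( our own code , not the accompanying implementation ) , assuming entries already scaled into [ 0 , 1 ] ; a full implementation would choose a stable p^* per output cell as in the piecewise method :

```python
import numpy as np

def numerical_max_convolve_2d(x, y, p_star=32):
    """Approximate 2D max-convolution of nonnegative matrices via FFT:
    convolve x**p* with y**p* (standard convolution) and take the p*-th
    root, so each output cell approximates the Chebyshev norm of the
    corresponding tensor of products."""
    shape = tuple(a + b - 1 for a, b in zip(x.shape, y.shape))
    fx = np.fft.rfft2(x ** p_star, s=shape)
    fy = np.fft.rfft2(y ** p_star, s=shape)
    conv = np.fft.irfft2(fx * fy, s=shape)          # standard 2D convolution
    return np.maximum(conv, 0.0) ** (1.0 / p_star)  # guard FFT ringing, then root

x, y = np.random.rand(64, 64), np.random.rand(64, 64)
print(numerical_max_convolve_2d(x, y).shape)        # (127, 127)
```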
not only does the speedup of the proposed methods relative to naive max - convolution increase significantly as the dimension of the tensor is increased , but also no other faster - than - naive algorithms exist for max - convolution of matrices or tensors . multidimensional max - convolution can likewise be applied to hidden markov models with additive transitions over multidimensional variables ( _ e.g. _ , allowing the latent variable to be a two - dimensional joint distribution of american and german unemployment with a two - dimensional joint transition probability ) . the same -norm approximation can also be applied to the problem of max - deconvolution ( _ i.e. _ , solving for when given and ) . this can be accomplished by computing the ratio of to ( assuming has already been properly zero - padded ) , and then computing the inverse fft of the result to approximate ; however , it should be noted that deconvolution methods are typically less stable than the corresponding convolution methods , since computing a ratio is less stable than computing a product ( particularly when the denominator is close to zero ) . although the largest absolute error of the affine piecewise method is the same as the largest absolute error of the original piecewise method , the mean squared error ( mse ) of the affine piecewise method will be lower than the square of the worst - case absolute error . to achieve the worst - case absolute error for a given contour , the affine correction must be negligible ; therefore , there must be two nearly vertical points on the scatter plot of vs. , which are both extremes of the bounding envelope from * figure [ figure : approxexactlinearzoom ] * . thus , there must exist two different indices and with vectors where and where ( creating two vertical points on the scatter plot , so that the two can not simultaneously be corrected by a single affine mapping ) . in order to do this , it is required that be filled with a single nonzero value and that the remaining elements of equal zero . conversely , must be filled entirely with large , nonzero values ( the largest values possible that would still use the same contour ) . together , these two arguments place strong constraints on the vectors and ( and , transitively , also constrain the unscaled vectors and ) : on one hand , filling with zeros requires that elements from either or must be zero ( because at least one factor must be zero to achieve a product of zero ) . on the other hand , filling with all large - value nonzeros requires that elements of _ both _ and are nonzero . together , these requirements stipulate that both , because entries of and can not simultaneously be zero and nonzero .
therefore , having many such vertical points constrains the lengths of the vectors corresponding to those points . while the worst - case absolute error bound presumes that an individual vector may have length , this will not be possible for many vectors corresponding to vertical points on the scatter plot . for this reason , the mse will be significantly lower than the square of the worst - case absolute error , because making a high affine - corrected absolute error on one index necessitates that the absolute error at another index can not be the worst - case absolute error ( if the sizes of and are fixed ) . code for exact max - convolution and the fast numerical method ( which includes both and null space projection methods ) is implemented in python and available at https://bitbucket.org/orserang/fast-numerical-max-convolution . all included code works for numpy arrays of any dimension ( _ i.e. _ , tensors ) . we would like to thank mattias frånberg , knut reinert , and oliver kohlbacher for the interesting discussions and suggestions . j.p . acknowledges funding from bmbf ( center for integrative bioinformatics , grant no . 031a367 ) . o.s . acknowledges generous start - up funds from freie universität berlin and the leibniz - institute for freshwater ecology and inland fisheries . david bremner , timothy m. chan , erik d. demaine , jeff erickson , ferran hurtado , john iacono , stefan langerman , and perouz taslakian . necklaces , convolutions , and x + y . in _ algorithms - esa 2006 _ , pages 160-171 . springer , 2006 .
max - convolution is an important problem closely resembling standard convolution ; as such , max - convolution occurs frequently across many fields . here we extend the method with the fastest known worst - case runtime , which can be applied to nonnegative vectors by numerically approximating the chebyshev norm , and use this approach to derive two numerically stable methods based on the idea of computing -norms via fast convolution : the first method proposed , with runtime in ( which is less than for any vectors that can be practically realized ) , uses the -norm as a direct approximation of the chebyshev norm . the second approach proposed , with runtime in ( although in practice both perform similarly ) , uses a novel null space projection method , which extracts information from a sequence of -norms to estimate the maximum value in the vector ( this is equivalent to querying a small number of moments from a distribution of bounded support in order to estimate the maximum ) . the -norm approaches are compared to one another and are shown to compute an approximation of the viterbi path in a hidden markov model where the transition matrix is a toeplitz matrix ; the runtime of approximating the viterbi path is thus reduced from steps to steps in practice , and this is demonstrated by inferring the u.s . unemployment rate from the s&p 500 stock index .
the axelrod model is one of the most popular agent - based models of cultural dynamics .in addition to a spatial structure , which is modeled through a graph in which vertices represent individuals and edges potential dyadic interactions between two individuals , it includes two important social factors : social influence and homophily .the former is the tendency of individuals to become more similar when they interact , while the latter is the tendency of individuals to interact more frequently with individuals who are more similar .note that the voter model accounts for social influence since an interaction between two individuals results in a perfect agreement between them .the voter model , however , excludes homophily . to also account for this factor, one needs to be able to define a certain opinion or cultural distance between any two individuals through which the frequency of the interactions between the two individuals can be measured . in the model proposed by political scientist robert axelrod ,each individual is characterized by her opinions on different cultural features , each of which assumes possible states .homophily is modeled by assuming that pairs of neighbors interact at a rate equal to the fraction of cultural features for which they agree , and social influence by assuming that , as a result of the interaction , one of the cultural features for which members of the interacting pair disagree ( if any ) is chosen uniformly at random , and the state of one of both individuals is set equal to the state of the other individual for this cultural feature .more formally , the axelrod model on the one - dimensional lattice is the continuous - time markov chain whose state space consists of all spatial configurations that map the vertex set viewed as the set of all individuals into the set of cultures . to describe the dynamics of the axelrod model , it is convenient to introduce where refers to the coordinate of the vector , which denotes the fraction of cultural features the two vertices and share . to describe the elementary transitions of the spatial configuration, we also introduce the operator defined on the set of configurations by in other words , configuration is obtained from configuration by setting the feature of the individual at vertex equal to the feature of the individual at vertex and leaving the state of all the other features in the system unchanged .the dynamics of the axelrod model is then described by the markov generator defined on the set of cylinder functions by { \mathbf{1}}\bigl\{\eta(x , i ) \neq \eta(y , i ) \bigr\ } \bigl[f ( \sigma_{x , y , i } \eta ) - f ( \eta)\bigr].\ ] ] the expression of the markov generator indicates that the conditional rate at which the feature of vertex is set equal to the feature of vertex given that these two vertices are nearest neighbors that disagree on their feature can be written as = f ( x , y ) \times \frac{1}{f ( 1 - f ( x , y ) ) } \times \frac{1}{2},\ ] ] which , as required , is equal to the fraction of features both vertices have in common , which is the rate at which the vertices interact , times the reciprocal of the number of features for which both vertices disagree , which is the probability that any of these features is the one chosen for update , times the probability one half that vertex rather than vertex is chosen to be updated .note that , when the number of features , the system is static , while when the number of states per feature there is only one possible culture . 
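as a concrete illustration of these dynamics , here is a minimal discrete - time caricature of the model on a ring , written with hypothetical names ( it is not the continuous - time markov chain itself , but each attempted update respects the homophily and social influence rules described above ) :

```python
import numpy as np

def axelrod_step(config, rng):
    """One attempted update of the Axelrod model on a ring.

    config: (n, F) integer array; config[x, i] is the state of feature i at
    vertex x, each state in {0, ..., q-1}. A vertex interacts with a uniformly
    chosen nearest neighbor with probability equal to the fraction of shared
    features (homophily); it then copies one uniformly chosen disagreeing
    feature from that neighbor (social influence).
    """
    n, F = config.shape
    x = rng.integers(n)
    y = (x + rng.choice((-1, 1))) % n          # uniform nearest neighbor
    agree = config[x] == config[y]
    f = agree.mean()                            # fraction of shared features
    if 0.0 < f < 1.0 and rng.random() < f:      # no interaction if f = 0 or f = 1
        i = rng.choice(np.flatnonzero(~agree))  # uniform disagreeing feature
        config[x, i] = config[y, i]

rng = np.random.default_rng(0)
config = rng.integers(0, 3, size=(100, 4))      # n = 100 individuals, F = 4, q = 3
for _ in range(100_000):
    axelrod_step(config, rng)
```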
also , to avoid trivialities, we assume from now on that the two parameters of the system are strictly larger than one . the main question about the axelrod model is whether the system fluctuates and evolves to a global consensus or gets trapped in a highly fragmented configuration . to define this dichotomy rigorously, we say that the system fluctuates whenever \\[-8pt ] \eqntext{\mbox{for all } x \in{\mathbb{z}}\mbox { and } i \in\{1 , 2 , \ldots , f \}}\end{aligned}\ ] ] and fixates if there exists a configuration such that \\[-8pt ] \eqntext{\mbox{for all } x \in{\mathbb{z}}\mbox { and } i \in\{1 , 2 , \ldots , f \}.}\end{aligned}\ ] ] in other words , fixation means that the culture of each individual is only updated a finite number of times , so fluctuation ( [ eqfluctuation ] ) and fixation ( [ eqfixation ] ) exclude each other .we define convergence to a global consensus mathematically as a clustering of the system , that is , note that whether the system fluctuates or fixates depends not only on the number of cultural features and the number of states per feature , but also on the initial distribution .indeed , regardless of the parameters , the system starting from a configuration in which all the individuals agree for a given cultural feature while the states at the other cultural features are independent and occur with the same probability always fluctuates . on the other hand ,regardless of the parameters , the system starting from a configuration in which all the even sites share the same culture and all the odd sites share another culture which is incompatible with the one at even sites always fixates .also , we say that fluctuation / fixation occurs for a given pair of parameters if the one - dimensional system with these parameters fluctuates / fixates when starting from the distribution in which the states of the cultural features within each vertex and among different vertices are independent and uniformly distributed .we also point out that neither fluctuation implies clustering nor fixation excludes clustering in general .indeed , the voter model in dimensions larger than or equal to three for which coexistence occurs is an example of spin system that fluctuates but does not cluster while the biased voter model is an example of spin system that fixates and clusters . 
in spite of these counter - examples, we conjecture that fluctuation implies clustering and fixation excludes clustering for the one - dimensional axelrod model starting from the distribution .we now give a brief review of the previous results about the one - dimensional axelrod model and state the new results proved in this article .since two neighbors are more likely to interact as the number of cultural features increases and the number of states per feature decreases , one expects the phase transition between the fluctuation / clustering regime and the fixation / no clustering regime to be an increasing function in the - plane .the numerical simulations together with the mean - field approximation of suggest that the system starting from : * exhibits consensus ( clustering ) when and * gets trapped in a highly fragmented configuration ( no clustering ) when .looking now at analytical results , the first result in states that the one - dimensional , two - feature , two - state axelrod model clusters .the second result deals with the system on a large but finite interval , and indicates that , for a certain subset of the parameter region , the system gets trapped in a random configuration in which the expected number of cultural domains scales like the number of vertices .this strongly suggests fixation of the infinite system in this parameter region , which we prove in this paper .shortly after , lanchier and schweinsberg realized that the analysis of the axelrod model can be greatly simplified using a coupling to translate problems about the model into problems about a certain system of random walks . to visualize this coupling ,think of each spatial configuration as a -coloring of the set and for all and all cultural features .we call a blockade when it contains particles , or equivalently when the two individuals on each side of completely disagree . when the number of states per feature , lanchier and schweinsberg proved that construction ( [ eqcoupling ] ) induces a system of annihilating symmetric random walks that has a certain site recurrence property , which is equivalent to fluctuation of the axelrod model , when starting from . from this property, they also deduced extinction of the blockades and clustering , thus extending the first result of to the model with two states per feature and any number of features .in contrast , the present paper deals with the fixation part of the conjecture and extends the second result of by again using the random walk representation induced by ( [ eqcoupling ] ) .the first step is to prove that , for all values of the parameters , construction ( [ eqcoupling ] ) induces a system of random walks in which collisions result independently in either annihilation or coalescence with some specific probabilities .coalescing events only occur when the number of states .this is then combined with large deviation estimates for the initial distribution of particles to obtain survival of the blockades when starting from in the parameter region described in the second result of .this not only implies fixation of the infinite system , but also excludes clustering so the system gets trapped in a highly fragmented configuration . 
[ thfixation - general ] assume that ; then fixation ( [ eqfixation ] ) occurs and clustering ( [ eqclustering ] ) does not occur .

[ figure [ figdiagram ] caption : fixation / fluctuation regions in the - plane . the diagram on the left - hand side is simply an enlargement of the diagram on the right - hand side that focuses on small parameters . the continuous straight line with equation is the transition curve conjectured in . the set of crosses is the set of parameters for which the conjecture has been proved analytically : the vertical line of crosses on the left - hand side of the diagrams is the set of parameters for which fluctuation and clustering have been proved in , while the triangular set of crosses is the set of parameters such that , for which fixation is proved in theorem [ thfixation - general ] . the dashed line is the straight line with equation , where the slope is such that . ]

interestingly , though the second result in relies on a coupling between the axelrod model and a certain urn problem along with some combinatorial techniques that strongly differ from the techniques in our proof , both approaches lead to the same sufficient condition ( [ eqfixation - general ] ) . the set of parameters described implicitly in condition ( [ eqfixation - general ] ) corresponds to the triangular set of crosses in the two diagrams of figure [ figdiagram ] , which we obtained using a computer program . the picture suggests that this parameter region is ( almost ) equal to the set of parameters below a certain straight line going through the origin . to find the asymptotic slope , observe that if , then . in other respects , if , then we have , from which we deduce that . this proves that the condition in the theorem holds for , and so for all , since is decreasing with respect to its second variable . in particular , fixation occurs whenever ; see figure [ figdiagram ] for a picture of the straight line with equation . finally , though , and therefore theorem [ thfixation - general ] does not imply fixation for the two - feature three - state axelrod model , our approach can be improved to also obtain fixation in this case . [ thfixation-2 ] the conclusion of theorem [ thfixation - general ] holds whenever and . note that this fixation result is sharp since the first result in gives fluctuation and clustering of the two - feature two - state axelrod model in one dimension . in particular , the two - feature model fixates if and only if the number of states per feature . to conclude , we note that , in contrast with the techniques introduced in that heavily rely on the fact that the system starts from , our proof of theorem [ thfixation - general ] easily extends to show that , starting from more general product measures , the one - dimensional system fixates under a certain assumption stronger than ( [ eqfixation - general ] ) . however , the estimates of lemmas [ lemoutcome ] and [ lemgeneral ] , and consequently the condition for fixation , become very messy in this more general context , while the proof does not bring any new interesting argument . therefore , we focus for simplicity on the most natural initial distribution . as pointed out in , one key to understanding the axelrod model is to keep track of the disagreements between neighbors rather than the actual set of opinions of each individual .
when the number of states per feature , this results in a collection of nonindependent systems of annihilating symmetric random walks .lanchier and schweinsberg have recently studied these systems of random walks in detail and deduced from their analysis that the two - state axelrod model clusters in one dimension .when the number of states per feature is larger than two , these systems are more complicated because each collision between two random walks can result in either both random walks annihilating or both random walks coalescing . in this section , we recall the connection between the axelrod model and systems of symmetric random walks , and complete the construction given in to also include the case in which coalescing events take place . to begin with , we think of each edge of the graph as having levels , and place a particle on an edge at level if and only if the two individuals that this edge connects disagree on their feature .more precisely , we define the process and place a particle at site at level whenever . to describe this system ,it is convenient to also introduce the process that keeps track of the number of particles per site , and to call site a -site whenever it contains a total of particles : . to understand the dynamics of these particles ,the first key is to observe that , since each interaction between two individuals is equally likely to affect the culture of any of these two individuals , each particle moves one unit to the right or one unit to the left with equal probability one half .because the rate at which two neighbors interact is proportional to the number of cultural features they have in common , a particle at jumps at a rate that depends on the total number of particles located at site , which induces systems of particles which are not independent .more precisely , since two adjacent vertices that disagree on exactly of their features , and therefore are connected by an edge that contains a pile of particles , interact at rate , the fraction of features they share , conditional on the event that is a -site , each particle at site jumps at rate which represents the rate at which both vertices interact times the probability that any of the particles is the one selected to jump . 
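the coupling itself is easy to compute from a configuration ; a small sketch ( our own names ) that places a particle on edge ( x , x + 1 ) at level i exactly when the two neighbors disagree on feature i , and flags the blockades :

```python
import numpy as np

def edge_particles(config):
    """Map an Axelrod configuration on a ring to its particle system.

    config: (n, F) integer array of cultures. Returns the (n, F) indicator of
    a particle on edge (x, x+1) at level i, plus per-edge particle counts; an
    edge carrying F particles is a blockade (its particles are frozen).
    """
    disagree = config != np.roll(config, -1, axis=0)
    return disagree, disagree.sum(axis=1)

rng = np.random.default_rng(1)
config = rng.integers(0, 3, size=(20, 4))
particles, counts = edge_particles(config)
print("blockade edges:", np.flatnonzero(counts == config.shape[1]))
```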
motivated by ( [ eqrate ] ), the particles at site are said to be active if the site has less than particles , and frozen if the site has particles , in which case we call a blockade .to complete the construction of these systems of random walks , the last step is to understand the outcome of a collision between two particles .assume that and are occupied at time and that the particle at jumps one unit to the right at time , an event that we call a collision and that we denote by this happens when the individual at disagrees with her two nearest neighbors on her feature at time and imitates the feature of her left neighbor at time .this collision results in two possible outcomes .if the individuals at and agree on their feature just after the update , or equivalently the individuals at and agree on their feature just before the update , then becomes empty so both particles annihilate , which we write on the other hand , if the individuals at and still disagree on their feature after the update , then is occupied at time so both particles coalesce , which we write we refer to figure [ figparticles ] for an illustration of the coupling between the four - feature , three - state axelrod model and systems of annihilating - coalescing random walks .each particle is represented by a cross and the three possible states by the colors black , grey and white . in our example, there are two jumps resulting in two collisions : an annihilating event then a coalescing event .we also refer the reader to figure [ figwalks ] for simulation pictures of the systems of random walks when .lanchier and schweinsberg observed that , when , random walks can only annihilate , which was the key to proving clustering .this is due to the fact that , in a simplistic world where there are only two possible alternatives for each cultural feature , two individuals who disagree with a third one must agree . in our context , the individuals at and must agree just before the update when , which results in an annihilating event .in contrast , when the number of states per feature is larger , the three consecutive vertices may have three different views on their cultural feature , which results in a coalescing event .we point out that , since the system of random walks collects all the times at which pairs of neighbors interact , the knowledge of the initial configuration of the axelrod model and the system of random walks up to time allows us to re - construct the axelrod model up to time regardless of the value of the parameters .there is , however , a crucial difference depending on the number of states .when , collisions always result in annihilating events , so knowing the configuration of the axelrod model is unimportant in determining the evolution of the random walks .in contrast , when , whether a collision results in a coalescing or an annihilating event depends on the configuration of the axelrod model just before the time of the collision .the key to all our results is that , in spite of this dependency , collisions result independently in either an annihilating event or a coalescing event with some fixed probabilities .in particular , the outcome of a collision is independent of the past of the system of random walks though it is not independent of the past of the axelrod model itself . 
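this independence of collision outcomes can be illustrated empirically . in the lemma below , the annihilation probability reduces to the chance that the two outer ancestors agree , given that each disagrees with the middle one ; for independent states uniform over q possibilities one expects this to equal 1 / ( q - 1 ) ( this closed form is our computation , since the exact constants are elided in the extracted text ) . a small monte carlo sketch :

```python
import numpy as np

def annihilation_probability(q, trials=200_000, seed=0):
    """Draw iid uniform states (L, M, R) for the three ancestors, condition on
    L != M and R != M (a collision occurred), and report how often L == R
    (annihilation; otherwise the two random walks coalesce)."""
    rng = np.random.default_rng(seed)
    L, M, R = rng.integers(0, q, size=(3, trials))
    valid = (L != M) & (R != M)
    return np.mean(L[valid] == R[valid])

for q in (2, 3, 5):
    print(q, annihilation_probability(q), 1 / (q - 1))
```

for q = 2 the estimate is 1 , consistent with the observation above that two - state collisions always annihilate .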
to prove this result , we need to construct the one - dimensional process graphically from a percolation structure and then define active paths which basically keep track of the descendants of the initial opinions .first , we consider the following collections of independent poisson processes and random variables : for each pair of vertex and feature : * we let be a rate one poisson process ; * we denote by its arrival time : ; * we let be a collection of independent bernoulli variables with * and we let be a collection of independent .the axelrod model is then constructed as follows . at time , we draw an arrow labeled from vertex to vertex to indicate that if then the individual at vertex imitates the feature of the individual at vertex . in particular , as indicated in ( [ eqrate ] ) , the rate at which the imitation occurs is equal to one half times the fraction of cultural features both vertices have in common divided by the number of features for which both vertices disagree , which indeed produces the local transition rates of the axelrod model .the graphical representation defines a random graph structure , also called percolation structure , from which the process starting from any initial configuration can be constructed by induction based on an argument due to harris .each arrow in this percolation structure is said to be active if condition ( [ eqactive ] ) is satisfied .note that whether an arrow is active or not depends on the initial configuration , and that the fact that an -arrow from vertex to vertex at time is active implies that the feature of must be equal to the feature of at time . we say that there is an active -path from to there are sequences of times and vertices such that the following two conditions hold : for , there is an active -arrow from to at time . for , there is no active -arrow that points at .we say that there is a generalized active path from to whenever for , there is an active arrow from to at time . later, we will use the notation and to indicate the existence of an active -path and a generalized active path , respectively .conditions 1 and 2 above imply that moreover , because of the definition of active arrows and simple induction , the cultural feature of vertex at time is equal to the initial value of the cultural feature of , so we call vertex the ancestor of vertex at time for the feature . 
in contrast , generalized active paths , which can be seen as concatenations of active -paths for possibly different values of , do not have such an interpretation , but the concept will be useful later to prove fixation . [ lemoutcome ] conditional on the realization of the system of random walks until time and the event that at time , we have . to prove this , let . due to one - dimensional nearest neighbor interactions , active -paths can not cross each other , from which we deduce that , where denotes the ancestor at time for the feature , that is , . moreover , conditional on the event of a collision at time , there is a particle at and a particle at at time ; therefore , from ( [ eqoutcome-1 ] ) and ( [ eqoutcome-2 ] ) , we deduce that , conditional on at time , . in other respects , we have . in particular , the outcome of a collision at time ( either an annihilating event or a coalescing event ) is independent of the realization of the system of random walks up to time . moreover , since the initial states are independent and uniformly distributed , the conditional probability of an annihilating event is equal to the conditional probability , where are independent uniform random variables over . by conditioning on the possible values of , we obtain that ( [ eqoutcome-3 ] ) is equal to . finally , since each collision results in either an annihilating event or a coalescing event , the conditional probability of a coalescing event directly follows . this completes the proof . the main objective of this section is to extend a result of to the axelrod model , and obtain a sufficient condition for fixation which is based on certain properties of the active -paths . [ lemfixation ] for all , let . then , the axelrod model fixates whenever . extending an idea of bramson and griffeath and generalizing the technique in , we set for every cultural feature and define recursively the sequence of stopping times . in other words , the stopping time is the time the individual at the origin changes the state of her cultural feature . also , for each cultural feature , we define the random variables as well as the collection of events . see the left - hand side of figure [ figactive - path ] for a schematic illustration of the stopping times and the corresponding vertices . assumption ( [ eqfixation-1 ] ) together with reflection symmetry implies that , for each cultural feature , the event occurs almost surely for some . it follows that . since the event that the individual at the origin changes her culture infinitely often is also the event that at least one of the events occurs , in view of the previous inequality , in order to establish fixation , it suffices to prove that . our proof of ( [ eqfixation-2 ] ) relies on some symmetry properties of the axelrod model that do not hold for the cyclic particle systems considered in . first , we let be the set of descendants of at time , and denote by its cardinality . [ figure [ figactive - path ] caption : dashed lines represent active -paths for some , whereas the continuous thick line on the right - hand side is a generalized active path as defined in section [ secwalks ] . ] since each interaction between two individuals is equally likely to affect the culture of each of these two individuals , the number of descendants of any given site is a martingale whose expected value is constantly equal to one . in particular , the martingale convergence theorem implies that ; therefore , for almost all realizations of the process , the number of descendants of converges to a finite value .
since , in addition , the number of descendants is an integer - valued process , using that simultaneous updates occur with probability zero , we deduce that the set of descendants inherits the properties of its cardinality in the sense that , with probability one ,

\rho ( x , i ) := \inf \bigl\{ t > 0 : i_t ( x , i ) = i_{\infty} ( x , i ) \bigr\} < \infty ,

where , due to one - dimensional nearest neighbor interactions , is a random interval which is almost surely finite . to conclude , we simply observe that , conditional on , the last time the individual at the origin changes the state of her cultural feature is at most equal to the largest of the stopping times for , from which it follows that , according to ( [ eqfixation-3 ] ) . this proves ( [ eqfixation-2 ] ) and therefore the lemma . in view of lemma [ lemfixation ] , in order to prove fixation , it suffices to show that the probability of the event in equation ( [ eqfixation-1 ] ) , which we denote by , tends to zero as . the first step is to extend the construction proposed by bramson and griffeath to the axelrod model , the main difficulty being that two active paths at different levels can cross each other . let be the first time an active -path for some that originates from hits the origin , and observe that , from which it follows that . denote by the initial position of this active path . also , we set

z_+ := \max \bigl\{ z \in \mathbb{z} : ( z , 0 ) \leadsto ( 0 , \sigma ) \mbox{ for some } \sigma < \tau \bigr\} \geq 0

and define . we point out that in general , since vertex is defined from the set of active -paths whereas vertex is defined from generalized active paths that are concatenations of active -paths with different values of . see the right - hand side of figure [ figactive - path ] for an illustration where the two vertices are different . now , note that each blockade which is initially in the interval must have been destroyed , that is , turned into a set of active particles through the annihilation of one of the particles that constitute the blockade , by time . moreover , active particles initially outside the interval can not jump inside the space - time region delimited by the two generalized active paths implicitly defined in ( [ eqpaths ] ) . indeed , assuming that such particles exist would contradict either the minimality of or the maximality of . in particular , on the event , all the blockades initially in must have been destroyed before time by either active particles initially in or active particles that result from these blockade destructions . to estimate the probability of this last event , we first give a weight of to each particle initially active by setting . to define when is initially occupied by a blockade , we observe that by lemma [ lemoutcome ] the number of collisions required to break a blockade is geometric with mean .
moreover , each blockade destruction results in a total of active particles . therefore , we set where are independent geometric random variables with mean . the fact that occurs only if all the blockades initially in are destroyed by active particles initially in or active particles resulting from these blockade destructions , can then be written as $\subset \{ \sum_{u = l}^{r} \phi(u) \leq 0 \mbox{ for some } l \mbox{ and some } r \}$ . to understand the first inclusion , simply observe that the sum of the is equal to the number of collisions required to break all the blockades minus the total number of active particles initially in the interval or created from the destruction of blockades initially in . since the number of collisions is bounded by the number of such active particles , all the blockades initially in can only be destroyed if the number of such active particles exceeds the number of collisions required , which gives the first inclusion . the second inclusion simply follows from the fact that the expression of can be understood heuristically as follows : since the are independent , one expects that fixation occurs if . but which , since , is precisely equal to . to rigorously deduce fixation from the positivity of the expected value , which is done in the next two lemmas , we now prove large deviation estimates for . the first of these two lemmas will be used in the proof of the second one to show that the total number of collisions required to break all the blockades in a large interval does not deviate too much from its expected value . [ lemgeometric ] let be an infinite sequence of independent geometric random variables with the same parameter . then , for all , there exists such that let for all . since , in a sequence of independent bernoulli trials with success probability , the event that the success occurs at step is included in the event that successes occur in the first steps , we have letting denote the integer part of , we deduce that since large deviation estimates for the binomial distribution imply that for a suitable constant , the result follows . [ lemgeneral ] let and assume that . then for a suitable constant and all sufficiently large . to begin with , we define since the random variables , , are independent , standard large deviation estimates for the binomial distribution imply that for all there exists such that for all $i = 0 , 1 , \ldots , f$ , where with . the expression for follows from the fact that initially each level of each site is independently occupied with probability , which implies that the are independent binomial random variables . let be the event that then , there exists a constant such that , on the event , in particular , letting be the integer part of , we have now , since , there exists small such that from which we deduce , also using ( [ eqgeneral-2 ] ) and lemma [ lemgeometric ] , that for all sufficiently large . combining ( [ eqgeneral-1 ] ) and ( [ eqgeneral-3 ] ) , we obtain for all sufficiently large . using the inclusion in ( [ eqinclusion ] ) and lemma [ lemgeneral ] , we deduce this , together with lemma [ lemfixation ] , implies fixation whenever . to begin with , note that , when and , we have for the comparison function defined in the previous section .
in particular , to find a good enough upper bound for the probability of in the case and , one needs to define a new comparison function that also takes into account additional events that promote fixation , such as collisions between active particles and blockade formations . recall that in the comparison function of section [ secfixation - general ] , each particle which is initially active is assigned a weight of , which corresponds to the worst - case scenario in which the active particle hits a blockade . however , each active particle can also hit another active particle or form a new blockade with another active particle . more precisely , there are four possible outcomes for each active particle : if the active particle hits a blockade , it is assigned a weight of . if the active particle coalesces with another active particle , then at most one collision with a blockade can result from this pair of particles so the pair is assigned a total weight of ; that is , each particle of the pair is individually assigned a weight of . if the active particle annihilates with another active particle , then no collision with a blockade can result from this pair so each active particle that annihilates with another active particle is assigned a weight of 0 . if the active particle forms a blockade with another active particle , then following the same approach as in the previous section the pair is assigned a total weight equal to plus a geometric random variable with mean . in view of cases 2 - 4 above , the weight of an active particle that either collides with another active particle or forms a blockade with another active particle is at least , and therefore we define a new comparison function , again denoted by , as follows : where the random variables are again independent geometric random variables with the same expected value . the value of when is the same as in the previous section whereas we distinguish between active particles that satisfy case 1 or cases 2 - 4 above . the same reasoning and construction as in section [ secfixation - general ] again imply that for this new comparison function . to prove that the probability of the event on the right - hand side converges to zero as , we follow the same strategy as for lemma [ lemgeneral ] but also find a lower bound for the probability that a particle initially active either collides with another active particle or forms a blockade with another active particle , which is done in the next lemma . [ lemdeviation ] assume that and . then , there exists such that where as in lemma [ lemgeneral ] . the first step is to find a lower bound for the initial number of active particles that will either collide or form a blockade with another active particle . to do so , we introduce the following definition : an active particle initially at site is said to be a good particle if where $\{u , v\} = \{2n - 1/2 , 2n + 1/2\}$ for some $n \in \mathbb{Z}$ . in other words , we partition the lattice into countably many pairs of adjacent sites , and call an active particle at time 0 a good particle if the other site of the pair is initially occupied by an active particle as well . an active particle which is not good is called a bad particle .
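as a hedged illustration of this classification , the sketch below draws a random initial configuration in which each site of the form n + 1/2 is independently active with an assumed density p ( blockades are ignored , which simplifies the paper s actual initial law ) , and checks that the fraction of active particles that are good is close to p .

....
# hedged sketch of the good / bad particle classification :
# the lattice is partitioned into pairs { 2n - 1/2 , 2n + 1/2 } and a
# particle is good when its partner site is active as well .
import random

p, n_pairs = 0.3, 200000  # assumed density and number of pairs
good = bad = 0
for _ in range(n_pairs):
    left = random.random() < p   # site 2n - 1/2 active ?
    right = random.random() < p  # site 2n + 1/2 active ?
    good += 2 * (left and right) # both members of the pair are good
    bad += (left != right)       # a lone active particle is bad
print("fraction of good particles :", good / max(1, good + bad), "vs p =", p)
....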
since initially each level of each site is independently occupied with probability , the variables are independent binomial random variables , so for as in ( [ eqdeviation-1 ] ) we have where . similarly , we have since in addition the events that nonoverlapping pairs of adjacent sites are initially occupied by two good particles , or one bad particle , or one blockade , or one bad particle and one blockade , or two blockades are independent , standard large deviation estimates for the binomial distribution imply that there exists a positive constant such that where and denote respectively the initial number of good particles , the initial number of bad particles and the initial number of blockades in the interval . to estimate the probability that a pair of good particles collide or form a blockade , we first observe that , when there are only two features , the graphical representation of the axelrod model simplifies as follows : for each pair of neighbors , draw an arrow at the times of a poisson process with intensity one fourth , which is equal to half of the rate at which neighbors who agree on one cultural feature interact . if the two neighbors agree on exactly one cultural feature at the time of the interaction then the culture of the individual at vertex becomes the same as the culture of the individual at vertex . in this graphical representation , there are exactly six possible arrows that may affect the system of random walks at the pair of sites , namely the event that one of the two arrows in the first line of ( [ eqarrows ] ) appears before any of the four other ones occurs with probability two ( arrows ) over six ( arrows ) = 1/3 , and on the intersection of this event and the event that there is initially a pair of good particles at , the two particles either collide or form a blockade . moreover , the event that one of the two arrows in the first line appears first only depends on the realization of the graphical representation in in particular , parts of the graphical representation associated with nonadjacent pairs do not intersect which , by independence of the poisson processes , implies that the events that one of the two arrows in the first line of ( [ eqarrows ] ) appears before any of the other ones are independent for nonadjacent pairs . it follows that the initial number of good particles in that either collide or form a blockade is stochastically larger than a binomial random variable with trials and success probability one third . large deviation estimates for the binomial distribution then imply that for a suitable constant . now , let be the event that and observe that there exists a constant such that , on the event , in particular , letting be the integer part of , we have in other respects , recalling the definition of for , we have which , recalling the definition of , is equal to for all . in particular , there exists small such that since , the previous estimate , ( [ eqdeviation-4 ] ) and lemma [ lemgeometric ] imply that for all sufficiently large . combining ( [ eqdeviation-2 ] ) , ( [ eqdeviation-3 ] ) and ( [ eqdeviation-5 ] ) , we obtain for all sufficiently large , which completes the proof .
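the ` two arrows over six arrows ' computation used above is also easy to check numerically : with six independent poisson clocks of equal intensity , the chance that one of a designated pair rings first is 2/6 = 1/3 . the sketch below is an illustration only ; the common rate is arbitrary .

....
# hedged numerical check that one of a designated pair of six
# independent exponential clocks rings first with probability 1/3 .
import random

trials, wins = 200000, 0
for _ in range(trials):
    clocks = [random.expovariate(0.25) for _ in range(6)]  # equal rates
    wins += clocks.index(min(clocks)) < 2  # first two clocks designated
print(wins / trials, "vs 1/3")
....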
as in the previous section , ( [ eqinclusion-2 ] ) and lemma [ lemdeviation ] imply that which , together with lemma [ lemfixation ] , implies fixation when and . the authors would like to thank an anonymous referee for her / his careful reading of the proofs and suggestions to improve the clarity of the paper , and for pointing out a mistake in a preliminary version of the proof of lemma [ lemdeviation ] .
the axelrod model is a spatial stochastic model for the dynamics of cultures which includes two important social factors : social influence , the tendency of individuals to become more similar when they interact , and homophily , the tendency of individuals to interact more frequently with individuals who are more similar . each vertex of the interaction network is characterized by its culture , a vector of cultural features that can each assume different states . pairs of neighbors interact at a rate proportional to the number of cultural features they have in common , which results in the interacting pair having one more cultural feature in common . in this article , we continue the analysis of the axelrod model initiated by the first author by proving that the one - dimensional system fixates when where the slope satisfies the equation . in addition , we show that the two - feature model with at least three states fixates . this last result is sharp since it is known from previous works that the one - dimensional two - feature two - state axelrod model clusters .
the _ sp theory of intelligence _ aims to simplify and integrate ideas in artificial intelligence , mainstream computing , and human perception and cognition , with information compression as a unifying theme . the theory is described in several peer - reviewed articles , and most fully in . the main purpose of this article is to describe how the sp theory may be applied to the understanding of natural vision and the development of computer vision , and to discuss associated issues . both of those themes ( natural vision and artificial vision ) are discussed together throughout the article , since each one may illuminate the other . in broad terms , the potential benefits of the sp theory in those two areas are the simplification and integration of concepts , deeper insights , better performance ( of artificial systems ) , and the seamless integration of vision with other sensory modalities , and with other aspects of intelligence such as reasoning , planning , problem solving , and unsupervised learning . what is perhaps the main attraction of the theory is the potential for one relatively simple framework to accommodate several different aspects of intelligence , including vision . as a preliminary , the next section describes the theory in outline , with associated ideas . the sp theory combines conceptual simplicity with descriptive and explanatory power in several areas , including concepts of ` computing ' , the representation of knowledge , natural language processing , pattern recognition , several kinds of reasoning , the storage and retrieval of information , planning and problem solving , unsupervised learning , information compression , and human perception and cognition . since the sp theory has been described quite fully in , only the essentials will be given here , with enough detail to ensure that the rest of the article makes sense . the main elements of the sp theory are : * the theory is conceived as an abstract system that , like a brain , may receive ` new ' information via its senses and store some or all of it as ` old ' information . * all new and old information is expressed as arrays of atomic symbols ( _ patterns _ ) in one or two dimensions . * the system is designed for the unsupervised learning of old patterns by compression of new patterns . * an important part of this process is , where possible , the economical encoding of new patterns in terms of old patterns . this may be seen to achieve such things as pattern recognition , parsing or understanding of natural language , or other kinds of interpretation of incoming information in terms of stored knowledge , including several kinds of reasoning . * compression of information is achieved via the matching and unification ( merging ) of patterns , with key roles for the frequency of occurrence of patterns , and their sizes . * the concept of _ multiple alignment _ , outlined in section [ multiple_alignment_section ] , is a powerful central idea , similar to the concept of multiple alignment in bioinformatics but with important differences . * owing to the intimate connection between information compression and concepts of prediction and probability ( see , for example , * ? ? ? * ) , it is relatively straightforward for the sp system to calculate probabilities for inferences made by the system , and probabilities for parsings , recognition of patterns , and so on .
* in developing the theory , i have tried to take advantage of what is known about the psychological and neurophysiological aspects of human perception and cognition , and to ensure that the theory is compatible with such knowledge . the way the sp concepts may be realised with neurons ( _ sp - neural _ ) is discussed in ( * ? ? ? * chapter 11 ) . the sp theory is realised in the form of computer models which may be regarded as first versions of the _ sp machine _ , an expression of the theory and a means for it to be applied . the sp70 model is the most comprehensive version , with capabilities in the building of multiple alignments and unsupervised learning . the sp62 model is the same but it lacks any ability to learn . although sp62 is a subset of sp70 , it has proved convenient to maintain them as separate models . at the heart of the sp models is a process for finding good full or partial matches between patterns ( * ? ? ? * appendix a ) , with a flexibility that is somewhat like the winmerge utility for finding similarities and differences between files , or standard ` dynamic programming ' methods for the alignment of sequences . the main difference between the sp process and the others is that the former can deliver several alternative matches between patterns , while winmerge and standard methods deliver one ` best ' result . multiple alignments are built in stages , with pairwise matching and merging of patterns , and with merged patterns from any stage being carried forward to later stages . at all stages , the aim is to encode new information economically in terms of old information and to weed out multiple alignments that score poorly in that regard . in the sp70 model , there are additional processes for deriving old patterns from multiple alignments , evaluating sets of newly - created old patterns in terms of their effectiveness for the economical encoding of the new information , and weeding out low - scoring sets . more detail about sp70 may be found in ( * ? ? ? * sections 3.9 and 9.2 ) . the sp61 model , a precursor of sp62 which is very similar to it , is described in sections 3.9 and 3.10 ( _ ibid . _ ) . the main limitations of current models are : * that they work with one - dimensional patterns and have not yet been generalised to work with 2d patterns ( although a preliminary attempt has been made to consider how the sp principles may be generalised to patterns in two dimensions ( * ? ? ? * section 13.2.1 ) ) . * that the arithmetic meaning of numbers is not recognised ( they are simply treated as patterns ) . * that sp70 does not yet learn intermediate levels of abstraction in grammars , or discontinuous patterns in data . i believe these problems are soluble . potential solutions will be mentioned at relevant points below . owing to the first of these limitations , most of the examples in this article , and much of the discussion , will relate to one - dimensional patterns . like most problems in artificial intelligence , the problems that are addressed in the sp models ( finding good full and partial matches between patterns , the formation of multiple alignments , and the learning of useful sets of patterns ) are not tractable if the requirement is to find ideal solutions . but , as with most programs in artificial intelligence , things become much easier if one is content with solutions that are reasonably good and not necessarily perfect .
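for readers who want something concrete , the python sketch below illustrates the kind of ` dynamic programming ' matching mentioned above . it is emphatically not the sp62 algorithm : unlike this sketch , sp62 can deliver several alternative matches , and its scoring is based on economical encoding rather than the arbitrary hit / miss scores assumed here .

....
# a minimal , hedged sketch of scoring full or partial matches between
# a new pattern and an old pattern with a classic alignment table .
def best_match_score(new, old, hit=1, miss=-1):
    # scores[i][j] is the best score for new[:i] against old[:j] ;
    # unmatched symbols may be skipped at no cost in this toy version
    rows, cols = len(new) + 1, len(old) + 1
    scores = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        for j in range(1, cols):
            pair = scores[i - 1][j - 1] + (hit if new[i - 1] == old[j - 1] else miss)
            scores[i][j] = max(pair, scores[i - 1][j], scores[i][j - 1])
    return scores[-1][-1]

print(best_match_score("t w o k i t t e n s".split(),
                       "t w o k i t t e m s".split()))
....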
like most programs in artificial intelligence , the sp models apply constraints on the process of searching , to reduce the size of the search space so that useful results may be achieved with the available computational resources . an example of multiple alignment in the sp system is shown in figure [ kittens_figure ] . here , row 0 contains a new pattern representing a sentence : ` t w o k i t t e n s p l a y ' , while each of rows 1 to 8 contains an old pattern representing a grammatical rule or a word with grammatical markers . this multiple alignment , which achieves the effect of parsing the sentence in terms of grammatical structures , is the best of several built by the sp62 model when it is supplied with the new pattern and a set of old patterns that includes those shown in the figure and several others as well . in this example , and others in this article , ` best ' means that the multiple alignment in the figure is the one that enables the new pattern to be encoded most economically in terms of the old patterns . details of how the encoding is done may be found in ( * ? ? ? * section 3.5 ) . a point of interest about this multiple alignment is the way that , in row 8 , the symbols ` np ' and ` vp ' mark the grammatical dependency between the plural subject of the sentence ( ` k i t t e n s ' ) and the plural main verb ( ` p l a y ' ) . this kind of dependency is often described as ` discontinuous ' because there may be arbitrarily large amounts of intervening structure between one element of the dependency and another . this method of marking discontinuous dependencies is , arguably , simpler and more elegant than how they are marked in other grammatical systems . much of the descriptive and explanatory power of the sp theory is due to the versatility of the multiple alignment concept in : * _ the representation of knowledge_. despite the simplicity of sp patterns , the way they are processed within the multiple alignment framework gives them the versatility to represent several kinds of knowledge , including grammars for natural languages , ontologies , class hierarchies , part - whole hierarchies , decision networks and trees , relational tuples , if - then rules , associations of medical signs and symptoms , causal relations , and concepts in mathematics and logic such as ` function ' , ` variable ' , ` value ' , and ` set ' . * _ the processing of knowledge_. the sp system has demonstrable capabilities in several areas , including natural language processing , pattern recognition , several kinds of reasoning , the storage and retrieval of information , planning , problem solving , unsupervised learning , and information compression . it is pertinent to mention that part of the inspiration for the sp theory is research by fred attneave ( eg , * ? ? ? * ) , horace barlow ( eg , * ? ? ? * ) , and others , showing that aspects of visual perception ( and , more generally , the workings of brains and nervous systems ) may be understood in terms of information compression . other sources of inspiration for the sp theory include research on ` minimum length encoding ' ( eg , * ? ? ? * ) , and evidence for the importance of information compression in the unsupervised learning of language ( eg , * ? ? ? * ) , and in mathematics and logic ( * ? ? ? * chapters 2 and 10 ) . at an abstract level , information compression brings three main benefits : * for any given body of information , , it reduces the amount of storage space required .
* reducing the size of can mean increases in efficiency . it would , for example , mean less searching if we are trying to find something within . * perhaps most importantly , information compression provides the key to inductive prediction . in the sp system , it is the basis for all kinds of inference , and for calculations of probabilities . in animals , we would expect these things to have been favoured by natural selection because of the competitive advantage they can bring . and they are likely to be useful in artificial systems . in the sp framework , information compression is achieved via the discovery of recurrent patterns ( like those shown in rows 1 to 8 in figure [ kittens_figure ] and columns 1 to 6 in figure [ class_part_plant_figure ] ) , and also via the economical encoding of new information in terms of old patterns , as explained in ( * ? ? ? * section 3.5 ) . it is now widely accepted that , at ` low ' levels in vertebrate and invertebrate visual systems , there are processes that recognise perceptual features such as edges and corners . some relevant evidence is outlined in subsections below . in this section , the main focus is on features that may be regarded as ` explicit ' because they derive directly from visual input . but it is well known that we may ` see ' things that have little or no counterpart in the visual input , such as the ` subjective contours ' in ( * ? ? ? * figure 2 - 6 ) or the edge of one leaf where it overlaps another in ( * ? ? ? * figure 4 - 1 ( a ) ) . these kinds of ` implicit ' features will be considered in section [ seeing_things_that_are_not_there ] . in two respects , explicit perceptual features sit comfortably with the sp theory : * they may be seen to provide a means of encoding perceptual information in an economical manner . for example , writes that `` common objects may be represented with great economy , and fairly striking fidelity , by copying the points at which their contours change direction maximally , and then connecting these points appropriately with a straight edge . '' ( p. 185 ) . he illustrates this with the now - famous picture of a sleeping cat , reproduced in figure [ sleeping_cat_figure ] . * at lowish levels , perceptual features may function as if they were the atomic symbols that provide the foundation for all higher - level structures , even though they themselves have been constructed from lower - level components . as just indicated , vision begins with images as they are first projected , not perceptual features . the latter must be somehow discovered or detected within the images . the following subsections consider how the sp theory may be applied in this area , starting with a consideration of options for the encoding of light intensities . in the design of artificial systems for vision , it seems natural and obvious that light intensities in images should be expressed as numbers . but , in itself , the sp system recognises only atomic symbols that can be matched in an all - or - nothing manner with other atomic symbols . it is true that , in principle , it may be supplied with patterns that express peano s axioms or similar information , and it may then interpret numbers correctly ( see * ? ? ? * chapter 10 ) . but this has not yet been explored in any depth and , in any case , numbers are probably a distraction in understanding how sp principles may be applied to vision . to simplify the discussion here , we shall assume that we are processing monochrome images with just two categories of pixel : black and white .
with that kind of representation , the lightness in any given small area may be encoded via the _ densities _ of black and white pixels in that area , without using explicit numbers . it is true that such pixels may be represented with the symbols ` 1 ' and ` 0 ' but these are simply atomic symbols ( as required by the sp system ) , without numerical meanings . it is relevant to this discussion to consider briefly how edges may be detected with neurons . figure [ limulus_figure ] shows two sets of recordings from a single visual receptor ( ` ommatidium ' ) of the horseshoe crab , _ limulus _ . in both sets of recordings , the eye of the crab was illuminated in a rectangular area bordered by a dark rectangle of the same size ( producing a step function as shown at the top right of the figure ) . in both cases , successive recordings were taken with the pair of rectangles in successive positions across the eye along a line which is at right angles to the boundary between light and dark areas . this achieves the same effect as but is easier to implement than keeping the two rectangles in one position and taking recordings from a range of receptors across the light and dark areas . in the top set of recordings ( triangles ) all the ommatidia except the one from which recordings were being taken were masked from receiving any light . in this case , the target receptor responds with frequent impulses when the light is bright and at a sharply lower rate in the dark . in the bottom set of recordings ( circles ) the mask was removed so that all the ommatidia were exposed to the pattern of light and dark rectangles . in this case , positive and negative responses are exaggerated near the border between light and dark areas but the target receptor fires at or near a background rate in areas which are evenly illuminated ( either light or dark ) . this kind of effect ( which is seen elsewhere in the animal kingdom ) appears to be due to lateral inhibition between neurons in the visual system ( * ? ? ? * pp 172 - 174 ) . it has been recognised for some time that the dampening of the response in regions of uniform illumination ( light or dark ) may be seen to achieve the effect of compressing visual information by extracting redundancy from it . it is somewhat like the ` run - length coding ' technique for compression of information : a symbol or group of symbols that repeats in a contiguous sequence may be reduced to a single instance , perhaps marked for repetition . a boundary between one uniform area and another may be represented economically by two such compressed representations , side - by - side . in the neural case , the upswing near the light / dark boundary may be seen as an economical representation of the idea that the whole of the preceding area is light , the downswing on the other side may be seen as a succinct marking of the fact that the following area is dark , while the two together may be seen to serve as a compressed representation of the boundary . although it is less directly relevant to the present discussion , it is pertinent to mention that there are ` complex ' cells in mammalian visual systems that respond selectively to edges , and also to ` lines ' and ` slits ' ( see , for example , * ? ? ? * pp 215 - 219 ) . in the sp framework , the effect of run - length coding may be achieved via recursion , as illustrated in figure [ recursive_run - length_coding_figure ] .
here , each instance of `` a b c ` ' in the new pattern in row 0 is matched to an appearance of the self - referential old pattern `` x 1 a b c x 1 # x # x ` ' . it is self - referential because `` x 1 # x ` ' in the body of the pattern may be matched and unified with `` x 1 ... # x ` ' at the start and end of the pattern . the encoding of the new pattern which we may derive from this multiple alignment is the relatively short sequence `` x 1 # x ` . ' as before , two such encodings , side - by - side , would be an economical representation of the boundary between one uniform region and another . of course , this does not look much like lateral inhibition with neurons , as outlined in section [ edge_detection_with_neurons_section ] . but at an abstract level , the two things may be seen to produce the same result : the extraction of redundancy from uniform regions , leaving information about the boundaries between such regions as an economical representation of the raw data , like david marr s ` primal sketch ' . with other developments , such as the generalisation of the sp concepts to two dimensions , this kind of technique may be applied in computer vision . meanwhile , existing techniques , such as those described in ( * ? ? ? * chapter 4 ) , may serve instead . so far , we have said nothing about the orientations of edges or their lengths . in principle , those things may be encoded mathematically , and very economically , in the manner of computer graphics . but that does not seem very likely in a biological system and it is not necessarily the best option for any artificial system that aspires to human - like capabilities in vision . as mentioned above , the visual cortex in mammals is populated by large numbers of ` complex ' neurons , each one of which responds to an ` edge ' , ` slit ' , or ` line ' , at a particular orientation . there is a good coverage of different angles within each small area ( see , for example , * ? ? ? * chapter 9 ) . these observations suggest that , in natural vision , the orientation of any edge may be encoded quite simply and directly in terms of the corresponding type of neuron , and likewise in an artificial system . a sequence of such codes would describe both the orientation and length of a line but it would contain the same kind of redundancy as is discussed in section [ recursive_run - length_coding_section ] . so we may guess that , in natural vision , some kind of run - length coding may operate , reducing the redundancy within the body of the line and preserving information where the repetition stops at the points where the line begins and where it ends . some relevant evidence comes from studies showing the existence of ` end stopped ' hypercomplex cells that respond selectively to a bar of a defined length , or a corner ( see , for example , * ? ? ? * pp 216 - 217 ) .
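as a concrete footnote to the run - length coding idea that runs through this section , the sketch below applies plain run - length coding to an invented row of black ( ` 1 ' ) and white ( ` 0 ' ) pixels : uniform runs collapse to a symbol plus a count , so only the boundaries between uniform regions survive .

....
# hedged sketch : plain run - length coding of a row of pixels ,
# keeping one symbol and a count per uniform run .
from itertools import groupby

row = "00000000" + "11111111111" + "00000"  # invented pixel row
encoded = [(symbol, len(list(run))) for symbol, run in groupby(row)]
print(encoded)  # [('0', 8), ('1', 11), ('0', 5)]
....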
in keeping with attneave s remarks quoted earlier , we may guess that , in mammalian vision , the orientation and length of an edge , slit or line , is to a large extent encoded via neurons that record the beginning and end of the line and any associated corners . orientation - sensitive neurons would provide the input for this ` higher ' level of encoding . in artificial systems , this kind of coding may in principle be done within the multiple alignment framework , as outlined in section [ recursive_run - length_coding_section ] . as before , existing techniques may provide stop - gap solutions . readers may , with some justice , object that real visual data is rarely as clean as the example in figure [ recursive_run - length_coding_figure ] may suggest . most areas are some shade of grey , not purely black or purely white , and there are likely to be blots and smudges of various kinds . what appears to be a promising answer to this kind of problem is that the sp system is designed to search for optimal solutions and is not unduly disturbed by errors of omission , commission and substitution . there is more on this topic in section [ noisy_data_recognition_section ] ( see also section [ noisy_data_learning_section ] ) . in some respects , object recognition is like parsing in natural language processing ( see , for example , * ? ? ? * ; * ? ? ? * ) . since the sp system works well in parsing , as outlined in section [ multiple_alignment_section ] , it may also prove useful in computer vision . naturally , it would be necessary for the sp machine to have been generalised to work with patterns in two dimensions . and in this discussion we shall assume that low - level perceptual features have been identified , and that they may be treated as atomic symbols , in accordance with the sp theory . figure [ object_recognition_figure ] shows schematically how someone s face , with their ears , may be parsed within the multiple alignment framework . row 0 in the figure contains a new pattern representing incoming information . each part has been aligned with an old pattern representing stored knowledge of the structure of an ear , an eye , etc . and these are aligned with a pattern in row 2 representing the higher - level structure of someone s head . although this is schematic , i believe the approach has potential , as described in the following subsections . contrary to the impression one might gain from figure [ object_recognition_figure ] , the sp system is quite robust in the face of errors . this is illustrated in figure [ noisy_data_recognition_figure ] where the new pattern in row 0 is the same sentence as in figure [ kittens_figure ] but with the omission of the ` w ' in ` t w o ' , the substitution of ` m ' for ` n ' in ` k i t t e n s ' , and the addition of ` x ' within the word ` p l a y ' . despite these errors , the best multiple alignment created by the sp62 model is , as shown , the one that we judge intuitively to be ` correct ' . this kind of ability to cope gracefully with noisy data is really essential in any system which aspires to explain or emulate our ability to recognise things despite fog , snow , falling leaves , or other things that may obstruct our view .
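to make the point concrete , the sketch below uses a simple similarity measure in place of the sp62 multiple alignment machinery . the corrupted sentence follows the figure ( omission of ` w ' , substitution of ` m ' for ` n ' , addition of ` x ' ) ; the second stored pattern is invented for contrast .

....
# hedged sketch : recognition despite noise via a crude similarity
# score rather than the sp62 machinery .
import difflib

stored = ["t w o k i t t e n s p l a y",
          "t w o p u p p i e s r u n"]   # second pattern is invented
noisy = "t o k i t t e m s p l a x y"    # omission , substitution , addition
best = max(stored,
           key=lambda s: difflib.SequenceMatcher(None, s, noisy).ratio())
print(best)  # the ` kittens ' pattern still wins
....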
in general terms , the reason that the sp models can cope with noisy data is that they search for optimal solutions , without relying on the presence or absence of any particular feature or combination of features . a strength of the multiple alignment concept is that it provides a simple but effective vehicle for the representation and processing of part - whole hierarchies , class hierarchies , and their integration . recognition of an entity in terms of its parts is illustrated rather simply in figure [ object_recognition_figure ] and more realistically in figure [ kittens_figure ] . in the latter case , the sentence is divided into a noun phrase and a main verb , the noun phrase is divided into a determiner and a noun , and the noun contains the root or stem , ` k i t t e n ' , with the plural suffix , ` s ' . continuing with the feline theme ( but not illustrated here ) is the way that , in the multiple alignment framework , a cat may be recognised at several levels of abstraction : as an animal , as a mammal , as a cat , and as a specific individual , say ` tibs ' ( * ? ? ? * figure 6.7 ) . the framework also provides for the representation of heterarchies or cross classification : a given entity , such as ` jane ' ( or a class ) , may belong in two or more higher - level classes that are not themselves hierarchically related , such as ` woman ' and ` doctor ' . the way that part - whole relations and class - inclusion relations may be combined in one multiple alignment is illustrated in figure [ class_part_plant_figure ] ( laid out so that it fits more easily on the page ) . here , some features of an unknown plant are expressed as a set of new patterns , shown in column 0 : the plant has chlorophyll , the stem is hairy , it has yellow petals , and so on . from this multiple alignment , we can see that the unknown plant is most likely to be the meadow buttercup , _ ranunculus acris _ , as shown in column 1 . as such , it belongs in the genus _ ranunculus _ ( column 6 ) , the family _ ranunculaceae _ ( column 5 ) , the order _ ranunculales _ ( column 4 ) , the class _ angiospermae _ ( column 3 ) , and the phylum _ plants _ ( column 2 ) . each of these higher - level classifications contributes information about attributes of the plant and its division into parts and sub - parts . for example , as a member of the class _ angiospermae _ ( column 3 ) , the plant has a shoot and roots , with the shoot divided into stem , leaves , and flowers ; as a member of the family _ ranunculaceae _ ( column 5 ) , the plant has flowers that are ` regular ' , with all parts ` free ' ; as a member of the phylum _ plants _ ( column 2 ) , the buttercup has chlorophyll and creates its own food by photosynthesis ; and so on . of course , this example does not describe the visual appearance of an object . but it should be apparent that this system , when it has been generalised to work with patterns in two dimensions , has potential as a means of representing and processing both the parts and sub - parts of an object s image , and how that information relates to any hierarchy of classes to which that object belongs . and each of those two types of hierarchy is a very effective means of expressing visual information in a compressed form . scene analysis may also be viewed as a kind of parsing ( see , for example , * ? ? ? * ) .
for the analysis of a seascape , for example , there may be a high - level structure recording the kinds of things that one sees in a typical seascape ( sea , beach , rocks , boats , and so on ) , with a more detailed description for each one of those things . there seem to be two main complications in scene analysis : * any one thing may be partially obscured by another . in our seascape , a boat may be partially obscured by , for example , waves , sea birds , or members of the crew . * the locations of things may be quite variable . a boat may be in the sea or on the beach ; people can appear almost anywhere ; and so on . of course , people cope easily with both those things , but there may be a problem with ` naive ' kinds of parsing system . the sp framework may accommodate these aspects of scene analysis in three main ways : * as we saw in section [ noisy_data_recognition_section ] , parsing can be done successfully despite errors of omission , commission , or substitution . thus there is reason to believe that , when the sp models have been generalised to work with patterns in two dimensions , an object may be recognised even if it is partially obscured . * the variability of scenes is broadly similar to the variability of sentences in natural language . artificial parsing systems , including the sp system , can cope with that variability by providing information about a wide variety of types of sentences and phrases , including recursive forms such as _ this is the man all tattered and torn that kissed the maiden all forlorn that milked the cow with the crumpled horn ... _ . the same principles may be applied to vision . * where existing knowledge can not cope , the system may learn as discussed in section [ unsupervised_learning_section ] , next . it is clear that learning is an integral part of vision since vision is an important means of gaining new information about the world . and it is clear that , in general , we learn via vision in a manner that is ` unsupervised ' in the sense that it does not require the intervention of a ` teacher ' , or the provision of ` negative ' samples , or the grading of samples from simple to complex ( _ cf ._ ) . we take in information through our eyes ( and other senses ) and try to make sense of it as best we can . in this section , we consider unsupervised learning as it has been developed in the sp framework , and how it may be applied in vision . but as background for what follows we first look at the ` donsvic ' principle in unsupervised learning . in our dealings with the world , certain kinds of structures appear to be more prominent and useful than others : in natural languages , there are words , phrases and sentences ; we understand the visual and tactile worlds to be composed of discrete ` objects ' ; and conceptually , we recognise classes of things like ` person ' , ` house ' , ` tree ' , and so on . it appears that these ` natural ' kinds of structure are significant in our thinking because they provide a means of compressing sensory information , and that compression of information provides the key to their learning or discovery . at first sight , this looks like nonsense because popular programs for compression of information , such as those based on the lzw algorithm , or programs for jpeg compression of images , seem not to recognise anything resembling words or objects . but those programs are designed to work fast on low - powered computers .
with other programs that are slower but more thorough , natural structures can be revealed : * figure [ discovery_of_words ] shows part of a parsing of an unsegmented sample of natural language text created by the mk10 program using only the information in the sample itself and without any prior dictionary or other knowledge about the structure of language . although all spaces and punctuation had been removed from the sample , the program does reasonably well in revealing the word structure of the text . statistical tests confirm that it performs much better than chance . * the same program does quite well ( significantly better than chance ) in revealing phrase structures in natural language texts that have been prepared , as before , without spaces or punctuation but with each word replaced by a symbol for its grammatical category . although that replacement was done by a person trained in linguistic analysis , the discovery of phrase structure in the sample is done by the program , without assistance . * the snpr program for grammar discovery can , without supervision , derive a plausible grammar from an unsegmented sample of artificial language , including the discovery of words , of grammatical categories of words , and the structure of sentences . a key feature of both the mk10 program and the snpr program is compression of information by the matching and unification of patterns . but much the same can be said of ordinary ` utility ' programs for data compression . what is distinctive about the mk10 and snpr programs is that they are designed to search through what is normally a wide variety of alternative ways in which patterns may be matched and unified , and to select those patterns or sets of patterns that yield relatively high levels of compression . it seems likely that the principles that have been outlined in this subsection may be applied not only to the discovery of words , phrases and grammars in language - like data but also to such things as the discovery of objects in images , and classes of entity in all kinds of data . these principles may be characterised as ` the discovery of natural structures via information compression ' , or ` donsvic ' for short . although the sp theory has grown out of my earlier work on the unsupervised learning of language , the mk10 and snpr models are not well suited to the goal of simplifying and integrating concepts across several different aspects of intelligence . it has been necessary to develop a radically new conceptual framework , with the sp concept of multiple alignment at centre - stage . but information compression and the donsvic principles are as important in the new conceptual framework as they were before . as mentioned in section [ computer_models_section ] , the sp70 model works by creating multiple alignments , deriving old patterns from the multiple alignments , evaluating sets of newly - created old patterns in terms of their effectiveness for the economical encoding of the new information , and weeding out low - scoring sets . the first two of those processes are illustrated schematically in figure [ unsupervised_learning_figure ] . as mentioned earlier , the sp system is conceived as an abstract system that , like a brain , may receive ` new ' information via its senses and store some or all of it as ` old ' information . we may think of the ` brain ' as that of a baby listening to what people are saying . let s imagine that he or she hears someone say `` t h a t b o y r u n s.
'' if the baby has never heard anything similar , then , if it is stored at all , that new information may be stored as a relatively straightforward copy , something like the old pattern shown in row 1 of the multiple alignment in part ( a ) of the figure . now let us imagine that the information has been stored and that , at some later stage , the baby hears someone say `` t h a t g i r l r u n s '' . then , from that new information and the previously - stored old pattern , a multiple alignment may be created like the one shown in part ( a ) of figure [ unsupervised_learning_figure ] . and , by picking out coherent sequences that are either fully matched or not matched at all , four putative words may be extracted : ` t h a t ' , ` b o y ' , ` g i r l ' , and ` r u n s ' , as shown in the first four patterns in part ( b ) of the figure . in addition , a fifth pattern may be created , as shown in the figure , that records the sequence ` t h a t ... r u n s ' , with the category ` c # c ' in the middle representing a choice between ` b o y ' and ` g i r l ' . this is the beginnings of a grammar to describe that kind of phrase . this example shows how old patterns may be derived from a multiple alignment but it gives a highly misleading impression of how the sp70 model actually works . in practice , the program forms many multiple alignments that are much less tidy than the one shown and it creates many old patterns that are clearly ` wrong ' . however , the program contains procedures for evaluating candidate sets of patterns and weeding out those that score badly in terms of their effectiveness for encoding the new information economically . out of all the muddle , it can normally abstract one or two ` best ' grammars and these are normally ones that appear intuitively to be ` correct ' , or nearly so . as was mentioned in section [ computer_models_section ] , the sp70 model has two main weaknesses as it stands now : it does not learn intermediate levels in a grammar or discontinuous dependencies of the kind mentioned in section [ multiple_alignment_section ] . but i believe some reorganisation of the model would solve both problems and greatly enhance the model s capabilities . as with the structures of natural language , it is clear that we have to learn the structures that are significant in vision , including objects . some insights into how this may be done may be gained from a consideration of random - dot stereograms like the one shown in figure [ stereogram_figure_1 ] . here , each of the two images is a random array of black and white pixels , with no discernible structure . but there is a relationship between them , as shown in figure [ stereogram_figure_2 ] : both images are the same except that a square area near the middle of the left image is further to the left in the right image .
[ figure stereogram_figure_1 : reproduced from , with permission of lucent technologies inc./bell labs . ] when these images are viewed in a stereoscope , the central square appears as a discrete object suspended above the background . the focus of interest here will be on how we come to see that discrete object , while possible implications for our understanding of depth perception are discussed in section [ space_depth_section ] . a little analysis shows that seeing the central square means finding an alignment between pixels in the left image and pixels in the right image , that there are many alternative such alignments , and that some are better than others . one solution is the algorithm developed by . another solution , potentially , is the kind of processing that builds multiple alignments in the sp models , but generalised for two dimensions . as noted in section [ computational_complexity_section ] , the complexity of the matching problem can , in general , be reduced by applying constraints to the process of searching and thus reducing the size of the search space . figure [ julesz_analogue_figure ] shows how the sp62 model can solve a one - dimensional analogue of the stereo matching problem . here , the old pattern ( row 1 ) may be seen as an analogue of the left image and the new pattern ( row 0 ) may be seen to stand in for the right image . both patterns have been prepared from a random sequence of digits ( from a source whose results are said to be better than those of pseudo - random number algorithms because `` atmospheric noise '' is the source of randomness ) , with a displacement of the middle section , much as in figure [ stereogram_figure_2 ] . this multiple alignment is the best of several different multiple alignments created by the sp62 model with those two patterns . in the figure , one can see how the central sequence of 10 integers ( analogous to the central square in figure [ stereogram_figure_2 ] ) has been isolated from the ` background ' sequences to the left and right , and this despite repetitions of integers in both patterns and the formation of plenty of ` wrong ' alignments on the route to the ` correct ' result . it seems likely that the processes can be generalised to work with patterns in two dimensions . the kinds of processing just described may also be applied to objects in motion . consider , for example , a flatfish with a sandy , speckled colouration , lying on a sandy and speckled area on the bed of the sea . such a creature would be very well camouflaged but with one proviso : it must stay still . as soon as it moves , it will become very much easier to see . apart from the motion itself , an important reason seems to be that movement creates two images ( or more ) , rather like the two images in a random - dot stereogram . and by a process of matching , much as described above , a predator or other observer will be able to see the fish standing out as a distinct entity with distinct boundaries , like the square that can be seen when the two images in figure [ stereogram_figure_1 ] are viewed in a stereoscope . more generally , we see any object in motion ( such as a car travelling along a road ) as a single entity , not a multitude of images like the frames in a video or film . in all such cases , we merge the many instances into one . the process of merging those many instances , which is likely to yield high levels of compression , requires a process of matching and unification , much as before .
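as a hedged illustration of that matching step , the sketch below builds a one - dimensional pair of ` images ' in which a middle block is displaced , and scores every relative shift by how many symbols it brings into agreement ; the lengths and the displacement are assumptions of the sketch , not values from the paper . the flanking background votes for a shift of zero while the displaced block votes for its own shift , which is how the block can be isolated from its background .

....
# hedged one - dimensional analogue of stereo matching : a middle block
# of a random digit sequence is displaced by three positions , and each
# relative shift is scored by the number of agreements it produces .
import random

random.seed(1)
bg = [random.randint(0, 9) for _ in range(40)]     # shared background
block = [random.randint(0, 9) for _ in range(10)]  # the ` square '
left = bg[:15] + block + bg[25:]                   # block at 15 .. 24
right = bg[:12] + block + bg[22:]                  # block at 12 .. 21
for shift in range(-5, 6):
    hits = sum(1 for i in range(len(right))
               if 0 <= i + shift < len(left) and right[i] == left[i + shift])
    print(shift, hits)  # peaks at shift 0 ( background ) and + 3 ( block )
....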
the processes of matching and unification just described serve to define the boundaries of the entity and to distinguish it from the background . if we only ever see parts of an object ( perhaps a rare creature in its natural habitat , glimpsed only fleetingly ) , we can nevertheless develop a coherent concept of the whole object via alignments amongst the fragmentary views : `` a b ` ' may be aligned with `` b c ` ' and unified to create `` a b c ` ' ; `` c d ` ' may be aligned with `` d e ` ' to create `` c d e ` ' ; `` a b c ` ' may be aligned with `` c d e ` ' ... , and so on . this is like the ` sequence assembly ' technique in bioinformatics , or the stitching together of overlapping photos to create a panorama . and the matching may be achieved via multiple alignment , as developed in the sp theory . similar things may be said about the learning of everyday concepts like ` person ' or ` house ' , or the more formal botanical categories shown in figure [ class_part_plant_figure ] . if , for example , we see one thing with the characteristics `` a b c f l m n p x y z ` ' and another with the characteristics `` a b c g l m n q x y z ` ' , we may create a unified pattern like this : `` a b c 1 # 1 l m n 2 # 2 x y z ` ' , with the patterns `` 1 f # 1 ` ' , `` 1 g # 1 ` ' , `` 2 p # 2 ` ' , and `` 2 q # 2 ` ' , to fill in the slots . the unified pattern may be seen to represent the class of things with the characteristics `` a b c ... l m n ... x y z ` ' . this example is , of course , rather similar to the example shown in section [ unsupervised_learning_section ] . that similarity is not accidental . it derives from the principle , which is a key part of the sp theory , that , with compression of information via the multiple alignment framework , all kinds of knowledge may be represented economically with sp patterns . and it is consistent with the long - established idea that there may be a syntax for images , not just natural languages ( see , for example , * ? ? ? * ) , and with the previously - mentioned idea that object recognition and scene analysis may each be seen as a form of parsing ( section [ object_scene_section ] ) . there is potential with this kind of learning to create structures that are quite subtle and expressive . despite its limitations , the sp70 model can already discover grammatical structures with alternatives everywhere , and without any fixed elements as in `` a b c ... l m n ... x y z ` ' . it is envisaged that , with the kind of reorganisation mentioned earlier , the system should be able to discover structures that express part - whole hierarchies and class - inclusion hierarchies , both of them with multiple levels , and to abstract discontinuous dependencies in data of the kind mentioned in section [ multiple_alignment_section ] .
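the `` a b ` ' plus `` b c ` ' example above can be made concrete with a small sketch of greedy overlap merging , in the manner of sequence assembly ; real assemblers , and the sp models , are far more careful about ambiguity and error than this .

....
# hedged sketch : building a coherent whole from overlapping fragments
# by repeatedly merging the pair with the largest suffix / prefix overlap .
def overlap(a, b):
    # length of the longest suffix of a that is a prefix of b
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

fragments = [list("abc"), list("cde"), list("bcd")]
while len(fragments) > 1:
    i, j, k = max(((i, j, overlap(fragments[i], fragments[j]))
                   for i in range(len(fragments))
                   for j in range(len(fragments)) if i != j),
                  key=lambda t: t[2])
    merged = fragments[i] + fragments[j][k:]
    fragments = [f for n, f in enumerate(fragments) if n not in (i, j)]
    fragments.append(merged)
print("".join(fragments[0]))  # abcde
....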
as was noted in sections [ noisy_data_low - level_features_section ] and [ noisy_data_recognition_section ] , visual information is normally ` noisy ' in the sense that , compared with any stored information , it is likely to contain errors of omission , commission , or substitution , in any combination . as shown in figure [ noisy_data_recognition_figure ] , the sp system has a capacity to cope with these kinds of errors , at least in tasks like parsing , recognition , or scene analysis . what about learning ? how can any system learn ` correct ' structures from noisy data in an ` unsupervised ' manner and without any help from a ` teacher ' , or from examples that are marked as ` wrong ' , or from anything else of that kind ? this is not merely an issue in vision . it also arises in connection with language learning , as illustrated in figure [ generalisation_figure ] . [ figure generalisation_figure : in ascending order of size , the envelopes are : the finite sample of utterances from which a child learns ; the ( infinite ) set of utterances in the language ; and the ( infinite ) set of all possible utterances . adapted from figure 7.1 in , with permission . ] when we learn our first language or languages , we learn from what we hear : a finite sample of language , shown as the smallest envelope in the figure . but there are two apparent problems : * how we learn despite what is marked in the figure as ` dirty data ' : sentences that are not complete , false starts , words that are mis - pronounced , and more . * how we generalise from the finite sample represented by the smallest envelope to a knowledge of the language corresponding to the middle - sized envelope , without overgeneralising into the region between the middle envelope and the outer one . one possible answer is that mistakes are corrected by parents , teachers , and others . but the weight of evidence is that children can learn their first language without that kind of assistance . an alternative answer favoured here is that information compression provides the key : * any particular error is , by its nature , rare and so in the search for useful patterns ( which , other things being equal , are the more frequently - occurring ones ) , it is discarded along with many other candidate structures . * as a general rule , the highest levels of compression can be achieved with grammars that represent moderate levels of generalisation , neither too little nor too much . in practice , the mk10 and snpr programs have been found to be quite insensitive to errors ( of omission , addition , or substitution ) in their data . and the snpr program has been shown to produce plausible generalisations , without over - generalising . since the principles are general , it seems likely that visual learning within the sp framework may be achieved in the face of noisy data . as mentioned earlier , it is envisaged that , in the sp theory , all kinds of knowledge will be represented with patterns in one or two dimensions . superficially , this seems to rule out anything with more dimensions , and suggests that there might be a need to introduce patterns with three dimensions and possibly more . however , this has been rejected , at least for the time being , for these main reasons : * although the multiple alignment concept may in principle be generalised to patterns in three or more dimensions , it is difficult to see how it could be made to work in practice and it looks implausible as a model for any kind of structure or process in the brain .
* a tentative part of the sp theory is the idea that the cortex of the brains of mammals ( which is , topologically , a two - dimensional sheet ) may be , in some respects , like a sheet of paper on which ` pattern assemblies ' ( neural analogues of sp patterns ) may be written ( chapter 11 ) , as shown schematically in figure [ class_part_figure ] . * if we exclude processes of interpretation in terms of harmonics , colours , or the like , raw sensory data may be seen to come in either one dimension ( e.g. sound ) or two ( e.g. visual images ) . * three - dimensional structures may be represented with patterns in two dimensions , somewhat in the manner of architects ' drawings ( section 13.2.2 ) . with the development of mathematical concepts within the sp framework ( chapter 10 ) , four or more dimensions may be represented in much the same way as is done now with mathematical techniques . this and the following two subsections consider some aspects of the visual perception of space and depth , and whether or how the sp theory may be applied . if an object is viewed from several different angles , with overlap between one view and the next ( as illustrated in figure [ 3d_object_views_figure ] ) , the several views may be stitched together to create what is at least a partial and approximate 3d model of the object . this is similar to the piecing together of fragments to create a coherent concept , as outlined in section [ concepts_from_fragments_section ] . as before , it may be achieved via multiple alignment as that concept has been developed in the sp theory . the model will be partial if , for example , it excludes views from above or below . and it is likely to be approximate because a given set of views may not be sufficient for an unambiguous definition of the object 's geometry : there may be variations in the shape that would be compatible with the given set of views . do these deficiencies matter ? for many practical purposes , the answer is likely to be `` no '' . if we want a rock to put in a rockery , or a stick to throw for a dog , the exact shape is not important . and if we want more accurate information , we can inspect the object more closely , or supplement vision with touch . evidence that people do something like what has been described is our ordinary experience that things can be harder to recognise from unfamiliar viewpoints than from familiar ones , which is the basis of some trick photos . that observation is confirmed in experimental studies showing that people are both slower at recognising things , and less accurate , when the viewpoint is unfamiliar . although what has been described is like the stitching together of overlapping photos to create a panorama , the sp theory suggests that , with people , the visual information would be compressed via the encoding , within the sp system , of part - whole relations , class - inclusion relations , and other kinds of regularities ; there is potential for the sp system to yield higher levels of compression and more natural structures than simple stitching . that compression can be of benefit in both natural and artificial systems , as indicated in section [ compression_efficiency_prediction_section ] . similar processes may be at work when we move around in our environment and learn about it . successive views that overlap each other may be stitched together , as before , to create a model of the streets or other places where we have been . this is essentially what has been and is being done with google 's ` street view ' .
the main difference between what has been achieved with street view and what is envisaged for the sp system is that , in the latter case , visual information would be compressed via the mechanisms in the sp system , as noted in section [ three_dimensional_objects_section ] . as with objects ( section [ three_dimensional_objects_section ] ) , a model of our environment that is created via overlapping views may not be geometrically precise . but , as before , some ambiguity may not matter very much for many practical purposes . topological maps , such as the classic map of the london underground , can be quite good enough for finding one 's way around . however , if greater geometric accuracy is needed , it may be increased by gathering more information , especially information about areas between roads , paths or other routes . in connection with finding one 's way around , the sp system may be relevant in two ways : * if a robot has stored representations of one or more places , perhaps compressed via recurrent patterns as indicated in section [ compression_efficiency_prediction_section ] , then , via the building of multiple alignments ( as in section [ object_scene_section ] ) , it should be able to recognise when it has reached one of those places , using incoming visual information as new patterns and stored knowledge as old patterns . if it has stored information about an entire route or network of routes , then , within that environment , it should be able to identify where it is at any time . similar things may be true of people . * with an appropriate set of old patterns , each one of which represents a direct connection between two places , the sp system , via the building of multiple alignments , can work out one or more routes between any two of the relevant places , including routes via two or more of the direct connections ( chapter 8 ) . the example in figure [ planning_figure ] shows one such flying route between beijing and new york .
....
0         1           2           3       4
beijing ------------------------- beijing
melbourne - melbourne
cape_town ------------- cape_town
paris ---- paris
new_york - new_york
0         1           2           3       4
....
these points about how we may build a model of our environment and find our way around relate to the topic of ` simultaneous localization and mapping ' ( slam ) in robotics .
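the route - finding idea can be conveyed with a toy sketch : each ` old pattern ' records a direct connection between two places , and chaining patterns end - to - start corresponds , very loosely , to the building of multiple alignments in the sp models . the connection list below is adapted from the flying - route figure ; the breadth - first search is our own stand - in , not sp70 's algorithm .

....
# toy route finder over 'direct connection' patterns (breadth-first search).
# chaining patterns end-to-start is a crude stand-in for multiple alignment.
from collections import deque

connections = [("beijing", "melbourne"), ("melbourne", "cape_town"),
               ("cape_town", "paris"), ("paris", "new_york")]

def routes(start, goal, links):
    """yield all simple routes from start to goal via direct connections."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            yield path
            continue
        for a, b in links:
            if a == path[-1] and b not in path:
                queue.append(path + [b])

for r in routes("beijing", "new_york", connections):
    print(" - ".join(r))   # beijing - melbourne - cape_town - paris - new_york
....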
without attempting a comprehensive discussion of the complex subject of depth perception , this section offers some thoughts about stereoscopic vision , and the possible relevance of the sp theory . for any given object that we are looking at , we can in principle work out its distance by a process of triangulation like that which has been widely used in cartography , at least as it used to be . but there appear to be snags : * for this mechanism to work with reasonable accuracy , it would be necessary for one to have a rather accurate sense of the direction of gaze for each eye and the angle between that direction of gaze and the line between the two eyes . it seems unlikely that we can sense the positions of our eyes with the necessary accuracy . * there is evidence that , with the ames distorted room illusion , the illusion persists when people view the room with two eyes , although , in that case , the effect may be reduced . this suggests that any information about distance that may be gained via triangulation is not sufficiently clear or precise to overcome viewers ' preconceptions that the room has the conventional rectangular form . * triangulation cannot work with a stereoscope or a 3d film because what we are looking at is all at one distance , with nothing to differentiate one part of the picture from another . the spear which makes us jump as we see it coming towards us out of a 3d film is no closer to us than anything else in the film . we cannot rule out triangulation altogether , it may have a role in some situations , but some other mechanism is needed to explain how we see depth with a stereoscope or a 3d film . with random - dot stereograms , it is clear that our brains are capable of forming an alignment between the left and right images that is good enough to identify the displaced area in the middle as a discrete entity ( section [ discovery_via_stereo_matching_section ] ) . by identifying the displaced area and distinguishing it from the surrounding area , we may also gain an accurate knowledge of the size of the displacement . how can the size of the displacement tell us about depth ? there are at least three possible answers ( which are not necessarily mutually exclusive ) : * for any given displacement , our brains perform a geometrical calculation of what that displacement implies about relative distances , between the observer and the perceived object , and between the perceived object and the background . * we are born with knowledge that is , in effect , a table of associations between displacements and distances . * we learn those kinds of associations from experience . that learning is important is suggested by the powerful influence of our experience ( of rectangular rooms ) in the ames room illusion . building up a knowledge of associations is part of what the sp system is designed to achieve .
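for orientation , the geometry that relates a stereo displacement to depth is simple , whichever of the three candidate mechanisms realizes it : for two eyes ( or cameras ) with baseline b and focal length f , a feature matched in the two images with horizontal displacement ( disparity ) d lies at distance z = b f / d . the sketch below is that textbook relation with made - up numbers ; it is not a claim about how the sp system computes depth .

....
# textbook depth-from-disparity relation: z = b * f / d
# (b = baseline between the two eyes/cameras, f = focal length,
#  d = horizontal displacement of the matched feature, consistent units)

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / disparity_px

# illustrative numbers only: 6.5 cm baseline, 1500 px focal length
for d in (50.0, 25.0, 10.0):
    print(d, depth_from_disparity(0.065, 1500.0, d))   # larger d -> nearer
....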
the sp theory has things to say about some other aspects of vision , as discussed in the following subsections . as noted in section [ low_level_features_section_section ] , we often ` see ' things that are not objectively present in what we are looking at . we may see ` subjective contours ' in certain kinds of images , or we may see the edge of a leaf where it overlaps another leaf , despite there being little or nothing to mark the boundary . the multiple alignment in figure [ kittens_figure ] provides an example of how the sp system may accommodate these kinds of things . here , the new pattern is the sentence ` t w o k i t t e n s p l a y ' with nothing to mark the boundary between one word and the next . but those boundaries are clearly marked via the parsing of the sentence into its constituent parts . more generally , we infer things that are not immediately visible : when we see the unbroken shell of a hazel nut , we expect to find an edible kernel inside ; when we see a horse partially obscured by a tree , we expect to see the whole animal when it moves into full view ; and so on . this kind of inference is an integral part of how the sp system works . in figure [ noisy_data_recognition_figure ] , the word ` t w o ' appears in the new pattern as ` t o ' , but the parsing interpolates the missing ` w ' . in figure [ class_part_plant_figure ] , the rather sketchy information in column 1 is extended via the information in columns 1 to 6 : we can infer that the plant photosynthesises ( column 2 ) , that it has five petals ( column 6 ) , that it is poisonous ( column 5 ) , and so on . a prominent feature of natural vision is that we can recognise something despite wide variations in viewing distance and corresponding variations in the size of the retinal image . although this phenomenon is not consistent with any simple pattern - matching model of vision , it appears that it can be accommodated within the sp theory . let us suppose that , as described in section [ recursive_run - length_coding_section ] , the image to be processed is reduced to a ` primal sketch ' , showing boundaries between uniform areas but without the redundancy within those areas . for any given scene , the effect of that processing will be to reduce or eliminate variations in the size of the original image . the primal sketch that is derived from a large version of the scene will be much the same as the primal sketch that is derived from a small version . any residual variations in size , or noise in the image , may be overcome by the flexibility of the matching process in the sp system ( section [ computer_models_section ] ) and by the system 's ability to tolerate noise ( sections [ noisy_data_low - level_features_section ] , [ noisy_data_recognition_section ] , and [ noisy_data_learning_section ] ) . another prominent feature of natural vision is ` lightness constancy ' : the fact that , normally , we perceive the lightness of an object to be fixed , despite wide variations in the incident light and corresponding variations in the amount of light that is reflected from the object ( its ` luminance ' ) . we would normally see a lump of coal as black and snow as white , even though the coal in bright sunlight may be reflecting more light per unit area than snow in shadow . in order to account for this phenomenon , it seems necessary to suppose that , for each kind of object , we maintain some kind of table of associations between levels of illumination and corresponding values for luminance . since we are unlikely to have an inborn knowledge of coal , snow , and the like , we must suppose that those tables are learned . as noted in section [ alternatives_section ] , learning associations of that kind is part of what the sp system is designed to achieve . notice that any given table can only be applied if we have some idea of what kind of object we are looking at , otherwise we might see coal as if it was snow , or _ vice versa _ . there is some evidence that our perception of the lightness of an object does indeed depend on what we think the object is ( chapter 16 ) .
in a similar way , our judgements of lightness seem to depend on our perceptions of how a given object is illuminated ( figure 1.10 ) . it seems likely that much of what has been said in this section about lightness constancy would also apply to colour constancy : the way we see the colour of an object to be fixed , despite wide variations in the colour of the incident light and corresponding variations in the colour of the light that is reflected from the object . since information compression is central in the sp theory , it is pertinent to mention that lightness constancy and colour constancy may each be seen as a means of encoding information economically . it is simpler to remember that a particular object is ` black ' or ` red ' than all the complexity of how its appearance changes in different lighting conditions . it is often remarked that we recognise things more easily in their familiar contexts than in unfamiliar ones , and this is confirmed in formal studies . this observation makes sense in terms of the sp framework because any part of a multiple alignment may be a context for any other , and because of the way the system searches for a global optimum which embraces any given entity and its context . if , in our seascape example ( section [ scene_analysis_section ] ) , we see a beach and the sea then , in effect , we are primed to see boats because , in that context , boats are likely to yield multiple alignments with better scores than , say , office furniture . a less common observation is that , with some kinds of image , there is more than one plausible interpretation . an example is the ` young woman / old woman ' picture of psychology textbooks . in the sp framework , this kind of ambiguity is accommodated in the way that , with some kinds of data , the system may create two or more multiple alignments that have good scores . an example in the area of natural language processing is the way the sp62 model can produce two parsings corresponding to both readings of the ambiguous sentence _ fruit flies like a banana _ , as shown in ( figure 5.1 ) . it is clear that in people and other animals , vision does not stand alone but works in close association with other senses . our concept of a ship , for example , is an amalgam of images , sounds , smells , the flavour of food on board , textures of different surfaces , and so on . in a similar way , vision works closely with other aspects of intelligence : different kinds of reasoning , learning , understanding and producing natural language , recalling information , and non - visual kinds of recognition . achieving these kinds of integration without undue complexity has been a central aim in the development of the theory .
and in that development , many candidate ideas have been rejected because they did not help to promote the simplification and integration of concepts . to the extent that the theory achieves a combination of simplicity with versatility , it is down to three main things : representing all kinds of knowledge with ` patterns ' ; the multiple alignment concept as it has been developed in the sp theory ; and the overarching role of information compression via the matching and unification of patterns . despite some limitations in how the sp theory is currently realised in computer models , it has what i believe are some useful things to say about several aspects of vision : * low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in a manner that is analogous to the run - length encoding technique for information compression , and comparable with the effect of lateral inhibition in the visual systems of animals . * the concept of _ multiple alignment _ in the sp theory may be applied to the recognition of objects , and to scene analysis , with a hierarchy of parts and sub - parts , and at multiple levels of abstraction . * the theory has potential for the unsupervised learning of visual objects and classes of objects , and suggests how coherent concepts may be derived from fragments . it provides an account of how we may discover objects via stereo matching and via motion . * as in natural vision , both recognition and learning in the sp system are robust in the face of errors of omission , commission and substitution . * the theory suggests how , via vision , we may piece together a knowledge of the three - dimensional structure of objects and of our environment that is good enough for many practical purposes , despite ambiguities in geometry . * the theory provides an account of how we may see things that are not objectively present in an image , and how we may recognise something despite variations in the size of its retinal image . * the theory has things to say about the phenomena of lightness constancy and colour constancy , about the role of context in recognition , and about ambiguities in visual perception . a strength of the sp theory is that it is not simply a theory of vision . it provides for the integration of vision with other sensory modalities and with other aspects of intelligence such as reasoning , planning , and problem solving . h. b. barlow . sensory mechanisms , the reduction of redundancy , and intelligence . in hmso , editor , _ the mechanisation of thought processes _ , pages 535 - 559 . her majesty 's stationery office , london , 1959 . c. farabet , c. couprie , l. najman , and y. lecun . scene parsing with multiscale feature learning , purity trees , and optimal covers . in _ proceedings of the 29th international conference on machine learning , edinburgh , scotland , uk , 2012 _ , 2012 . w. l. gehringer and e. engel . effect of ecological viewing conditions on the ames distorted room illusion . _ journal of experimental psychology : human perception and performance _ , 12 ( 2 ) : 181 - 185 , 1986 . a. glennerster , s. j. gilson , l. tcheang , and a. j. parker . perception of size in a ` dynamic ames room ' . _ journal of vision _ , 3 ( 9 ) : 490a , 2003 . doi : 10.1167/3.9.490 . bottom - up / top - down image parsing by attribute graph grammar .
in _ proceedings of the tenth ieee international conference on computer vision ( iccv 2005 ) , 17 - 21 oct . 2005 _ , volume 2 , pages 1778 - 1785 , 2005 . d. marr . _ vision : a computational investigation into the human representation and processing of visual information _ . the mit press , london , england , 2010 . this book was originally published in 1982 by w. h. freeman and company . m. j. tarr . rotating objects to recognize them : a case study of the role of viewpoint dependency in the recognition of three - dimensional objects . _ psychonomic bulletin and review _ , 2 ( 1 ) : 55 - 82 , 1995 . j. g. wolff . learning syntax and meanings through optimization and distributional analysis . in y. levy , i. m. schlesinger , and m. d. s. braine , editors , _ categories and processes in language acquisition _ , pages 179 - 215 . lawrence erlbaum , hillsdale , nj , 1988 . see : http://bit.ly/zigjyc[bit.ly/zigjyc ] . j. g. wolff . _ unifying computing and cognition : the sp theory and its applications _ . cognitionresearch.org , menai bridge , 2006 . isbns : 0 - 9550726 - 0 - 3 ( ebook edition ) , 0 - 9550726 - 1 - 1 ( print edition ) . distributors , including amazon.com , are detailed on http://bit.ly/wmb1rs[bit.ly/wmb1rs ] . the publisher and its website was previously cognitionresearch.org.uk .
the _ sp theory of intelligence _ aims to simplify and integrate concepts in computing and cognition , with information compression as a unifying theme . this article discusses how it may be applied to the understanding of natural vision and the development of computer vision . the theory , which is described quite fully elsewhere , is described here in outline but with enough detail to ensure that the rest of the article makes sense . low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in a manner that is comparable with the run - length encoding technique for information compression . the concept of _ multiple alignment _ in the sp theory may be applied to the recognition of objects , and to scene analysis , with a hierarchy of parts and sub - parts , and at multiple levels of abstraction . the theory has potential for the unsupervised learning of visual objects and classes of objects , and suggests how coherent concepts may be derived from fragments . as in natural vision , both recognition and learning in the sp system are robust in the face of errors of omission , commission and substitution . the theory suggests how , via vision , we may piece together a knowledge of the three - dimensional structure of objects and of our environment . it provides an account of how we may see things that are not objectively present in an image , and how we recognise something despite variations in the size of its retinal image . and it has things to say about the phenomena of lightness constancy and colour constancy , the role of context in recognition , and ambiguities in visual perception . a strength of the sp theory is that it provides for the integration of vision with other sensory modalities and with other aspects of intelligence . _ keywords _ : vision , information compression , artificial intelligence , perception , cognition , representation of knowledge , learning , pattern recognition , natural language processing , reasoning , planning , problem solving .
wu and vos introduce a parameter - free distribution estimation framework and utilize the kullback leibler ( kl ) divergence as a loss function . they show that the kl risk of a distribution estimator obtained from an i.i.d . sample decomposes in a fashion parallel to the mean squared error decomposition for a parameter estimator , and that an estimator is distribution unbiased , or simply unbiased , if and only if its distribution mean is equal to the true distribution . distribution unbiasedness can be defined without using any parameterization . we call this approach parameter - free even though there may be applications where it is desirable to use a particular parameterization . when the distributions are , in fact , parametrically indexed , distribution unbiasedness handles multiple parameters simultaneously and is consistent under reparametrization . wu and vos also show that the mle for distributions in the exponential family is always distribution unbiased . the kl expectation and variance functions , written e and v below , are defined by minimizing the expected kl divergence over the space \mathcal{r} of all distributions . these functions completely describe an estimator in terms of its kl divergence around any distribution . in this paper , we introduce distribution expectation and variance functions e_{\mathcal{p}} and v_{\mathcal{p}} that are defined by minimizing over a smaller space of distributions \mathcal{p} . for exponential and mixture families , the expected kl risk is a function only of these quantities . even though the focus of this paper is on parametric exponential families , our approach is parameter - free in that the definitions and results are provided without regard to the parameterization of the family . there are three advantages to this approach : one , the lack of invariance of bias across parameter transformations is avoided ; two , we can allow for estimators taking values outside of the exponential family ; three , the case where the true distribution does not belong to the family is easily addressed . section [ sec : kullback - leibler - risk - variance ] introduces the distribution expectation and variance functions and shows how these are a generalization of the mean and expectation functions for mean square error . exponential families and their extension are discussed in section [ sec : exponential - family ] . the fundamental properties of the distribution mean and variance functions allow using the ideas of rao blackwell to show that the mle is the unique uniformly minimum distribution variance unbiased estimator ( umv ) . this result is proved in section [ sec : rao - blackwell - and - the ] . three examples are given in section [ sec : examples ] and section [ sec : discussions ] contains further remarks . the parametric version of the rao blackwell theorem can be proved using a pythagorean relationship that holds for mean square error ( mse ) and the expectation operator . to prove the distribution version of the rao blackwell theorem , we use a similar relationship that holds for kl risk and the kl expectation , along with a second pythagorean relationship that holds in exponential families for kl divergence and the kl projection . basic properties of the expectation operator for real - valued random variables used in the proof can be extended to distribution - valued random variables . we begin with the property that the expectation minimizes the mse . for a ( real - valued ) random variable x and a point a\in\mathbb{r} , we can define the average behavior of x relative to a using the risk function e[\ell(x , a ) ] , where \ell is a loss function , that is , a nonnegative convex function .
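for squared error loss the familiar facts are worth recording , since the kl case below is built to parallel them : the risk of a real - valued random variable x about a point a decomposes into a bias term and a variance term , and is minimized by the mean .

....
% mse facts paralleled by the kl decomposition developed below
E\big[(X-a)^2\big] = (E X - a)^2 + \operatorname{Var}(X),
\qquad
\operatorname*{argmin}_{a\in\mathbb{R}} E\big[(X-a)^2\big] = E X,
\qquad
\min_{a\in\mathbb{R}} E\big[(X-a)^2\big] = \operatorname{Var}(X).
....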
when e[\ell(x , a ) ] < \infty , the minimizer and the minimum of the risk exist under mild conditions ; for squared error loss they are the usual mean and variance . the same construction can be applied to distribution - valued random variables . a distribution estimator \r is a random distribution : for an i.i.d . sample \mathbf{x} with joint density r_{0}^{n} , the estimator takes the value \r_{\mathbf{x}} , a probability measure that is absolutely continuous with respect to the dominating measure \lambda , that is , \r_{\mathbf{x}}\ll\lambda ; it is unique up to measure zero ( \lambda ) , and has a density \r_{\mathbf{x}}(y ) for y\in \mathbb{x} . in addition , when the fixed point is replaced with the random distribution \r and the loss \ell is replaced with a divergence d on the space \mathcal{r} of all distributions , we define e_{d}\r \defeq \operatorname{argmin}_{r_{1}\in\mathcal{r}} e[d(\r , r_{1 } ) ] and v_{d}\r \defeq \min_{r_{1}\in\mathcal{r}} e[d(\r , r_{1 } ) ] if the minimum exists , in which case v_{d}\r = e[d(\r , e_{d}\r ) ] . for kl risk , that is , when d is the kl divergence , we have

e_{d}\r = \int \r_{\mathbf{x}}(y)\,r_{0}^{n}(\mathbf{x})\,\mathrm{d}\lambda^{n}(\mathbf{x } ) \defeq e\r , \qquad ( [ eq : eequiv-1 ] )

v_{d}\r \defeq \inf_{r_{1}\in\mathcal{r}} e\bigl[d(\r , r_{1})\bigr ] = e\,d(\r , e\r ) \defeq v\r . \qquad ( [ eq : vequiv-1 ] )

the middle equalities in equations ( [ eq : eequiv-1 ] ) and ( [ eq : vequiv-1 ] ) are established in wu and vos . since these are equal when d is the kl divergence , and we consider no other divergence functions on \mathcal{r} , we will simply write e\r and v\r for the kl mean and variance . furthermore , e\r and v\r completely characterize the average behavior of the \mathcal{r}-valued random variable \r relative to any distribution because of the relationship

e\bigl[d(\r , r)\bigr ] = d(e\r , r ) + v\r \quad\quad \forall r\in\mathcal{r } . \qquad ( [ eq : klr1 ] )

this means the kl risk for an \mathcal{r}-valued random variable , having any distribution function , is completely determined by knowing its argmin , e\r , and minimum , v\r . when r = r_{0} , equation ( [ eq : klr1 ] ) gives the decomposition of the kl risk in terms of bias and variance . the relationship in ( [ eq : klr1 ] ) will not hold for general nonnegative convex functions . in this paper we only consider kl divergence . furthermore , a conditional expectation on \mathcal{r}-valued random variables can be defined so that the following conditional properties hold ,

e\r = e\bigl[e[\r|s]\bigr ] , \qquad ( [ eq : klr2 ] )

v\r = v\,e[\r|s ] + e\bigl[v(\r|s)\bigr ] , \qquad ( [ eq : klr3 ] )

where s could be \mathcal{r}-valued but could also be real or other valued , since values of s will only be used to generate sub sigma fields . let \lambda have support \mathbb{x} and let \r be an \mathcal{r}-valued random variable such that the kl mean and the kl variance exist and are finite . then for any r\in\mathcal{r} the mean divergence between \r and r depends only on the kl mean and kl variance . furthermore , the kl mean and kl variance satisfy the classical conditional equalities ( [ eq : klr2 ] ) and ( [ eq : klr3 ] ) . equation ( [ eq : klr1 ] ) follows from the definition of kl variance and theorem 5.2 in wu and vos , who show that the expected kl loss decomposes provided the relevant distributions have densities with respect to \lambda and the order of integration can be interchanged . the steps are the same as those that establish the analogous properties for mean square error : substituting e[\r|s ] into ( [ eq : v1 ] ) and using e\bigl[e[\r|s]\bigr ] = e\r gives ( [ eq : klr2 ] ) and ( [ eq : klr3 ] ) .
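equation ( [ eq : klr1 ] ) is easy to check numerically . the sketch below draws random distributions \r on a three - point sample space , takes the kl mean as the pointwise average density ( as in ( [ eq : eequiv-1 ] ) ) , and confirms that e[d(\r , r ) ] = d(e\r , r ) + v\r for an arbitrary reference distribution r ; the dirichlet sampling is an arbitrary choice made only for illustration .

....
# numerical check of E[D(Rhat, r)] = D(E Rhat, r) + V Rhat on a finite space.
import numpy as np

rng = np.random.default_rng(1)

rhat = rng.dirichlet([2.0, 3.0, 4.0], size=20000)   # random distributions
r = np.array([0.2, 0.3, 0.5])                       # arbitrary reference

e_rhat = rhat.mean(axis=0)                          # kl mean: average density
v_rhat = np.mean(np.sum(rhat * np.log(rhat / e_rhat), axis=1))  # kl variance
lhs = np.mean(np.sum(rhat * np.log(rhat / r), axis=1))          # kl risk
rhs = np.sum(e_rhat * np.log(e_rhat / r)) + v_rhat

print(lhs, rhs)   # identical up to floating-point rounding
....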
the kl mean e\r generally does not belong to a given subfamily \mathcal{p}\subset\mathcal{r} , even if \r takes values only in \mathcal{p} , so we consider an expectation that does take values in \mathcal{p} . we will define this expectation as a minimum over \mathcal{p} . we define e_{\mathcal{p}}\r \defeq \operatorname{argmin}_{p\in\mathcal{p}} e[d(\r , p ) ] and v_{\mathcal{p}}\r \defeq \min_{p\in\mathcal{p}} e[d(\r , p ) ] if the minimum exists , in which case v_{\mathcal{p}}\r = e[d(\r , e_{\mathcal{p}}\r ) ] . equation ( [ eq : klr1 ] ) now becomes

e[d(\r , p ) ] = d\bigl(e_{\mathcal{p}}\r , p\bigr ) + v_{\mathcal{p}}\r + \delta\bigl(e\r , e_{\mathcal{p}}\r , p\bigr ) \quad\quad \forall p\in\mathcal{p } , \qquad ( [ eq : klp0 ] )

where \delta(r_{1 } , r_{2 } , p ) \defeq d(r_{1 } , p ) - d(r_{1 } , r_{2 } ) - d(r_{2 } , p ) . if \delta vanishes for all p then the argmin and the min completely characterize \r in terms of kl risk . when \delta is small these functions can be used to approximate the kl risk of \r . we will show the term \delta vanishes when \mathcal{p} is an exponential family . the relationship between the expectations e\r and e_{\mathcal{p}}\r can be expressed by using the kl projection \pi onto \mathcal{p} : by equation ( [ eq : klr1 ] ) , for any p\in\mathcal{p} we have that e[d(\r , p ) ] = d(e\r , p ) + v\r , so the minimum over \mathcal{p} is attained where d(e\r , p ) is minimized , since v\r does not depend on p . these results are summarized in the following theorem . let \mathcal{p}\subset\mathcal{r} be such that the support of each p\in\mathcal{p} is \mathbb{x} , and let \r be an \mathcal{r}-valued random variable such that the distribution mean and the distribution variance exist and are finite . then for any p\in\mathcal{p} the mean divergence between \r and p is given by ( [ eq : klp0 ] ) . the term \delta measures the extent to which the kl mean , distribution mean , and p depart from forming a dual pythagorean triangle . the kl variance is less than or equal to the distribution variance , v\r \le v_{\mathcal{p}}\r , and the distribution mean is the kl projection of the kl mean onto \mathcal{p} , e_{\mathcal{p}}\r = \pi\,e\r . wu and vos show that \delta = 0 for all p when \mathcal{p} is an exponential family ; the same holds for mixture families . hence , \delta vanishes when \mathcal{p} is either an exponential family or a mixture family . while we do not know how to write e_{\mathcal{p}}\r as an integral , and the expectation property ( [ eq : klexpectationproperty ] ) does not hold for e_{\mathcal{p}} in general , we show equations ( [ eq : klr2 ] ) and ( [ eq : klr3 ] ) hold with e replaced with e_{\mathcal{p}} and v replaced with v_{\mathcal{p}} when \mathcal{p} is either an exponential or mixture family . furthermore , the expectation property will hold for e_{\mathcal{p}} when \mathcal{p} is an exponential family and t is the canonical statistic . for a general subspace \mathcal{p} , the distribution mean and distribution variance do not characterize e[d(\r , p ) ] , but when \delta vanishes the classical equalities relating conditional mean and variance hold . a standard reference for exponential families is brown , but the approach we take here is slightly different since our emphasis is on the distributions without regard to any particular parameterization . an exponential family will be defined by selecting a point p_{0} and a statistic t taking values in \mathbb{r}^{k} . the defining property of an exponential family is that for any p\in\mathcal{p} the log of the density of p with respect to p_{0} is a linear combination of t and the constant function . we start with some definitions and basic properties . \mathcal{p} is an _ exponential family on _ \mathbb{x} if there exists p_{0}\in\mathcal{p} such that the support of p_{0} is \mathbb{x} and a function t : \mathbb{x}\to\mathbb{r}^{k} such that for any p\in\mathcal{p} the density \mathrm{d}p/\mathrm{d}p_{0} is of the form \exp(\theta\cdot t - \varphi(\theta ) ) for some \theta . p_{0} is called a _ base point _ and t is called the _ canonical statistic _ of \mathcal{p} . the _ canonical parameter space _ is \theta \defeq \{\theta\in\mathbb{r}^{k } : \int\exp(\theta\cdot t)\,\mathrm{d}p_{0 } < \infty\} . without loss of generality , we can choose a base point such that p_{0}\in\mathcal{p} . we will refer to exponential families using base points that belong to the family . let \mathcal{p} be an exponential family with base point p_{0} , canonical statistic t , and canonical parameter space \theta . the _ cumulant function _ \varphi has domain \theta and is defined as \varphi(\theta ) = \log\int\exp(\theta\cdot t)\,\mathrm{d}p_{0} ; the density with respect to p_{0} for any \theta\in\theta is \exp(\theta\cdot t - \varphi(\theta ) ) . the family is _ regular _ if \theta is open and is _ full _ if it contains the distributions for all \theta\in\theta . by the factorization theorem , t is sufficient .
it will often be useful to restrict the choice of t so that it is complete for the full exponential family . a statistic t is _ complete _ for \mathcal{p} if e_{p}f(t ) = 0 for all p\in\mathcal{p} implies f(t ) = 0 a.e . the following theorem shows that the projection operator \pi on \mathcal{p} behaves like the expectation operator on \mathcal{r} ( theorem [ thm : expectation - property - on ] ) and will be used to show that the classical conditional expectation equation holds for e_{\mathcal{p}} . [ thm : expmu ] if \pi is the kl projection onto \mathcal{p} , where \mathcal{p} is an exponential family having canonical statistic t and base point p_{0} , then for any r such that \pi r exists , e_{\pi r}(t ) = e_{r}(t ) , this common value belonging to the mean parameter space of \mathcal{p} . this result follows from the relationship between the natural and expectation parameters for an exponential family . let p = \pi r for some r . then the natural parameter \theta of this distribution satisfies \nabla\varphi(\theta ) = e_{r}(t ) , and since the mean parameter \nabla\varphi(\theta ) parameterizes \mathcal{p} , the projection is determined by e_{r}(t ) . the result now follows for the exponential family by simple calculation , where d(r , p ) is minimized over \mathcal{p} at the member matching the expectation of t , by ( [ eq : dual1 - 1 ] ) . [ cor : pythagorean - property - for ] let \mathcal{p} be an exponential family and let r be such that \pi r exists . for all p\in\mathcal{p} , d(r , p ) = d(r , \pi r ) + d(\pi r , p ) . this is a well - known result . we define an extended projection \overline{\pi} to be any distribution in the closure of \mathcal{p} such that the expectation and pythagorean properties hold and it belongs to the `` boundary '' of \mathcal{p} . note that \pi satisfies these requirements whenever the projection exists in \mathcal{p} . the extended projection allows us to define the extended mle in the next section . the pythagorean property allows us to improve \mathcal{r}-valued random variables by the projection \pi or , more generally , by \overline{\pi} . if \overline{\pi}\r exists a.e . , then e[d(\r , p ) ] \ge e\bigl[d(\overline{\pi}\r , p)\bigr ] , with equality holding if and only if \r\in\mathcal{p} a.e . replacing r with \r in equation ( [ eq : extended pythagorean ] ) and taking expectations shows

e[d(\r , p ) ] = e\bigl[d(\r , \overline{\pi}\r)\bigr ] + e\bigl[d(\overline{\pi}\r , p)\bigr ] \quad\quad \forall p\in\mathcal{p } ,

and the result follows from the fact that e\bigl[d(\r , \overline{\pi}\r)\bigr ] \ge 0 . the conditional expectation also improves estimators : e[\r|t ] will have the same distribution mean and have distribution variance less than or equal to that of \r . if t is sufficient then e[\r|t ] does not depend on the unknown distribution , and e[\r|t ] will have smaller variance than \r unless they are equal with probability one . this conditional expectation is enough to establish a rao blackwell result for distribution estimators if these were restricted to \mathcal{p} . however , since we are allowing \mathcal{r}-valued estimators , we also need to project the distributions onto \mathcal{p} using \pi . for an exponential family having mean parameter space m and discrete sample space , we typically have that the observed value of t can fall in the closure \overline{m} while the mean parameter ranges over m . in this case , the mle does not always exist .
however , the characterization theorem applies to \mathcal{r}-valued estimators , so we can define an estimator that equals the mle when it exists , and otherwise is a distribution on the boundary whose expectation of t equals the observed value of t . the _ extended mle _ as distribution estimator is \overline{\pi}\,e[\r|t ] applied to the sample . unbiasedness of the mle follows from the following theorem . [ thm : unbiasedestimatorsinexp ] let \mathcal{p} be an exponential family with complete sufficient statistic t and let \r be a \mathcal{p}-valued random variable . the estimator \r is distribution unbiased for p_{0} if and only if \mu(e[\r|t ] ) = t a.e . for all p_{0}\in\mathcal{p} . consider the following equivalencies , each of which holds for all p_{0}\in\mathcal{p} :

e_{\mathcal{p}}\r = p_{0 } \iff \mu(e_{\mathcal{p}}\r ) = \mu(p_{0 } ) \iff \mu(\pi\,e\r ) = \mu(p_{0 } ) \iff \mu\bigl(e\,e[\r|t]\bigr ) = \mu(p_{0 } ) \iff e\bigl[\mu\bigl(e[\r|t]\bigr)\bigr ] = \mu(p_{0 } ) \iff e\bigl[\mu\bigl(e[\r|t]\bigr)\bigr ] = e(t ) .

the first equivalence follows because the expectation of t parameterizes \mathcal{p} , the second equivalence follows from the projection property for exponential families , the third equivalence follows from the conditional expectation defined for the kl mean , the fourth equivalence follows from the expectation property for the kl mean , and the fifth equivalence follows from the definition of the function \mu . clearly , \mu(e[\r|t ] ) = t holds for the ( extended ) mle , so the mle is distribution unbiased , and by completeness of t it is the unique unbiased estimator that is a function of t . for the exponential family we have , for all r\in\mathcal{r} ,

e[d(\r , r ) ] = e\bigl[d(\r , e\r)\bigr ] + e\bigl[d(e\r , r)\bigr ] ,

so that the expectation operator defined on \mathcal{r}-valued random variables for the kl risk plays the role of the projection operator for the kl divergence . each operator is a map from a more complicated space to a simpler space , from \mathcal{r}-valued random variables to a distribution in \mathcal{r} and from distributions in \mathcal{r} to a distribution in \mathcal{p} , that preserves the kl risk and kl divergence , respectively . the restriction to exponential families is essentially required by the criterion of having a sufficient statistic of fixed dimension for all sample sizes . specifically , the darmois koopman pitman theorem , which follows from independent works of darmois , koopman and pitman , shows that when only continuous distributions are considered , the family of distributions of the sample has a sufficient statistic of dimension less than the sample size if and only if the population distribution belongs to the exponential family . denny shows that for a family of discrete distributions , if there is a sufficient statistic for the sample , then either the family is an exponential family or the sufficient statistic is equivalent to the order statistics .
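the content of theorem [ thm : unbiasedestimatorsinexp ] can be sampled cheaply . for the poisson family the canonical statistic is the sample mean and the mle assigns the poisson distribution with mean \bar{x} , so distribution unbiasedness amounts to the mean parameter of the mle averaging to the true mean parameter . the monte carlo below confirms this ( the sample size and true mean are arbitrary choices of ours ) , and contrasts it with a plug - in functional that is classically biased .

....
# distribution unbiasedness of the mle in the poisson family:
# the mean parameter of the mle is xbar, and E[xbar] equals the true mean.
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 3.7, 10, 200000            # arbitrary illustrative values

x = rng.poisson(lam, size=(reps, n))
mle_mean_param = x.mean(axis=1)           # mean parameter of each mle

print(mle_mean_param.mean())              # close to lam: distribution unbiased

# by contrast, a plug-in functional of the mle need not be unbiased in the
# classical sense, e.g. exp(-xbar) as an estimate of p0(0) = exp(-lam):
print(np.exp(-mle_mean_param).mean(), np.exp(-lam))
....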
the mle is parameter - invariant , which means that the same distribution is named by the parametric ml estimate regardless of the parameter chosen to index the family . one approach to studying parameter - invariant quantities is to use differential geometry ( e.g. , amari or kass and vos ) . the parameter - invariant approach does not work well for parameter - dependent quantities such as bias and variance of parametric estimators . our approach allows for the definition of parameter - free versions of bias and variance . furthermore , the distribution version of the rao blackwell theorem provides two extensions : ( 1 ) minimum variance is taken over a larger class of estimators that includes estimators that are not required to take values in the model space ; ( 2 ) the true distribution need not belong to \mathcal{p} . the fact that the mle is the unique uniformly minimum distribution variance unbiased estimator for exponential families distinguishes the mle from other estimators . this is in contrast to asymptotic methods applied to mse that can be used to show superior properties of the mle but , being asymptotic results , do not apply uniquely to the mle . asymptotically , mse and kl risk are the same , and the mse can be viewed as an approximation to kl risk for large sample sizes . the distribution version of the rao blackwell theorem [ thm : optimality - of - the ] provides support for fisher 's claim of the superiority of the mle even in small samples . we thank the associate editor and external reviewers for their insightful comments and suggestions , which have greatly improved this paper .
we employ a parameter - free distribution estimation framework where estimators are random distributions and utilize the kullback leibler ( kl ) divergence as a loss function . wu and vos [ _ j . statist . plann . inference _ * 142 * ( 2012 ) 1525 - 1536 ] show that when an estimator obtained from an i.i.d . sample is viewed as a random distribution , the kl risk of the estimator decomposes in a fashion parallel to the mean squared error decomposition when the estimator is a real - valued random variable . in this paper , we explore how conditional versions of distribution expectation can be defined so that a distribution version of the rao blackwell theorem holds . we define distributional expectation and variance that also provide a decomposition of kl risk in exponential and mixture families . for exponential families , we show that the maximum likelihood estimator ( viewed as a random distribution ) is distribution unbiased and is the unique uniformly minimum distribution variance unbiased ( umv ) estimator . furthermore , we show that the mle is robust against model specification in that if the true distribution does not belong to the exponential family , the mle is umv for the kl projection of the true distribution onto the exponential family , provided these two distributions have the same expectation for the canonical statistic . to allow for estimators taking values outside of the exponential family , we include results for kl projection and define an extended projection to accommodate the non - existence of the mle for families having discrete sample space . illustrative examples are provided .
there is considerable interest in estimating age and transit times of elements in a physical system or , equivalently , of individuals in a population , in disciplines as diverse as population dynamics and demography , chemical engineering , and hydrology and geophysics ( e.g. , among many others ) . it has in fact become increasingly clear that the age , survival time , and the total time spent by each element in a system may provide additional key insights into specific aspects of a system 's behavior . this viewpoint , which can be considered as a time - integrated lagrangian perspective , has been especially emphasized in groundwater systems and in the hydrological response of watersheds , using both theoretical and field approaches . recent discussions in the literature about the role of internal variability and external forcing ( e.g. , rainfall ) on the properties of age distributions , as well as the differences between age and survival time distributions , their degree of statistical dependence , and their symmetry under time reversal , have made evident that a comprehensive theory of age and related concepts is still missing . toward this goal , in this contribution we focus on the linkage between age and survival time distributions in both transient and steady - state conditions . differently from what has sometimes been assumed , we show that age and survival time are in general statistically dependent quantities ( the only case of independence being the one of time- and age - independent loss ( or input ) in steady state ) . the theoretical framework afforded by the evolution equation of the joint distribution of age and survival also provides a means to easily understand the time symmetries between age and survival , and the derivation of the general properties of the transit - time distribution . we should warn readers unfamiliar with the previously cited literature that , perhaps because of the contributions from many disciplines , the terminology which identifies these variables is hardly unified . for example , apart from the age ( a in what follows ) , the definition of which seems uncontroversial , the survival time ( here indicated as s ) is often also indicated as life expectancy , while input and output rates are often also called birth and death functions . the variable with possibly the most appellations is the so - called transit time ( t ) , the sum of age and survival time , which is also indicated as travel time , life span , total life time , and sojourn time . as long as the mathematical formalism is clear and the notation kept consistent , as we have hopefully done here , we trust that these different names will not confuse the readers . the paper is organized as follows . the evolution equation for the joint distribution of age and survival time is introduced and solved in section [ sec : joint ] , with boundary conditions given by the survival time distribution at birth and the related age distribution at death . the solution is used to derive the transit time distribution by a simple integration in section [ sec : transit ] . the steady state conditions are discussed in section [ sec : steadystate ] . finally , we present some applications in section [ sec : applications ] with the purpose of showing some interesting details of the theory .
while most of these applications have a close connection to hydrological and fluid mechanic systems , they are by necessity highly idealized to allow us to focus on the novel theoretical results , avoiding the additional complications that more realistic applications with random external forcing ( e.g. , rainfall ) and spatial heterogeneities would add . the transit time ( t ) of an element of a system is the sum of the time spent since the entrance / birth , called the age ( a ) , and the time that it will spend before exit / death , called the survival time ( s ) . at a given time , each element is characterized by a certain age and survival ( and thus transit ) time , which globally can be described by the joint distribution \eta(t , a , s ) . in words , \eta(t , a , s)\,\mathrm{d}a\,\mathrm{d}s represents the ( infinitesimal ) amount of elements ( e.g. , a mass or population number ) having age between a and a + \mathrm{d}a and survival time between s and s + \mathrm{d}s at time t . the balance equation for the joint distribution can be obtained considering that , as the system evolves in time , \eta is conserved along the lines orthogonal to the bisector in the ( a , s ) plane , which are characterized by having constant transit time . based on these considerations , one can readily write

\frac{\partial\eta}{\partial t } + \frac{\partial\eta}{\partial a } - \frac{\partial\eta}{\partial s } = 0 . \qquad ( [ eq : jointeq ] )

equation ( [ eq : jointeq ] ) is controlled by the boundary conditions : \eta(t , 0 , s ) , which is the survival time distribution at input / birth , and \eta(t , a , 0 ) , which is the age distribution at output / death . an example of the evolution of the joint distribution is shown in figure [ fig : translationbcs ] , showing how \eta is simply the input boundary condition on the s axis , translating in time along lines of constant transit time until it crosses the a axis , where the elements exit . the figure also clearly shows how the two boundary conditions cannot be independent , as will be seen more precisely later . it is interesting that the contribution of input and output to the system is entirely felt through the boundary conditions . in more general cases , elements could also enter with age different from zero ( immigration ) or exit with a non - zero survival time ( emigration ) , in which case equation ( [ eq : jointeq ] ) should also contain corresponding source and sink terms ; these generalizations however will not be pursued in this paper . more formally , moving along the characteristic curves , defined by \mathrm{d}a/\mathrm{d}t = 1 and \mathrm{d}s/\mathrm{d}t = -1 , which are obviously also lines of constant transit time , it is possible to re - express equation ( [ eq : jointeq ] ) as \mathrm{d}\eta/\mathrm{d}t = 0 along the characteristics , so that the solution is then

\eta(t , a , s ) = \eta(t - a , 0 , s + a ) , \qquad ( [ eq : jointsol ] )

or equivalently ,

\eta(t , a , s ) = \eta(t + s , a + s , 0 ) . \qquad ( [ eq : jointsol2 ] )

from this it is evident how the joint distribution at a given time is simply the time shift of the boundary conditions and that , if time is reversed , the whole process is flipped , with the age playing the role of the survival time and vice versa . this type of time symmetry will appear frequently in the following . [ figure [ fig : translationbcs ] : the joint distribution results from a simple translation of the boundary conditions . ]
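as a minimal numerical illustration of the solution by characteristics , the sketch below propagates an input boundary condition along lines of constant transit time to evaluate the joint distribution at a later time . the sinusoidal input and exponential survival times are assumptions of ours , chosen only for illustration , and the input is taken to have been running since the indefinite past .

....
# evaluate the joint age-survival density eta(t, a, s) by translating the
# input boundary condition eta_plus(t, s) = eta(t, 0, s) along characteristics:
# an element found at time t with age a and survival time s entered the
# system at time t - a with survival time s + a.
import numpy as np

def eta_plus(t, s):
    # illustrative input (not from the paper): sinusoidally modulated
    # injection with exponentially distributed survival times
    return (1.0 + 0.5 * np.sin(2 * np.pi * t)) * np.exp(-s)

def eta(t, a, s):
    # solution of d/dt eta + d/da eta - d/ds eta = 0 with entry at age zero
    return eta_plus(t - a, s + a)

a = np.linspace(0.0, 3.0, 301)
s = np.linspace(0.0, 3.0, 301)
A, S = np.meshgrid(a, s, indexing="ij")
joint = eta(2.0, A, S)                      # joint distribution at t = 2
age_marginal = np.trapz(joint, s, axis=1)   # integrate over survival time
....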
by integrating equation ( [ eq : jointeq ] ) over s , one obtains the mckendrick - von foerster ( mkvf ) equation , describing the dynamics of an age - structured population ,

\frac{\partial n}{\partial t } + \frac{\partial n}{\partial a } = -\eta(t , a , 0 ) , \qquad ( [ eq.mkvf ] )

where n(t , a ) = \int_{0}^{\infty}\eta(t , a , s)\,\mathrm{d}s is the age distribution ( mass over time ( age ) ) , quantifying the amount of substance having age a at time t . the sink term , \eta(t , a , 0 ) , is the age distribution at output / death , previously introduced as a boundary condition for ( [ eq : jointeq ] ) . it can be written as

\eta(t , a , 0 ) = \mu(t , a)\,n(t , a ) , \qquad ( [ eq : defloss ] )

where \mu(t , a ) is the age and mass specific output rate . with initial condition n(0 , a ) and boundary condition n(t , 0 ) = i(t ) , where i(t ) is the input / birth rate , the solution of equation ( [ eq.mkvf ] ) is , for a < t ,

n(t , a ) = i(t - a)\,\exp\left ( -\int_{0}^{a}\mu(t - a + u , u)\,\mathrm{d}u \right ) . \qquad ( [ eq : mkvfsol ] )

on the other hand , by integrating ( [ eq : jointeq ] ) over a , a corresponding equation for the survival time distribution is obtained ,

\frac{\partial h}{\partial t } - \frac{\partial h}{\partial s } = \eta(t , 0 , s ) , \qquad ( [ eq.mkvfsurv ] )

where h(t , s ) = \int_{0}^{\infty}\eta(t , a , s)\,\mathrm{d}a quantifies the amount of substance having survival time s at time t . the source term is the survival time distribution at input / birth , which can be expressed as

\eta(t , 0 , s ) = b(t , s)\,h(t , s ) , \qquad ( [ eq : defbirth ] )

with b(t , s ) being the survival time and mass specific birth rate . the boundary condition h(t , 0 ) = o(t ) is the overall output , and the initial condition is h(0 , s ) . as for the mkvf equation , the solution is obtained with the method of characteristics as

h(t , s ) = h(0 , s + t ) + \int_{0}^{t}\eta(\tau , 0 , s + t - \tau)\,\mathrm{d}\tau . \qquad ( [ eq : mkvfsurvsol ] )

by integrating again either equation ( [ eq.mkvf ] ) over a or equation ( [ eq.mkvfsurv ] ) over s , the familiar form of the balance equation is obtained ,

\frac{\mathrm{d}w}{\mathrm{d}t } = i(t ) - o(t ) , \qquad ( [ eq : balance ] )

with w(t ) = \int_{0}^{\infty}n(t , a)\,\mathrm{d}a = \int_{0}^{\infty}h(t , s)\,\mathrm{d}s , and where the input and output can also be written as i(t ) = n(t , 0 ) and o(t ) = h(t , 0 ) . it should be noted that , when the solution of ( [ eq : jointeq ] ) is available , the age and survival distributions , n(t , a ) and h(t , s ) , and the evolution of w(t ) can be directly obtained by integrating the joint distribution , without need to go through the corresponding equations ( [ eq.mkvf ] ) , ( [ eq.mkvfsurv ] ) , and ( [ eq : balance ] ) . looking at these equations , it is also worth noting again the symmetry of the problem with respect to time reversal , upon which age and survival time exchange their roles , with the output becoming the input and the age - specific loss function playing the part of the survival - specific birth function and vice versa . it is easy to see , in fact , that with these substitutions , equations ( [ eq.mkvf ] ) and ( [ eq.mkvfsurv ] ) are interchangeable .
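the characteristic solution ( [ eq : mkvfsol ] ) lends itself to direct evaluation . the sketch below computes n(t , a ) = i(t - a)\exp(-\int_{0}^{a}\mu(t - a + u , u)\,\mathrm{d}u ) by quadrature along the characteristic that entered the system at time t - a ; the constant input and the age - dependent loss rate are illustrative choices of ours , not values from the paper .

....
# evaluate the mckendrick-von foerster solution along characteristics:
# n(t, a) = i(t - a) * exp(-integral_0^a mu(t - a + u, u) du),  for a < t.
import numpy as np

def i_rate(t):            # input/birth rate (illustrative: constant)
    return 1.0

def mu(t, a):             # loss rate; increasing with age here (illustrative)
    return 0.5 + a

def n(t, a, nu=2000):
    """age distribution at time t for ages a < t."""
    u = np.linspace(0.0, a, nu)
    survival = np.exp(-np.trapz(mu(t - a + u, u), u))
    return i_rate(t - a) * survival

ages = np.linspace(0.01, 1.99, 200)
n_t2 = np.array([n(2.0, a) for a in ages])   # age distribution at t = 2
....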
it is possible at this point to establish a relationship between the age - specific output and the survival - specific input and , in turn , discuss the conditional distributions between age and survival times . the latter will be essential in deriving the residence time statistics in section [ sec : steadystate ] . to this purpose , we begin by returning to equation ( [ eq : jointsol ] ) , which immediately furnishes the relationship between the boundary conditions by setting either a = 0 or s = 0 ( see figure [ fig : translationbcs ] ) :

\eta(t , a , 0 ) = \eta(t - a , 0 , a ) . \qquad ( [ eq : relbc ] )

these equations , when expressed in terms of their probability density functions ( pdfs ) normalized to have area one , become a relationship already obtained in earlier work ; the subscripts d and b , in particular , refer to the variables age at death and survival time at birth , respectively . they will be used explicitly when it is necessary to refer to them as random variables , to distinguish them from the age and survival time of the entire population , as in section [ sec : meanvalues ] . from ( [ eq : relbc ] ) , a relationship between the birth and loss functions is then obtained from the definitions of \mu and b in equations ( [ eq : defloss ] ) and ( [ eq : defbirth ] ) , in which the respective solutions for n and h can be substituted from ( [ eq : mkvfsol ] ) and ( [ eq : mkvfsurvsol ] ) . this clearly shows that the age- and survival - specific birth and loss functions are not independent . coming back to equation ( [ eq : jointsol2 ] ) , the joint distribution can be expressed in terms of the conditional distribution between age and survival time . to this purpose , it is more useful to consider pdfs instead of distributions , and define

p(a , s | t ) = p(s | a , t)\,p_{a}(a | t ) = p(a | s , t)\,p_{s}(s | t ) , \qquad ( [ eq : conditional ] )

where p_{a} and p_{s} are the marginal pdfs of age and survival time , respectively . focusing , as an example , on the conditional pdf of survival given age , an expression can be derived combining ( [ eq : conditional ] ) and ( [ eq : jointsol2 ] ) ; substituting in ( [ eq : conddistr ] ) the solutions for \eta and n , one readily obtains

p(s | a , t ) = \frac{\eta(t + s , a + s , 0)}{n(t , a ) } . \qquad ( [ eq : condsol ] )

only when this expression is equal to the marginal pdf of survival time are the age and survival time statistically independent . thus , comparing the conditional with the marginal , one sees that , in general , marginal and conditional probability distributions are different . this implies that ( when elements are sampled at random in the system ) age and survival are typically statistically dependent variables . this remains true , in transient conditions , even when the loss and birth functions are constant , because the two distributions remain different at finite times . further considerations on ( [ eq : condsol ] ) will be given , for steady state conditions , in section [ sec : steadystate ] . as already said , the transit time , t = a + s , is the total time spent by an element in the system , given by the sum of age and survival time . its distribution can thus be obtained as the distribution of the sum of the two random variables , the age and survival time , e.g. ,

\eta_{t}(t , t ) = \int_{0}^{t}\eta(t , a , t - a)\,\mathrm{d}a , \qquad ( [ eq : intjoint2 ] )

which , as shown in figure [ fig : linestconst ] , is the integral along lines of constant t , which are orthogonal to the bisector in the ( a , s ) plane . using equation ( [ eq : jointsol2 ] ) one also has

\eta_{t}(t , t ) = \int_{0}^{t}\eta(t + t - a , t , 0)\,\mathrm{d}a , \qquad ( [ eq : transdistr ] )

which tells us that we can know the transit time distribution at time t by summing up the amount leaving the system with age t within the time window ( t , t + t ) . only when a and s are statistically independent is equation ( [ eq : transdistr ] ) a simple convolution integral . an alternative expression can be similarly obtained from equation ( [ eq : jointsol ] ) ,

\eta_{t}(t , t ) = \int_{0}^{t}\eta(t - a , 0 , t)\,\mathrm{d}a , \qquad ( [ eq : transdistr2 ] )

which instead looks back to the time window ( t - t , t ) and sums all the elements entering with survival time t . [ figure [ fig : linestconst ] : lines of constant transit time in the ( a , s ) plane ; the boundary conditions are indicated on the a and s axes , respectively . ] it may be useful to note that the transit time distribution refers in general to any element of the control volume ( or population ) . if one instead only focuses on the elements entering the system ( or the newborns ) , their transit time is equal to their survival time , because for them a = 0 , and their distribution is \eta(t , 0 , s ) . analogously , focusing on the elements leaving the system ( or dying ) , the transit time equals the age , and their residence time distribution is equal to \eta(t , a , 0 ) . several of the previous relationships assume an interesting , simplified form at steady state , a necessary condition for which is that i and o are time independent . in such a case , the balance equation for a steady state system is simply i = o , and from equation ( [ eq : jointsol ] ) it follows that \eta(a , 0 ) = \eta(0 , a ) , meaning that not only the overall input equals the overall output , but also that an input with a given survival time must be balanced by an output of equal age . as a result , the joint distribution is constant along lines of constant transit time .
in steady state conditions , age and survival time distributions are derived by taking t\to\infty in ( [ eq : mkvfsol ] ) and ( [ eq : mkvfsurvsol ] ) , that is

n(a ) = i\,\exp\left ( -\int_{0}^{a}\mu(u)\,\mathrm{d}u \right ) , \qquad ( [ eq : agesteady ] )

and similarly for h(s ) . in particular , from ( [ eq : jointsol ] ) , the integral of \eta over a is the same as the integral over s , suggesting that

n(x ) = h(x ) . \qquad ( [ eq : equaldistr ] )

in turn , it follows immediately that , at steady state , birth and loss functions must be equal ,

b(x ) = \mu(x ) . \qquad ( [ eq : equallossbirth ] )

we note that the so - called survivor function , defined as the exceedance probability of survival at steady state , can be obtained by dividing either the age distribution or the survival distribution by the input ( or by the output ) , so that , as is well known , g(a ) = n(a)/i . the transit - time distribution at steady state can also be easily obtained by solving the integral in equation ( [ eq : transdistr ] ) and substituting equation ( [ eq : agesteady ] ) , or , normalized as a pdf ,

p_{t}(t ) = \frac{t\,\mu(t)\,n(t)}{\int_{0}^{\infty}u\,\mu(u)\,n(u)\,\mathrm{d}u } . \qquad ( [ eq : transdistrsteadynorm ] )

finally , regarding the conditional probabilities , from equation ( [ eq : conddistr ] ) , and because of ( [ eq : equallossbirth ] ) , the conditional pdf of survival given age and that of age given survival are obviously equal ,

p(s | a ) = \frac{\mu(a + s)\,n(a + s)}{n(a ) } . \qquad ( [ eq : ageconddistrsteady ] )

comparing , for example , the distribution ( [ eq : ageconddistrsteady ] ) with the corresponding marginal pdf , it becomes clear that only for constant \mu are the conditional distributions equal to their marginal distributions , thereby implying that age and survival time are statistically independent . this is essentially due to the rescaling ( or memoryless ) property of the resulting exponential distributions , which is the form taken by all these distributions in this special case ( see section 5.1 ) . in general , however , when the input and loss functions depend respectively on survival time and age , the two variables are statistically dependent , as will be shown in detail in the applications . in steady state , because of ( [ eq : equaldistr ] ) , the age distribution and survival distribution have the same mean , \langle a\rangle = \langle s\rangle . the mean age at death and mean survival time at birth are also equal , \langle a_{d}\rangle = \langle s_{b}\rangle , while the mean transit time is then \langle t\rangle = \langle a\rangle + \langle s\rangle = 2\langle a\rangle . with regard to the mean transit time , by definition ,

\langle t\rangle = \frac{\int_{0}^{\infty}t^{2}\,\mu(t)\,n(t)\,\mathrm{d}t}{\int_{0}^{\infty}t\,\mu(t)\,n(t)\,\mathrm{d}t } = \frac{\langle a_{d}^{2}\rangle}{\langle a_{d}\rangle } ,

where the last equality has been obtained by multiplying and dividing by the output . now , remembering that \langle a_{d}^{2}\rangle = \sigma_{a_{d}}^{2 } + \langle a_{d}\rangle^{2} , where \sigma_{a_{d}}^{2} is the variance of the respective variable , one obtains the exact relationship

\langle t\rangle = \langle a_{d}\rangle + \frac{\sigma_{a_{d}}^{2}}{\langle a_{d}\rangle } . \qquad ( [ eq : reltransdeath ] )

thus , in general , \langle t\rangle \ge \langle a_{d}\rangle . in particular , \langle t\rangle equals \langle a_{d}\rangle only when the loss function is a dirac delta function , for which the variance of a_{d} is zero , as in the case of a plug - flow system . in addition , substituting expression ( [ eq : ageand transit ] ) into the equation above , an exact link between \langle a\rangle and \langle a_{d}\rangle is also obtained ,

2\langle a\rangle = \langle a_{d}\rangle + \frac{\sigma_{a_{d}}^{2}}{\langle a_{d}\rangle } . \qquad ( [ eq : relageedeath ] )

the same condition was obtained in a somewhat different way in earlier work . only in the case of constant loss and birth functions are \langle a\rangle = \langle a_{d}\rangle and \langle s\rangle = \langle s_{b}\rangle , with \langle t\rangle = 2\langle a\rangle , while for a dirac delta loss function , \langle t\rangle = \langle a_{d}\rangle = 2\langle a\rangle .
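as a quick consistency check of these mean relations ( using the well - mixed case of the first application below , with our own notation ) : for a constant loss rate \mu the age at death is exponential , and ( [ eq : reltransdeath ] ) reproduces the erlang-2 mean transit time .

....
% consistency check of the mean relations for constant loss rate mu:
\langle a_d \rangle = 1/\mu , \qquad \sigma_{a_d}^{2} = 1/\mu^{2}
\quad\Rightarrow\quad
\langle t \rangle
= \langle a_d \rangle + \frac{\sigma_{a_d}^{2}}{\langle a_d \rangle}
= \frac{1}{\mu} + \frac{1/\mu^{2}}{1/\mu}
= \frac{2}{\mu}
= 2\,\langle a \rangle ,
% in agreement with the erlang-2 transit-time pdf of the first application.
....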
the manner in which input and loss functions depend on age and survival time plays a key role in determining whether the means of the age and survival time , \langle a\rangle and \langle s\rangle , are greater or lower than the mean age at death and the mean survival time at birth , \langle a_{d}\rangle and \langle s_{b}\rangle . for example , as already discussed in the literature , in the case of a loss function which selects preferably young elements , leaving older elements to age in the system , the resulting mean age at death is lower than the mean age in the system , i.e. , \langle a_{d}\rangle < \langle a\rangle . on the contrary , the case \langle a_{d}\rangle > \langle a\rangle is true whenever older elements tend to be chosen by \mu , leaving young ones to keep the mean age low in the system , compared to the mean age at death ( see section [ app4 ] ) . we present four examples to illustrate the previously discussed theory . the first is a simple steady state system with constant birth and loss functions . the second consists of a plug - flow system in which all the elements have the same transit time . the third application is characterized by a periodicity of the age - independent loss function , while the fourth one focuses on the role of age - dependence in the loss function . this simplest case , which represents a well - mixed system , serves as a point of reference for the more complex cases presented later . it is characterized by constant and equal birth and loss functions , b = \mu , so that the balance equation gives i = o = \mu w , while the normalized age and survival time distributions are

p_{a}(a ) = \mu\,e^{-\mu a } , \qquad p_{s}(s ) = \mu\,e^{-\mu s } ,

with mean \langle a\rangle = \langle s\rangle = 1/\mu ( see figure [ fig : linstead ] ) . it is easy to show that the same exponential function results from ( [ eq : survconddistrsteady ] ) and ( [ eq : ageconddistrsteady ] ) for the conditional pdfs , so that in this special case age and survival are statistically independent and their joint distribution is simply the product of the two distributions . the age distribution at death ( survival time distribution at birth ) is simply obtained by multiplying by \mu ( by b ) , which in its normalized form is equal to the age distribution , p_{a_{d}}(a ) = p_{a}(a ) , and similarly p_{s_{b}}(s ) = p_{s}(s ) . the mean values are \langle a_{d}\rangle = \langle s_{b}\rangle = 1/\mu . the transit time distribution ( fig . [ fig : linstead ] ) is calculated from equation ( [ eq : transdistrsteady ] ) , which , written as a pdf , becomes an erlang-2 distribution for the sum of two independent , exponentially distributed random variables ( see figure [ fig : linstead ] ) ,

p_{t}(t ) = \mu^{2}\,t\,e^{-\mu t } ,

with mean \langle t\rangle = 2/\mu . [ figure [ fig : linstead ] : age , survival time , and transit time distributions for the well - mixed steady state case . ]
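a quick monte carlo check of this first application is straightforward : elements alive at the observation time in the well - mixed steady state should have exponential age and survival time , independent of each other , and erlang-2 transit time . the simulation below verifies this ; the parameter values are arbitrary illustrative choices of ours .

....
# monte carlo check of the well-mixed steady state (application 1):
# constant input, constant loss rate mu. elements alive at the observation
# time have exponential age and survival time, and erlang-2 transit time.
import numpy as np

rng = np.random.default_rng(0)
mu0, rate, t_obs = 0.5, 200.0, 60.0       # arbitrary illustrative parameters

n_arrivals = rng.poisson(rate * t_obs)
t_in = rng.uniform(0.0, t_obs, n_arrivals)      # poisson arrivals in (0, t_obs)
life = rng.exponential(1.0 / mu0, n_arrivals)   # total transit time ~ exp(mu0)
alive = (t_in + life) > t_obs                   # elements still in the system

age = t_obs - t_in[alive]
surv = t_in[alive] + life[alive] - t_obs
transit = age + surv

print(age.mean(), surv.mean())            # both close to 1/mu0 = 2
print(transit.mean())                     # close to 2/mu0 = 4 (erlang-2 mean)
print(np.corrcoef(age, surv)[0, 1])       # close to 0: independence
....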
we now consider an extension of the first application , in which the input is constant but the loss function , although still independent of age , is now time periodic , from a hydrological point of view , this case is representative of a system with rainfall homogeneously distributed during the year but with seasonally modulated potential evapotranspiration and negligible other losses . the system is still well - mixed , although the transient conditions bring about additional complications that result in statistical dependence between age and survival . considering so that the system has forgotten the initial conditions and has settled on a periodic steady state , the age distribution is from ( [ eq : mkvfsol ] ) while the joint distribution can be obtained through equations ( [ eq : jointsol2 ] ) , the joint distribution is plotted in figure ( [ fig : jointappperiod ] ) for different days of the year corresponding to the four seasons . the plots show that the system has low values of age and survival for the season with high losses ( summer ) , while when the losses diminish ( winter ) , the age and survival start increasing again . also visible is the asymmetry with respect to the bisector , indicating statistical dependence between age and survival induced by the time - varying conditions . the age , survival time and transit time distributions are plotted in figure [ fig : pdfsperiod ] . the joint distribution integrated with respect to recovers the survival time distribution . however , the integral does not appear to be elementary and here it was only solved numerically . finally , regarding the mean values ( figure [ fig : applperiod ] ) , the loss function being age - independent , the mean age at death is equal to the mean age , the mean survival time at birth was computed analytically , while the mean survival time was computed numerically , and they appeared to be equal . ) , calculated at four different times of the year . d ( a ) , d ( b ) , ( c ) , d ( d ) . the parameters , , and . ] ) , calculated at four different times of the year . d ( a ) , d ( b ) , ( c ) , d ( d ) . the parameters are the same as in figure [ fig : jointappperiod ] . ] ) . the parameters are the same as in figure [ fig : jointappperiod ] . ] in this last example , we analyze the role of the loss function under conditions of steady state . as shown in section [ sec : steadystate ] , in steady state , the age and survival time distributions are the same , and therefore similar considerations also apply to a survival - time dependence of the birth function . we consider the age - dependent loss function with . for , is a decreasing function , thus selecting younger elements for output , while it is an increasing function of age for with preference for older elements . the age distribution is obtained from ( [ eq : agesteady ] ) whereas the age distribution at death is for , the age pdf at death is a stretched exponential distribution and , as can be verified through ( [ eq : relageedeath ] ) , the mean age at death is lower than the mean age . when , the age pdf at death is a weibull distribution and the mean age at death is greater than the mean age . in the limiting case , is a constant and the well - mixed system of section [ sec : linesteady ] is recovered . in addition , the plug - flow system of section [ sec : appdelta ] is recovered when taking .
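a minimal numerical sketch of the kind of quadrature used for these two applications is given below : along characteristics , and with unit input , the age distribution is p(t , a) = exp( - integral of the loss function over the lifetime of the element ) . the two loss functions ( a seasonal one and a power - law one in age ) and all parameter values are illustrative assumptions , not those used for the figures :

```python
import numpy as np

# age distribution p(t, a) = b * exp(-int_0^a mu(t - a + u, u) du), b = 1,
# evaluated by a left riemann sum; mu may depend on time and/or age.
def age_density(mu, t, ages, da=0.05):
    out = np.empty_like(ages)
    for i, a in enumerate(ages):
        u = np.arange(0.0, a, da)
        out[i] = np.exp(-np.sum(mu(t - a + u, u)) * da)
    return out

# illustrative loss functions (units: 1/day)
seasonal = lambda t, a: 0.02 * (1.0 + 0.5 * np.sin(2.0 * np.pi * t / 365.0))
powerlaw = lambda t, a: 0.05 * np.maximum(a, 1e-6) ** 0.5   # prefers older elements

ages = np.linspace(0.0, 300.0, 300)
for name, mu in (("seasonal", seasonal), ("power-law", powerlaw)):
    p = age_density(mu, 182.0, ages)             # evaluated at mid-year
    print(name, "mean age:", round(float((ages * p).sum() / p.sum()), 1))
```

the seasonal case reproduces the non - elementary integral solved numerically above , while the power - law case illustrates the age - selective losses of the last example .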
for different values of , the age , age at death and transit time distributions are plotted in figure [ fig : distrappl2 ] , while the joint distributions are plotted in figure [ fig : applossjoints ] . the mean values are shown in figure [ fig : medieofc ] , where the mean age at death is lower than when , while it is larger than when , and tends asymptotically to for . as shown in equation ( [ eq : reltransdeath ] ) , serves as an upper bound for both mean age and mean age at death . ) with parameter ( a ) , ( b ) , ( c ) and ( d ) . ] ( [ eq : imposedlossufunction2 ] ) , for ( a ) , ( b ) and ( c ) . ] , given the age dependent loss function ( [ eq : imposedlossufunction2 ] ) . ] our main results regard the evolution equation of the joint distribution , equation ( [ eq : jointeq ] ) , which in turn allowed us to obtain the corresponding evolution equations for age ( mkvf ) and survival time , given by equations ( [ eq.mkvf ] ) and ( [ eq.mkvfsurv ] ) , respectively . the theory naturally led us to consider the conditional distributions of age and survival , equations ( [ eq : conditional ] ) , ( [ eq : conddistr ] ) , and ( [ eq : condsol ] ) , which helped us clarify some of the statements in the literature about the statistical dependence of these two quantities . we also obtained general relationships for the transit time distribution , equations ( [ eq : transdistr ] ) and ( [ eq : transdistr2 ] ) , and discussed the simplifications induced by the steady state conditions ; see equations ( [ eq : equaldistr ] ) , ( [ eq : equalcond ] ) , ( [ eq : equallossbirth ] ) , ( [ eq : transdistrsteadynorm ] ) and ( [ eq : ageconddistrsteady ] ) . furthermore , we derived exact relationships among the means ( [ eq : reltransdeath ] ) and ( [ eq : relageedeath ] ) , although the latter was already known to . the present theory is spatially implicit in that it considers , globally , entire populations or finite amounts of substance in a control volume . interesting future work will consist of connecting it with the spatially explicit formulation pioneered by ( and further developed by ) , as done in the case of the mkvf equation by and . it also seems promising to explore the use of nonlinear formulations ( see and related contributions ) , in which mortality and birth functions may also depend on the total amount , that is . such dependence of birth and mortality on global quantities may be linked to the nonlocal nature of the pressure equation in fluid mechanics . indeed , hydrological systems are known to behave such that the loss function at a point is non - local . this is often tacitly assumed in spatially implicit , event - based formulations of rainfall - runoff , which use closure assumptions that depend on the total system storage ( _ bartlett et al . , 2015 _ , in preparation ) . finally , for realistic hydrologic applications it is necessary to include the effect of stochasticity from the external environment on the input and output functions . in this case , even the mean quantities become random variables with statistical distributions that may be of great theoretical and practical interest . birkel , c. , d. tetzlaff , s. m. dunn , and c. soulsby ( 2011 ) , using time domain and geographic source tracers to conceptualize streamflow generation processes in lumped rainfall - runoff models , _ water resources research _ , _ 47_(2 ) . cornaton , f. , and p.
perrochet ( 2006 ) , groundwater age , life expectancy and transit time distributions in advective dispersive systems : 1 . generalized reservoir theory , _ advances in water resources _ , _ 29_(9 ) , 1267 - 1291 . ginn , t. r. ( 1999 ) , on the distribution of multicomponent mixtures over generalized exposure time in subsurface flow and reactive transport : foundations , and formulations for groundwater age , chemical heterogeneity , and biodegradation , _ water resources research _ , _ 35_(5 ) , 1395 - 1407 . hrachowitz , m. , c. soulsby , d. tetzlaff , j. dawson , s. dunn , and i. malcolm ( 2009 ) , using long - term data sets to understand transit times in contrasting headwater catchments , _ journal of hydrology _ , _ 367_(3 ) , 237 - 248 . maloszewski , p. , and a. zuber ( 1982 ) , determining the turnover time of groundwater systems with the aid of environmental tracers : 1 . models and their applicability , _ journal of hydrology _ , _ 57_(3 ) , 207 - 231 . mcdonnell , j. j. , and k. beven ( 2014 ) , debates : the future of hydrological sciences : a ( common ) path forward ? a call to action aimed at understanding velocities , celerities and residence time distributions of the headwater hydrograph , _ water resources research _ .
although the concepts of age , survival and transit time have been widely used in many fields , including population dynamics , chemical engineering , and hydrology , a comprehensive mathematical framework is still missing . here we discuss several relationships among these quantities by starting from the evolution equation for the joint distribution of age and survival , from which the equations for age and survival time readily follow . it also becomes apparent how the statistical dependence between age and survival is directly related to either the age - dependence of the loss function or the survival - time dependence of the input function . the solution of the joint distribution equation also allows us to obtain the relationships between the age at exit ( or death ) and the survival time at input ( or birth ) , as well as to stress the symmetries of the various distributions under time reversal . the transit time is then obtained as the sum of the age and survival time , and its properties are discussed along with the general relationships between their mean values . the special case of steady state is analyzed in detail . some examples , inspired by hydrologic applications , are presented to illustrate the theory with specific results .
now that the human genome project has provided a blueprint of the dna present in each human cell , genomics research is focusing on the study of dna variations that occur between individuals , seeking to understand how these variations confer susceptibility to common diseases such as diabetes or cancer . the most common form of genomic variation is given by the so - called _ single nucleotide polymorphisms _ ( snps ) , i.e. , the presence of different dna nucleotides , or _ alleles _ , at certain chromosomal locations . the vast majority of snps are _ bi - allelic _ , i.e. , only two of the four possible dna bases are observed at the snp locus . since human cells contain two copies of each chromosome ( with the exception of sex chromosomes in males ) , both snp alleles may be present in the dna of an individual . determining the identity of alleles present in a dna sample at a given set of snp loci is called _ snp genotyping _ . the continuous progress in high - throughput genomic technologies has resulted in numerous snp genotyping platforms combining a variety of allele discrimination techniques ( sequencing , direct hybridization , primer extension , allele - specific pcr , ligation , and cleavage , etc . ) , detection mechanisms ( fluorescence , mass spectrometry , etc . ) and reaction formats ( solution phase , solid support , bead arrays ) ; see , e.g. , for comprehensive reviews . however , current technologies still offer an insufficient degree of multiplexing ( below 10,000 snps per assay ) for fully - powered genome wide disease association studies that require genotyping of large sets of user - selected snps . the highest throughput is currently achieved by high - density mapping arrays produced by affymetrix , which can simultaneously genotype a fixed set of about 250,000 _ manufacturer selected _ snps per array . genotyping a comparable number of user - specified snps would require an expensive and time - consuming re - design of array probes as well as a difficult re - engineering of the primer - ligation amplification protocol .
among technologies that allow genotyping of custom sets of snps , one of the most successful is the use of dna tag arrays . dna tag arrays consist of a set of dna strings called _ tags _ , designed such that each tag hybridizes strongly to its own _ antitag _ ( watson - crick complement ) , but to no other antitag . the flexibility of tag arrays comes from combining solid - phase hybridization with the high sensitivity of single - base extension reactions , which has also been used for snp genotyping in combination with maldi - tof mass spectrometry . a typical assay based on tag arrays performs snp genotyping using the following steps : ( 1 ) a set of _ reporter probes _ is synthesized by ligating antitags to the end of primers complementing the genomic sequence immediately preceding the snps of interest . ( 2 ) reporter probes are hybridized in solution with the genomic sample . ( 3 ) the hybridized ( primer ) end of reporter probes is extended by a single base in a reaction using the polymerase enzyme and dideoxynucleotides fluorescently labeled with 4 different dyes . ( 4 ) reporter probes are separated from the template dna and hybridized to a tag array . ( 5 ) finally , fluorescence levels are used to determine the identity of the extending dideoxynucleotides . commercially available tag arrays have between 2,000 and 10,000 tags . the number of snps that can be genotyped per array is typically smaller than the number of tags since some of the tags must remain unassigned due to cross - hybridization with the primers . another factor limiting the wider use of tag arrays is the relatively high cost of synthesizing the reporter probes , which have a typical length of 40 nucleotides . in the -mer array format , all dna probes of length are spotted or synthesized on the solid array substrate ( values of of up to are feasible with current high - density in - situ synthesis technologies ) . this format was originally proposed for performing _ sequencing by hybridization ( sbh ) _ , which seeks to reconstruct an unknown dna sequence based on its -mer spectrum . however , the sequence length for which unambiguous reconstruction is possible with high probability is surprisingly small , and , despite several suggestions for improvement , such as the use of gapped probes and pooling of target sequences , the sbh scheme has not become practical so far . in this paper we propose a new genotyping assay architecture combining multiplexed solution - phase single - base extension ( sbe ) reactions with sequencing by hybridization ( sbh ) using universal dna arrays such as all -mer arrays . snp genotyping using sbe / sbh assays requires the following steps ( see figure [ k - mer ] ) : ( 1 ) synthesizing primers complementing the genomic sequence immediately preceding snps of interest ; ( 2 ) hybridizing primers with the genomic dna ; ( 3 ) extending each primer by a single base using polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent dyes ; and finally ( 4 ) hybridizing extended primers to a universal dna array and determining the identity of the bases that extend each primer by hybridization pattern analysis . to the best of our knowledge the combination of the two technologies in the context of snp genotyping has not been explored thus far . the most closely related genotyping assay is the generic polymerase extension assay ( pea ) recently proposed in .
in pea , short amplicons containing the snps of interest are hybridized to an all -mers array of _ primers _ that are subsequently extended via single - base extension reactions . hence , in pea the sbe reactions take place on solid support , similar to _ arrayed primer extension _ ( apex ) assays which use snp specific primers spotted on the array . as in , the sbe / sbh assay leads to high array probe utilization since we hybridize to the array a large number of short extended primers . however , the main power of the method lies in the fact that the sequences of the labeled oligonucleotides hybridized to the array are a priori known ( up to the identity of extending nucleotides ) . while genotyping with sbe / sbh assays uses general principles similar to those of the pea assays proposed in , there are also significant differences . a major advantage of sbe / sbh is the much shorter length of extended primers compared to that of pcr amplicons used in pea . a second advantage is that _ all _ probes hybridizing to an extended primer are informative in sbe / sbh assays , regardless of array probe length ( in contrast , only probes hybridizing with a substring containing the snp site are informative in pea assays ) . as shown by the experimental results in section [ sec.results ] these advantages translate into an increase by orders of magnitude in multiplexing rate compared to the results reported in . we further note that pea's effectiveness crucially depends on the ability to amplify very short ( preferably 40bp or less ) genomic fragments spanning the snp loci of interest . this limits the achievable degree of multiplexing in pcr amplification , making pcr amplification the main bottleneck for pea assays . full flexibility in picking pcr primers is preserved in sbe / sbh assays . the rest of the paper is organized as follows . in section [ sec.formulations ] we formalize two problems that arise in genotyping large sets of snps using sbe / sbh assays : the problem of partitioning a set of snps into the minimum number of `` decodable '' subsets , i.e. , subsets of snps that can be unambiguously genotyped using a single sbe / sbh assay , and that of finding a maximum decodable subset of a given set of snps . we also establish hardness results for the latter problem . in section [ sec.algos ] we propose several efficient heuristics . finally , in section [ sec.results ] we present experimental results on both randomly generated datasets and instances extracted from the ncbi dbsnp database , exploring achievable tradeoffs between the type / number of array probes and primer length on one hand and number of snps that can be assayed per array on the other . our results suggest that the sbe / sbh architecture provides a flexible and cost - effective alternative to genotyping assays currently used in the industry , enabling genotyping of up to hundreds of thousands of user - selected snps per assay . a set of snp loci can be unambiguously genotyped by sbe / sbh if every combination of snp genotypes yields a different hybridization pattern ( defined as the vector of dye colors observed at each array probe ) . to formalize the requirements of unambiguous genotyping , let us first consider a simplified sbe / sbh assay consisting of four parallel _ single - color _ sbe / sbh reactions , one for each possible snp allele .
under this scenario ,only one type of dideoxynucleotide is added to each sbe reaction , corresponding to the complement of the tested snp allele .therefore , a primer is extended in such a reaction if the tested allele is present at the snp locus probed by the primer , and is left un - extended otherwise .let be the set of primers used in a single - color sbe / sbh reaction involving dideoxynucleotide , c , g , t . from the resulting hybridization pattern we must be able to infer for every whether or not was extended by .the extension of by will result in a fluorescent signal at all array probes that hybridize with .however , some of these probes can give a fluorescent signal even when is not extended by , due to hybridization to other extended primers .since in the worst case _ all _ other primers are extended , it must be the case that at least one of the probes that hybridize to does not hybridize to any other extended primer .formally , let be the set of array probes .for every string , let the _ spectrum of in _ , denoted , be the set of probes of that hybridize with .under the assumption of perfect hybridization , consists of those probes of that are watson - crick complements of substrings of .then , a set of primers is said to be _decodable _ with respect to extension if and only if , for every , decoding constraints ( [ 1-color - weak ] ) can be directly extended to 4-color sbe / sbh experiments , in which each type of extending base is labeled by a different fluorescent dye . as before ,let be the set of primers , and , for each primer , let be the set of possible extensions of , i.e. , watson - crick complements of corresponding snp alleles .if we assume that any combination of dyes can be detected at an array probe location , unambiguous decoding is guaranteed if , for every and every extending nucleotide , in the following , we refine ( [ 4-color - weak ] ) to improve practical reliability of sbe / sbh assays .more precisely , we impose additional constraints on the set of probes considered to be _ informative _ for each snp allele .first , to enable reliable genotyping of genomic samples that contain snp alleles at very different concentrations ( as a result of uneven efficiency in the pcr amplification step or of pooling dna from different individuals ) , we require that a probe that is informative for a certain snp locus must not hybridize to primers corresponding to different snp loci , _ regardless of their extension_. second , since recent studies by naef et al . suggest that fluorescent dyes can significantly interfere with oligonucleotide hybridization on solid support , possibly destabilizing hybridization to a complementary probe on the array , in this paper we use a conservative approach and require that each probe that is informative for a certain snp allele must hybridize to a strict substring of the corresponding primer . on the other hand ,informative probes are still required not to hybridize with any other extended primer , even if such hybridizations involve fluorescently labeled nucleotides .finally , we introduce a _ decoding redundancy _ parameter , and require that each snp have at least informative probes , i.e. 
, probes that hybridize to the corresponding primer but do not hybridize to any other extended primer . such a redundancy constraint facilitates reliable genotype calling in the presence of hybridization errors . clearly , the larger the value of , the more hybridization errors can be tolerated . if a simple majority voting scheme is used for making allele calls , the assay can tolerate up to hybridization errors involving the informative probes of each snp . furthermore , since the informative probes of a snp are required to hybridize _ exclusively _ with the primer corresponding to the snp , the redundancy requirement provides a powerful mechanism for detecting and gauging the extent of hybridization errors . indeed , each unintended hybridization at an informative probe for a bi - allelic snp has a dye complementary to one of the snp alleles with probability of only 1/2 , and the probability that such errors pass undetected decreases exponentially in . the refined set of constraints is captured by the following definition , where , for every primer and set of extensions , we let [ def.strong-primer ] a set of primers is said to be _ strongly -decodable _ with respect to extension sets , , if and only if , for every , note that testing whether or not a given set of primers is strongly -decodable can be easily accomplished in time linear in the total length of the primers ( a sketch of this test is given below ) . genotyping a large set of snps will , in general , require more than one sbe / sbh assay . this raises the problem of partitioning a given set of snps into the smallest number of subsets that can each be genotyped using a single sbe / sbh assay . for each snp locus there are typically two different primers that can be used for genotyping . as shown in for the case of snp genotyping using tag arrays , exploiting this degree of freedom significantly increases achievable multiplexing rates . therefore , we next extend our definitions to capture this degree of freedom . let be the _ pool of primers _ that can be used to genotype the snp at locus . similarly to definition [ def.strong-primer ] , we have : [ def.strong-pool ] a set of primer pools is said to be _ strongly -decodable _ if and only if there is a primer in each pool such that is strongly -decodable with respect to the respective extension sets , . primers above are called the _ representative primers _ of pools , respectively . the snp partitioning problem can then be formulated as follows : * minimum pool partitioning problem ( mppp ) : * _ given primer pools , associated extension sets , , probe set , and redundancy , find a partitioning of into the minimum number of strongly -decodable subsets . _ a natural strategy for solving mppp , similar to the well - known greedy algorithm for the set cover problem , is to find a maximum strongly -decodable subset of pools , remove it from , and then repeat the procedure until no more pools are left in . this greedy strategy for solving mppp has been shown to empirically outperform other algorithms for solving the similar partitioning problem for pea assays . in the case of sbe / sbh , the optimization involved in the main step of the greedy strategy is formalized as follows : * maximum -decodable pool subset problem ( mdpsp ) : * _ given primer pools , associated extension sets , , probe set , and redundancy , find a strongly -decodable subset of maximum size . in addition , for each pool , find its representative primer . _
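before turning to the hardness of mdpsp , the linear - time decodability test mentioned above can be sketched as follows . the sketch works under the perfect - hybridization model and identifies each array probe with the -mer it complements , so that `` probe hybridizes with primer '' becomes `` -mer occurs as a substring '' ; the primer sequences and parameter values in the example are made up for illustration :

```python
from collections import Counter

# sketch of the strong r-decodability test (perfect hybridization,
# all k-mer probe set).  probes are identified with the k-mers they
# complement, so spectra become sets of substrings.
def kmers(s, k):
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def strongly_decodable(primers, ext_sets, k, r):
    # count, for each k-mer, how many distinct extended primers contain it
    owner = Counter()
    for p, exts in zip(primers, ext_sets):
        seen = set()
        for e in exts:
            seen |= kmers(p + e, k)
        for m in seen:
            owner[m] += 1
    for p in primers:
        # informative probes: substrings of the un-extended primer that
        # occur in no *other* extended primer (count 1 = own primer only)
        informative = {m for m in kmers(p, k) if owner[m] == 1}
        if len(informative) < r:
            return False
    return True

primers = ["ACGTACGGTT", "TTGCAACGTA"]          # hypothetical primers
print(strongly_decodable(primers, [{"A", "C"}, {"G", "T"}], k=4, r=1))
```

the total work is proportional to the total length of the extended primers , matching the linear - time claim up to hashing .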
unfortunately , as shown in the next theorem , mdpsp is np - hard even for the case when the redundancy parameter is 1 and each pool has exactly one primer . [ theorem.mdpsphard ] mdpsp is np - hard , even when restricted to instances with and for every . we will use a reduction from the _ maximum induced matching _ problem in bipartite graphs , which is defined as follows : * maximum induced matching ( mim ) problem in bipartite graphs : * _ given a bipartite graph , find maximum size subsets , , with such that the subgraph of induced by is a matching . _ the mim problem in bipartite graphs is known to be np - hard even for graphs with maximum degree 3 . let be such a bipartite graph with maximum degree 3 . without loss of generality we may assume that every vertex in has degree at least 1 . we will denote by the _ neighborhood _ of vertex , i.e. , the set of vertices adjacent to in . we construct an instance of mdpsp as follows : let and . for every add to a distinct probe , t ; note that this can be done since , t by our choice of . for every , with neighborhood , we construct a primer and set . we use a similar construction for vertices with only 1 or 2 neighbors . note that in each case the pool consists of a single primer of length at most . for each constructed primer , the set of possible extensions is defined as , c . since the probes of contain only a s and t s , for every primer , , let , , , be subsets of vertices such that induces a matching in . let . for every , exactly one of 's neighbors , denoted , appears in , because induces a matching . furthermore , for each , , and therefore . thus , for every , which means that is a strongly 1-decodable subset of pools of the same size as the induced matching of . conversely , let be a strongly 1-decodable subset of , and let . since is 1-decodable , for every primer with , there must exist a probe such that and for every . because , it follows that every vertex has a neighbor that is not a neighbor of any other . let be such a neighbor ( pick arbitrarily if more than one vertex in satisfies the above property ) , and let . it is clear that induce a matching of size in . thus , for every integer , there is a one - to - one correspondence between induced matchings of size in and strongly 1-decodable subsets of pools in the constructed instance of mdpsp , and np - hardness of mdpsp follows . the reduction in the proof of theorem [ theorem.mdpsphard ] preserves the size of the optimal solution , and therefore any hardness of approximation result for the mim in bipartite graphs will also hold for mdpsp , even when restricted to instances with and for every . since duckworth et al .
proved that it is np - hard to approximate mim in bipartite graphs with maximum degree 3 within a factor of 6600/6659 , we get : [ corol.apx-hard ] it is np - hard to approximate mdpsp within a factor of 6600/6659 , even when restricted to instances with and for every . in this section we describe three heuristic approaches to mdpsp . the first one is a naive greedy algorithm that sequentially evaluates the primers in the given pools in an arbitrary order . the algorithm picks a primer to be the representative of pool if , together with the representatives already picked , it satisfies condition ( [ 4-color - strong ] ) . the pseudocode of this algorithm , which we refer to as sequential greedy , is given in figure [ fig.seq ] . the next two algorithms are inspired by the min - greedy algorithm in , which approximates mim in -regular graphs within a factor of . for the mim problem , the min - greedy algorithm picks at each step a vertex of minimum degree and a vertex , which is a minimum degree neighbor of . all the neighbors of and are deleted and the edge is added to the induced matching . the algorithm stops when the graph becomes empty . each instance of mdpsp can be represented as a bipartite _ hybridization graph _ , with the left side containing all primers in the given pools and the right side containing the array probes , i.e. , . there is an edge between primer and probe iff . as discussed in section [ sec.formulations ] , we need to distinguish between the hybridizations that involve fluorescently labeled nucleotides and those that do not . thus , for every primer , we let and . similarly , for each probe , we let and . we considered two versions of the min - greedy algorithm when run on the bipartite hybridization graph , depending on the side from which the minimum degree vertex is picked . in the first version , referred to as minprimergreedy , we pick first a minimum degree node from the primers side , while in the second version , referred to as minprobegreedy , we pick first a minimum degree node from the probes side . thus , minprimergreedy picks at each step a minimum degree primer and pairs it with a minimum degree probe . minprobegreedy selects at each step a minimum degree probe and pairs it with a minimum degree primer in . in both algorithms , all neighbors of and and their incident edges are removed from . also , at each step , the algorithms remove all vertices , for which . these deletions ensure that the primers selected at each step satisfy condition ( [ 4-color - strong ] ) . both algorithms stop when the graph becomes empty . as described so far , the minprimergreedy and minprobegreedy algorithms work when each pool contains only one primer and when the redundancy is 1 . we extended the two variants to handle pools of size greater than 1 by simply removing from the graph all primers when picking primer from pool . if the redundancy is greater than 1 , then whenever we pick a primer , we also pick its probe neighbors from with the smallest degrees ( breaking ties arbitrarily ) . the primer neighbors of all these probes will then be deleted from the graph . moreover , the algorithm maintains the invariant that for every primer and for every probe by removing primers / probes for which the degree decreases below these bounds . figures [ fig.min-primer ] and [ fig.min-probe ] give the pseudocode for the minprimergreedy , respectively the minprobegreedy algorithms .
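as a companion to the pseudocode , here is a minimal rendering of the sequential greedy heuristic in the same setting , reusing the strongly_decodable check sketched earlier ; re - testing the whole selection at every step is a simplification for clarity , not the paper's optimized implementation , and it assumes one extension set per pool :

```python
def sequential_greedy(pools, ext_sets, k, r):
    """pools: list of primer lists (e.g., the two strands' primers)."""
    chosen, chosen_ext, selected = [], [], []
    for i, pool in enumerate(pools):
        for primer in pool:
            # keep the primer only if the whole selection stays decodable
            if strongly_decodable(chosen + [primer],
                                  chosen_ext + [ext_sets[i]], k, r):
                chosen.append(primer)
                chosen_ext.append(ext_sets[i])
                selected.append(i)
                break                      # pool i gets this representative
    return selected, chosen
```

re - testing the full selection guards against a later primer destroying the informative probes of an earlier one , which is exactly the interaction the min - greedy variants handle through vertex deletions .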
for the sake of clarity , they use two subroutines for removing a primer vertex , respectively a probe vertex , which are described in figures [ fig.remove-primer ] and [ fig.remove-probe ] . algorithms minprimergreedy and minprobegreedy can be implemented efficiently using a fibonacci heap for maintaining the degrees of primers , respectively of probes . let be the total number of primers in the pools , be the number of probes in , and be the size of the -decodable set returned by the algorithm . since each primer has bounded degree , the sorting of probe degrees requires total time . the total number of edges in the hybridization graph is . by using a fibonacci heap , finding a minimum degree primer ( probe ) can be done in ( respectively ) and each primer degree update can be done in amortized time . thus , the total runtime for the minprimergreedy algorithm is , and the total runtime for the minprobegreedy algorithm is . we considered two types of data sets : * randomly generated datasets containing between 1,000 to 200,000 pools with 1 or 2 primers of length between 10 and 30 . * two - primer pools representing over 9 million reference snps in human chromosomes 1 - 22 , x , and y extracted from the ncbi dbsnp database build 125 . we disregarded reference snps for which available flanking sequence was insufficient for determining two non - degenerate primers of desired length ( due , e.g. , to the presence of degenerate bases near the snp locus ) . we used two types of array probe sets . first , we used probe sets containing all -mers , for between 8 and 10 . all -mer arrays are well studied in the context of sequencing by hybridization . however , a major drawback of all -mer arrays is that the -mers have a wide range of melting temperatures , making it difficult to ensure reliable hybridization results . for short oligonucleotides , a good approximation of the melting temperature is obtained using the simple 2 - 4 rule of wallace , according to which the melting temperature of a probe is approximately twice the number of a and t bases , plus four times the number of c and g bases . as in , we define the _ weight _ of a dna string to be the number of a and t bases plus twice the number of c and g bases . for a given integer , a dna string is called a -token if it has a weight or more and all its proper suffixes have weight strictly less than ( a short enumeration sketch is given below ) . since the weight of a -token is either or , it follows that the 2 - 4 rule computed melting temperature of all -tokens varies in a range of about . in our experiments we used probe sets consisting of all -tokens , with varying between 11 and 13 . the considered values of and were picked such that the resulting number of probes is representative of current array manufacturing technologies : there are roughly 65,000 8-mers , 262,000 9-mers , 1 million 10-mers , 86,000 11-tokens , 236,000 12-tokens , and 645,000 13-tokens . the smaller probe sets can be spotted using current oligonucleotide printing robots , while the larger probe sets can be synthesized in situ using photolithographic techniques . in a first set of experiments on the randomly generated datasets we compared the three mdpsp algorithms on instances with primer length set to 20 , which is the typical length used , e.g. , in genotyping using tag arrays .
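the -token definition above can be made concrete with a short brute - force enumeration : since suffix weights are nested , only the longest proper suffix needs checking . the value c = 6 below is a small illustrative choice ( brute force would be slow for the values 11 - 13 used in the experiments ) :

```python
from itertools import product

# 2-4 rule weight: a/t weigh 1, c/g weigh 2 (melting temp ~ 2 * weight)
W = {"A": 1, "T": 1, "C": 2, "G": 2}
weight = lambda s: sum(W[b] for b in s)

def count_tokens(c):
    """count c-tokens: weight >= c, every proper suffix of weight < c."""
    total = 0
    for length in range(1, c + 2):        # weight >= length, so length <= c+1
        for tup in product("ACGT", repeat=length):
            s = "".join(tup)
            # nested suffix weights: checking s[1:] suffices
            if weight(s) >= c and weight(s[1:]) < c:
                total += 1
    return total

print(count_tokens(6))
```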
in these experiments the set of possible extensions was considered to be , c , t , g for all primers . such a conservative choice gives an estimate of multiplexing rates achievable by sbe / sbh assays in more demanding genomic analyses such as microorganism identification by dna barcoding , in which a primer ( typically referred to as a _ distinguisher _ in this context ) may be extended by any of the dna bases in different microorganisms . the results of these experiments for all -mer and all -token probe sets are presented in tables [ table.set1.k ] and [ table.set1.c ] , respectively . the results show that using the flexibility of picking primers from either strand of the genomic sequence yields an improvement of up to 10% in the number of -decodable pools . the minprobegreedy algorithm typically produces better results compared to the minprimergreedy variant . on the other hand , neither sequential greedy nor minprobegreedy dominates the other algorithms over the whole range of instance parameters : sequential greedy generally gives better results for -mer experiments with high redundancy values , while minprobegreedy generally gives better results for -mer experiments with a large number of pools and low redundancy , and for -token experiments . in the second set of experiments we ran the three mdpsp algorithms on datasets with the same primer length of 20 , pool size of 2 , and with the number of possible extensions of each primer set to 4 as in dna - barcoding applications , and to 2 as in snp genotyping . the results for all -mer and all -token probe sets are given in tables [ table.set2.k ] and [ table.set2.c ] . the relative performance of the algorithms is similar to that observed in the first set of experiments . as expected , taking into account the reduced number of possible extensions increases the size of computed decodable pool subsets , often by more than 5% . in the third set of experiments we explored the degree of freedom given by the primer length . for any fixed array probe set and redundancy requirement , we need a minimum primer length to be able to satisfy constraints ( [ 4-color - strong ] ) . increasing the primer length beyond this minimum primer length is often beneficial , as it increases the number of array probes that hybridize with the primer . however , if primer length increases too much , an increasing number of these probes become non - specific , and the multiplexing rate starts to decline . figure [ fig.length ] gives the tradeoff between primer length and the size of the strongly -decodable pool subsets computed by the three mdpsp algorithms for pools with 2 primers , 2 possible extensions per primer and all 10-mers , respectively all 13-tokens , as array probes . we notice that the optimal primer length increases with the redundancy parameter . [ table.set1.k ] size of the strongly -decodable pool subset computed by the three mdpsp algorithms for primer length 20 and set of possible extensions , c , t , g , with redundancy and all -mer probe sets for ( averages over 10 test cases ) . a. ben - dor , t. hartman , b. schwikowski , r. sharan , and z. yakhini . towards optimally multiplexed applications of universal dna tag systems . in _ proc . 7th annual international conference on research in computational molecular biology _ , pages 48 - 56 , 2003 . heath and f.p . preparata . enhanced sequence reconstruction with dna microarray application .
in _ proc . 7th annual international conference on computing and combinatorics ( cocoon ) _ , pages 64 - 74 , 2001 . hirschhorn , p. sklar , k. lindblad - toh , y .- m . lim , m. ruiz - gutierrez , s. bolk , b. langhorst , s. schaffner , e. winchester , and e. lander . : an array - based method for efficient single - nucleotide polymorphism genotyping . , 97(22):12164 - 12169 , 2000 . konwar , i.i . mandoiu , a.c . russell , and a.a . improved algorithms for multiplex pcr primer set selection with amplification length constraints . in y .- phoebe chen and l. wong , editors , _ proc . 3rd asia - pacific bioinformatics conference ( apbc ) _ , pages 41 - 50 , london , 2005 . imperial college press . n. tonisson , a. kurg , e. lohmussaar , and a. metspalu . arrayed primer extension on the dna chip - method and application . in mark schena , editor , _ microarray biochip technology _ , pages 247 - 263 . eaton publishing , 2000 . wallace , j. shaffer , r.f . murphy , j. bonner , t. hirose , and k. itakura . hybridization of synthetic oligodeoxyribonucleotides to phi x 174 dna : the effect of single base pair mismatch . , 6(11):6353 - 6357 , 1979 .
despite much progress over the past decade , current single nucleotide polymorphism ( snp ) genotyping technologies still offer an insufficient degree of multiplexing when required to handle user - selected sets of snps . in this paper we propose a new genotyping assay architecture combining multiplexed solution - phase single - base extension ( sbe ) reactions with sequencing by hybridization ( sbh ) using universal dna arrays such as all -mer arrays . in addition to pcr amplification of genomic dna , snp genotyping using sbe / sbh assays involves the following steps : ( 1 ) synthesizing primers complementing the genomic sequence immediately preceding snps of interest ; ( 2 ) hybridizing these primers with the genomic dna ; ( 3 ) extending each primer by a single base using polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent dyes ; and finally ( 4 ) hybridizing extended primers to a universal dna array and determining the identity of the bases that extend each primer by hybridization pattern analysis . under the assumption of perfect hybridization , unambiguous genotyping of a set of snps requires selecting primers upstream of the snps such that each primer hybridizes to at least one array probe that hybridizes to no other primer that can be extended by a common base . our contributions include a study of multiplexing algorithms for sbe / sbh genotyping assays and preliminary experimental results showing the achievable tradeoffs between the number of array probes and primer length on one hand and the number of snps that can be assayed simultaneously on the other . we prove that the problem of selecting a maximum size subset of snps that can be unambiguously genotyped in a single sbe / sbh assay is np - hard , and propose efficient heuristics with good practical performance . our heuristics take into account the freedom of selecting primers from both strands of the genomic dna as well as the presence of disjoint allele sets among genotyped snps . in addition , our heuristics can enforce user - specified redundancy constraints facilitating reliable genotyping in the presence of hybridization errors . simulation results on datasets both randomly generated and extracted from the ncbi dbsnp database suggest that the sbe / sbh architecture provides a flexible and cost - effective alternative to genotyping assays currently used in the industry , enabling genotyping of up to hundreds of thousands of user - specified snps per assay .
it is known that the spatial derivative of the solution to the ( backward ) kolmogorov equation can be represented as an expectation of a functional of the solution of an sde with some weight , namely the so - called bismut - elworthy - li ( bel ) formula as shown in and extended in . in , the authors use techniques from malliavin calculus to prove the bel formula and employ it for the computation of sensitivities of financial options , also known as _ greeks _ . in many applications , it is very natural to expect that the coefficients of a stochastic differential equation ( sde ) may depend on properties of the law of the solution , such as dependence on its moments . here , we want to extend the formula to mean - field type sdes following the essence of and show that such a generalisation is actually non - trivial , requiring more regularity of the solution in the sense of malliavin . first , we give a relationship between the malliavin derivative and the spatial derivative of the solution with respect to the initial condition . already here we see that such a generalisation involves an extra factor which is no longer adapted , thus requiring more ( malliavin ) regularity on the solution , which is not immediate . fortunately , if and are lipschitz continuous in space , then the solution is twice malliavin differentiable , as is shown in , and hence a formula using the skorokhod integral may be expected . using such a relation one can find the bel formula in this context . some merely illustrative examples are provided in order to give better insight into the effect of mean - field sdes on the bel formula . in the last examples we carry out some simulations to show that the malliavin method is more efficient compared to a finite difference method , especially when the function involved is discontinuous . the paper is organised as follows : in section [ frame ] we collect some summarised basic facts on malliavin calculus needed for the derivation of the main results of the paper . in section [ sectionbel ] we include all intermediate steps towards the main result , which is the bismut - elworthy - li formula in the context of mean - field sdes . finally , section [ sectionappl ] is devoted to providing some illustrative examples of this generalised bismut - elworthy - li formula with simulations . the findings are similar to those in : the use of bismut - elworthy - li's formula serves as a much more efficient method to compute sensitivities with respect to initial data , especially when the function involved in the expectation is highly irregular . * notations : * let denote the non - negative real numbers . denote by the euclidean norm in , . given a banach space , denote by its associated norm . let integers and be the space of times malliavin differentiable random variables with all -moments . denote by , the malliavin derivative as introduced in ( * ? ? ? * chapter 1 , section 1.2.1 ) and its dual operator ( skorokhod integral ) . denote by the domain of ( skorokhod integrable processes ) . denote the trace of a matrix by and by its transpose . for a ( weakly ) differentiable function , , denote by , respectively by , ( weak ) differentiation with respect to the first ( space ) variable , respectively the second ( space ) variable . our main results centrally rely on tools from malliavin calculus . we here provide a concise introduction to the main concepts in this area . for deeper information on malliavin calculus the reader is referred to , e.g.
.let \} ] is the -augmented natural filtration .denote by the set of simple random variables in the form ) , \ \f\in c_0^{\infty } ( { \mathbb r}^n).\ ] ] the malliavin derivative operator acting on such simple random variables is the process \} ] defined by the following norm on : ))}=e[|f|^2]^{1/2}+ e\left [ \int_0^t |d_t f|^2 dt\right]^{1/2}.\end{aligned}\ ] ] we denote by the closure of the family of simple random variables with respect to the norm given in and we will refer to this space as the space of malliavin differentiable random variables in with malliavin derivative belonging to . in the derivation of the probabilistic representation for the delta , the following chain rule for the malliavin derivative will be essential : [ chainrule ] let be continuously differentiable with bounded partial derivatives .further , suppose that is a random vector whose components are in .then and .\ ] ] the malliavin derivative operator ) ] such that for all we have \leq c \|f\|_{1,2},\ ] ] where is some constant depending on . for a stochastic process ( not necessarily adapted to ) we denote by the action of on .the above expression ( [ skorokhod ] ) is known as the skorokhod integral of and it is an anticipative stochastic integral .it turns out that all -adapted processes in ) ] and < \infty.\ ] ] then is skorokhod integrable and = e\left [ \int_0^t u(t)^2 dt+ \int_0^t \int_0^t d_t u(s ) d_s u(t ) dsdr\right].\ ] ] the dual relation between the malliavin derivative and the skorokhod integral implies the following important formula : [ duality ] let be -measurable and . then = e\left [ \int_0^t u(t ) d_t f dt\right].\end{aligned}\ ] ] the following is the corresponding integration by parts formula for the skorokhod integral .see e.g. ( * ? ? ?* theorem 3.15 . ) . [ ibp ]let and such that . then object of study is a _mean - field _ type _ stochastic differential equation _ ( sde ) of the form \\ \rho_t : = & \ , e[\varphi(x_t ) ] , \quad \pi_t : = e[\psi(x_t ) ] \end{split}\end{aligned}\ ] ] where , , \times { \mathbb r}^d \times { \mathbb r}^d \rightarrow { \mathbb r}^d ] , , , are measurable functions and \} ] .we will usually consider the solution as a function of and hence write to stress this fact .otherwise , we will just write . 
moreover , we will assume the following conditions as in : the functions and , are continuously differentiable with bounded lipschitz derivatives uniformly with respect to ] be the unique global strong solution of ( [ sde ] ) . then the function is continuously differentiable , -a.s . see . the next proposition shows that the first variation process is invertible for every . [ detyt ] let \} ] . we want to show that < \infty\ ] ] for every integer . indeed , by virtue of the ( stochastic ) liouville formula , which can be found in , one has observe that since the processes are in ) ] . thus , using the cauchy - schwarz inequality we have for every \leq e\left[\exp \left\{(2p^2-p)\sum_{k=1}^m \int_0^t { \mathrm{tr}}b_u^k dw_u^k-2p \int_0^t { \mathrm{tr}}a_u du\right\ } \right].\end{aligned}\ ] ] in particular , the claim is reduced to showing that } \left|e\left[\exp\left\{\lambda \int_0^t { \mathrm{tr}}a_u du\right\}\right]\right| + \sup_{t\in [ 0,t ] } \left|e\left[\exp\left\{\lambda \int_0^t \sum_{k=1}^m ( { \mathrm{tr}}b_u^k)^2 du\right\}\right]\right|<\infty\ ] ] for every , which clearly holds since and , are uniformly bounded . the following statement is one of the main observations for the derivation of the bismut - elworthy - li formula in the mean - field context . it can be seen as a generalisation of the well - known relation between the first variation process in the non mean - field context and the malliavin derivative , see e.g. ( * ? ? ? * ch.2 , sec.2.3.1 ) . [ mallsob ] let \} ] , one has the following relationship between the spatial derivative and the malliavin derivative of where denotes the right pseudo - inverse of , \} ] , \} ] , \} ] is the fundamental matrix satisfying and .\ ] ] by the well - known classical relation , see e.g. ( * ? ? ? * chapter 2 , section 2.3.1 ) , it is true that , where denotes the right pseudo - inverse of , and hence the relation follows . for the relation to hold in the mean - field setting one also needs the property that defines a stochastic semiflow . in the mean - field case we point out that the fact that , , and are continuously differentiable with bounded lipschitz derivatives implies this fact by virtue of . it is shown in that sde ( [ sde ] ) is twice malliavin differentiable when the vector field does not depend on the law of and one has additive noise . nevertheless , using the same method one can prove the same result since the dependence on ] does not bring stochasticity to the equation .
in the sense that the malliavin derivative of for every fixed ] , such that the function \times { \mathbb r}^d\times { \mathbb r}^d \times { \mathbb r}^d \rightarrow { \mathbb r}^d ] . the reason for the above condition is to use itô's formula on the process so that satisfies an sde with additive noise for which the results from can be applied . although it might seem that the class of such processes is small , it covers a wide variety of models which are relevant in applications , such as for instance geometric - type models . [ malldiff ] let \} ] . see . [ skorokhodi ] let ] we have together with proposition [ detyt ] that for every ] be the unique global strong solution of ( [ sde ] ) . let be a measurable function such that . define the function .\ ] ] then ^\ast\delta w_s\right]^\ast\end{aligned}\ ] ] where denotes transposition , is the fundamental matrix obtained in theorem [ mallsob ] and here \rightarrow { \mathbb r} ] we have as a consequence , \\ = & \ , e\left [ \phi ' ( x_t^{x } ) \int_0^t a(s ) d_s x_t \sigma^{-1}(s , x_s^x,\pi_s^x)y_s u(t ) ds\right ] \\= & \ , e\left [ \int_0^t a(s ) d_s \phi(x_t ) \sigma^{-1}(s , x_s^x,\pi_s^x)y_s u(t ) ds\right]\\ = & \ , e\left [ \phi(x_t ) \int_0^t a(s ) \left[\sigma^{-1}(s , x_s^x,\pi_s^x)y_s u(t)\right]^\ast \delta b_s\right]^\ast\end{aligned}\ ] ] where we have used relation ( [ mallintrel ] ) , the chain rule for the malliavin derivative ( backwards ) and the duality formula for the malliavin derivative , which is justified by corollary [ skorokhodii ] . * step 2 : * assume is bounded and continuous , in particular , . we can approximate by a sequence of smooth functions with compact support such that a.e . as . define ^\ast\delta w_s\right]^\ast.\ ] ] to make reading clearer we introduce the notation ^{1/2} ] and are well - defined since and using the cauchy - schwarz inequality we have ds + \int_0^t \int_0^t d_s \xi_r d_r \xi_s dr ds\right]^{1/2}\ ] ] where we used itô's isometry property for skorokhod integrals , see theorem [ duality ] or e.g. ( * ? ? ? * theorem 6.17 . ) . observe that the first term is bounded since and are uniformly bounded and has integrable trajectories . the second term is bounded since is malliavin differentiable for every ] by virtue of proposition [ detyt ] in connection with proposition [ malldiff ] as for , due to corollary [ skorokhodii ] . now , we approximate by ] . then by the cauchy - schwarz inequality and itô's isometry we know ^{1/2},\ ] ] for any compactum and some finite constant . finally , observe that clearly one has ^{1/2 } \xrightarrow{n\to \infty } 0\ ] ] thus proving the result . in this section we wish to give a rather simple but illustrative example of how the dependence on the expectation of the solution may give rise to more complicated terms when deriving the bismut - elworthy - li formula . in one of the examples we adopt the context of finance , where the formula has broad use for the computation of the so - called delta sensitivities , which , in short , measure the sensitivity of the prices of contracts with respect to the initial value of the price of the stock under consideration .
we will consider the price of an option written on a stock whose dynamics depend on the expectation of the price process . then we provide two numerical examples in order to demonstrate that the bismut - elworthy - li formula , or the so - called malliavin method for computing the delta , is numerically more efficient than the usual finite difference method even when the function is discontinuous . let \} ] , with be the risk - less asset and a pay - off function . then the price of a european option at current time with maturity ( under the risk - neutral valuation approach ) is given by \ ] ] where is the risk - neutral measure , i.e. \ ] ] where \ ] ] is the market price of risk process . it follows that obtained as the solution of a riccati equation . also , we have and . then the -sensitivity of an option written on is given by .\ ] ] let us find a simpler expression for the stochastic integral . using the integration by parts formula for the skorokhod integral , see theorem [ ibp ] , we find that and hence , taking we find that under the risk - neutral measure , the -sensitivity is given by \ ] ] with malliavin weight finally , observe that if we ignore the dependence on ] for an irregular function . this special `` geometric - type '' case shows that whenever is deterministic then the delta is a rescaled version of the classical delta , and they coincide when is constant , indeed . let , for instance , for some fixed , also known as a _ european call option _ in the context of finance . then we use a monte carlo method to compute the above expression and compare it to the following finite difference scheme : \frac{e[\phi(x_t^{x+h})]-e[\phi(x_t^{x})]}{h } , \quad h\approx 0.\ ] ] , , , and with to the right and to the left . on bottom , parameters set to , , , and with to the right and to the left ] in the upper left figure , for the finite difference method , and the two methods seemingly give similarly accurate results , although the malliavin method is more efficient in number of iterations . if one wishes to decrease in order to gain precision , we can see how the finite difference method becomes unstable ( upper right and lower right figures ) . in conclusion , the integration by parts formula seems to be a much more efficient tool for the computation of sensitivities for mean - field sdes , at least in this setting . , , , and with to the right and to the left . on bottom , parameters set to , , , and with to the right and to the left ] the conclusions here are clear . the regularity of the function plays an important role . we see that the bias in the finite difference method seems high and it becomes unstable when decreasing the values of . on the contrary , the bismut - elworthy - li formula gives a better approximation of the sensitivity , even when the function is discontinuous . d. baños , t. nilssen , _ malliavin and flow regularity of sdes . application to the study of densities and the stochastic transport equation _ , ( to appear in stochastics : an international journal of probability and stochastic processes ) .
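to complement the simulation study above , the following is a self - contained sketch of the two estimators in the classical ( non mean - field ) geometric brownian motion benchmark , for which the malliavin weight reduces to w_t / ( x sigma t ) ; as noted above , the geometric - type mean - field case is a rescaling of this classical delta . all parameter values and the digital pay - off are illustrative choices :

```python
import numpy as np

rng = np.random.default_rng(1)
x0, strike, r, sigma, T, n, h = 100.0, 100.0, 0.01, 0.2, 1.0, 10**6, 0.1

w = rng.normal(0.0, np.sqrt(T), n)                     # brownian motion at T
x_T = lambda x: x * np.exp((r - 0.5 * sigma**2) * T + sigma * w)
phi = lambda s: (s > strike).astype(float)             # discontinuous pay-off

# malliavin (bismut-elworthy-li) estimator, weight w / (x0 * sigma * T)
delta_bel = np.exp(-r * T) * np.mean(phi(x_T(x0)) * w / (x0 * sigma * T))
# forward finite difference with common random numbers
delta_fd = np.exp(-r * T) * np.mean(phi(x_T(x0 + h)) - phi(x_T(x0))) / h

print(f"bel delta {delta_bel:.5f}   finite-difference delta {delta_fd:.5f}")
```

for a discontinuous pay - off the variance of the finite difference estimator grows roughly like 1/h as h shrinks , which is the instability seen in the figures , while the variance of the weighted estimator does not depend on h at all .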
we generalise the so - called bismut - elworthy - li formula to a class of stochastic differential equations whose coefficients may depend on the law of the solution . we give some examples of where this formula can be applied in the context of finance and the computation of greeks , and provide a simple but rather illustrative simulation experiment showing that the use of the bismut - elworthy - li formula , also known as the malliavin method , is more efficient compared to the finite difference method .
modified gravity has become one of the most popular mechanisms to generate a late accelerated expansion in the universe without the need of introducing new fields . it was also one of the first consistent models for early inflation . during the past fifteen years or so , several specific models have been thoroughly analyzed in many scenarios , but only a few of them can survive the classical tests ( e.g. solar system , binary pulsar ) while predicting the correct accelerating expansion and , in general , a successful cosmological model , both at the background and at the perturbative level . therefore , it is still unclear to what extent this kind of alternative theory of gravity can recover _ all _ the successes of general relativity ( gr ) while making new testable predictions . as concerns black hole solutions , the situation of gravity can , in some sense , differ from gr , and in another sense be almost the same . the last statement is related to the content of section [ sec : theory ] about the existence of the same kind of vacuum black - hole solutions found in gr , while the former concerns the existence of hairy solutions , or lack thereof , that we analyze in all the rest of the paper . perhaps the first and simplest theorem concerning bh solutions in gr was birkhoff's theorem ( bt ) . roughly speaking , this theorem establishes that in vacuum all spherically symmetric ( ss ) spacetimes are also static , and those that are asymptotically flat ( af ) are represented by a one - parameter family of solutions , namely , the ubiquitous schwarzschild solution , where the parameter is interpreted as the ( adm ) _ mass _ of the spacetime ( see for a discussion ) . remarkably , when including an electric field , the ss solution can be extended so as to include the charge of the bh ; it is the well known two - parameter reissner nordström ( rn ) solution . in the af case , both the schwarzschild and the rn solution are solutions of einstein's field equations ( also termed _ ricci flat _ solutions ) . when including a cosmological constant , the schwarzschild and rn black holes become a two and three - parameter family respectively , and the solutions are asymptotically de sitter ( ads ) or asymptotically anti de sitter ( aads ) , depending on whether or , respectively . in the 1960s , motivated by the discovery of the kerr solution , several theorems ( notably , the _ uniqueness theorems _ ) were established for stationary and axisymmetric spacetimes both in vacuum and with an electromagnetic field ( see for details and reviews ) . one of the main consequences of those theorems is that in the af case the solutions of the einstein field equations under such symmetries are characterized _ only _ by three parameters : the mass , the charge , and the angular momentum of the bh . these solutions are known as the kerr newman family , which extends the schwarzschild and rn black holes to more general spacetimes : stationary and axisymmetric . due to the apparent simplicity of such solutions , wheeler established the so - called _ no hair conjecture _ , a statement that `` doomed '' all possible stationary afbh solutions to have no parameters other than those three ( and ) . this conjecture has been reinforced thereafter by the elaboration of several _ no - hair _ theorems ( nht's ) that forbid the existence of bh solutions with more parameters associated with other kinds of matter fields ( see for a review ) . among such theorems one can mention those that include several kinds of scalar fields .
eventually, this conjecture proved to be "false", for instance, within the einstein yang mills system and the einstein-scalar-field system with "exotic" potentials that can be negative, or when including rotation, as in the einstein-boson-field system. nonetheless, since most, if not all, of such _hairy_ solutions are unstable, the community (or at least part of it) considers those solutions as _weak_ counterexamples to wheeler's no-hair conjecture. therefore, it has been tantalizing to extend the conjecture in the following more precise, although still informal, statement: _the only stable stationary af bh's are within the kerr newman family_. the proposal of alternative theories of gravity as a possible solution to the dark-matter and dark-energy problems and to other theoretical problems (e.g. inflation, gravity renormalization) has motivated people to generalize several of the theorems and conjectures mentioned above, which pertain to gr, to the realm of other modified-gravity proposals. while it is out of the scope of the present paper to review all such attempts, we shall simply focus on metric $f(R)$ gravity. unless otherwise stated, by $f(R)$ gravity we mean a theory that departs from the gr function $f(R)=R$. as concerns this kind of theory, a large amount of analysis has been devoted to establishing an analogue of the bt for the ss situation. however, the reality is that no rigorous bt exists today in $f(R)$ gravity, as far as we are aware. in fact, if one such theorem were proved, it would certainly be restricted to some specific $f(R)$ models. moreover, the theorem should establish at least four things, upon fixing the boundary conditions (i.e. regularity and asymptotic conditions): 1) _staticity:_ the only spherically symmetric solutions in vacuum [i.e. without any matter field associated with the standard model of particle physics or any other field that is not associated with the $f(R)$ lagrangian] are necessarily static (i.e. the existence of a static killing field should be proved from the spherically symmetric assumptions); 2) _existence:_ the existence of an exact static spherically symmetric (sss) solution in vacuum; 3) _uniqueness:_ the sss solution found in point '2)' is the only solution in vacuum (or prove otherwise); 4) the conditions under which the solution in point '2)' matches or not the exterior solution of an sss extended body. so far, in vacuum only a few exact bh solutions exist in $f(R)$ gravity, and those that are genuinely af, ads or aads correspond simply to the same kind of solutions found in gr, where the ricci scalar is constant everywhere in the spacetime (with $R=0$, $R>0$ or $R<0$, respectively). it is unclear if other solutions exist with the same kind of asymptotics but with a ricci scalar varying in the domain of outer communication of the bh. we shall elaborate more about this point below to be more precise. furthermore, in the presence of matter (i.e. a star-like object), it is possible to find sss solutions where the ricci scalar can vary in space; however, those solutions are not exact, but given only numerically, and it is unclear if the exterior part (i.e. the vacuum part) of those solutions is the same solution found when matter is totally absent in the spacetime, if it exists at all, as happens in gr, where the exterior solution of extended objects under such symmetries is always given by the vacuum schwarzschild solution. now, despite the absence of such a bt, some nht's have been proved in this kind of theory.
in order to do so, people have resorted to the equivalence between a certain class of $f(R)$ models (notably, those where $f_R>0$ and $f_{RR}\neq 0$, where the subindex indicates differentiation) and scalar-tensor theories (stt). the point is that one performs a conformal transformation from the original _jordan frame_ to the so-called _einstein frame_, where the conformal metric appears to be coupled minimally to gravity and a new scalar field $\phi$ emerges, which is also coupled minimally to the conformal metric but endowed with an "exotic" potential $\mathscr{U}(\phi)$. thus, the available nht's constructed for the einstein-scalar-field system in gr can be applied to these theories as well (see section [sec:stt]), notably in vacuum, when the spacetime is af and the potential satisfies the condition $\mathscr{U}(\phi)\geq 0$. this proof is similar to bekenstein's, which assumes only _stationarity_ as opposed to _staticity_. we stress that the applicability of such nht's is possible because the non-minimal coupling between the scalar field and the matter fields that usually appears in the einstein frame obviously vanishes in the absence of the matter. the only caveat of this method is that the potential is not given a priori but is the result of the specific $f(R)$ model considered ab initio, and thus can be negative or even not well defined (i.e. it can be multivalued), which in turn can jeopardize the use of the nht's. consequently, the existing nht's in $f(R)$ gravity can reduce the kind of af sss bh solutions that are available in some specific models, but do not rule out completely the absence of _geometric hair_. in this context, by (geometric) _hairy_ solutions within $f(R)$ gravity we mean af sss bh solutions where the ricci scalar is not _trivial_ (i.e. constant), but rather a function that interpolates non-trivially between the horizon and spatial infinity. thus, when the condition $\mathscr{U}\geq 0$ fails and the nht's are not applicable, one can resort to a numerical analysis for evidence about the existence of such hair or the absence thereof. in this respect it is important to stress that regularity conditions have to be imposed at the inner boundary, namely, at the bh horizon, in order to prevent the presence of singularities there. in section [sec:regcond] and appendix [sec:regcond2] we obtain such regularity conditions, and then in sections [sec:expmodels] and [sec:numerical] we present analytical and numerical evidence, respectively, showing that hairy solutions are absent in several specific models proposed as dark-energy alternatives in cosmology. in particular, the models considered in section [sec:numerical] are precisely those for which the nht's cannot be applied, as the corresponding potential can be negative or is not even well defined. on the other hand, when such hairy solutions are absent, one may still find the trivial solution $R=R_1$, for which the field equations reduce to the einstein field equations with an effective cosmological constant and an effective gravitational constant, where $R_1$ is a solution of an algebraic equation involving $f$ and $f_R$. this includes the case $R_1=0$. therefore, in such circumstances, all the best known bh solutions found in gr exist also in $f(R)$ gravity, simply by replacing the usual cosmological constant by the effective one, and newton's gravitational constant by the effective one. in view of this, we shall argue in section [sec:theory] that such solutions are so trivial (i.e. trivial in the context of $f(R)$ gravity) that almost nothing new arises from them.
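a compact summary of that jordan-to-einstein-frame map, in conventions commonly used for metric $f(R)$ gravity (our normalizations, offered as a sketch rather than a quotation of the paper's):

$$\chi \equiv f_R\,,\qquad \tilde g_{ab} = \chi\, g_{ab}\,,\qquad
\phi = \sqrt{\tfrac{3}{2\kappa}}\,\ln\chi\,,\qquad
\mathscr{U}(\phi) = \left.\frac{R f_R - f}{2\kappa f_R^{\,2}}\right|_{R=R(\phi)}\,,$$

which is well defined only where $f_R>0$ and where $R(\chi)$ can be inverted, i.e. $f_{RR}\neq 0$, exactly the caveats stressed above.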
finally, we mention that some "non-trivial" exact sss bh solutions have been reported in the literature as a result of very ad hoc $f(R)$ models. notwithstanding, such solutions _cannot_ be considered as hairy solutions because they have unusual asymptotics, and therefore the corresponding "hairless" solution (including the ricci-flat solution) does not even exist with the same kind of asymptotics. we shall discuss one such solution in section [sec:exactsols]. the article is organized as follows: in section [sec:theory] we discuss in a general setting the conditions for the existence of several trivial bh solutions. in section [sec:sss] we focus on sss spacetimes and provide the corresponding differential equations to find bh solutions. we also discuss some exact solutions that will be used later to test a numerical code constructed to solve the equations. the boundary conditions appropriate to solve these equations in the presence of a bh are given in section [sec:sss] in the form of _regularity conditions_ at the horizon. no-hair theorems and the properties of $f(R)$ gravity formulated in the einstein frame are analyzed in section [sec:stt]. in that section we also provide strong numerical evidence about the absence of hair for several models when the nht's do not apply. our conclusions and final remarks are presented in section [sec:concl]. several appendices at the end of the article complement the ideas of the main sections. the general action for an $f(R)$ theory of gravity is given by
$$I[g_{ab},\boldsymbol{\psi}] = \int \frac{f(R)}{2\kappa}\,\sqrt{-g}\;d^4x \;+\; I_{\rm matt}[g_{ab},\boldsymbol{\psi}]\;,$$
where $\kappa\equiv 8\pi G$ (we use units where $c=1$, and later extend them so that $G=1$ as well), and $f(R)$ is a sufficiently smooth but otherwise a priori arbitrary function of the ricci scalar $R$. the first term corresponds to the modified gravity action, while the second is the usual action for the matter, where $\boldsymbol{\psi}$ represents schematically the matter fields. the field equation arising from the action ([f(r)]) under the metric approach is
$$f_R R_{ab} - \frac{1}{2}\,f\, g_{ab} - \left(\nabla_a\nabla_b - g_{ab}\,\Box\right) f_R = \kappa\, T_{ab}\;,$$
where $f_R$ stands for $\partial f/\partial R$ (we shall use similar notation for higher derivatives), $\Box$ is the covariant d'alembertian and $T_{ab}$ is the energy-momentum tensor of matter resulting from the variation of the matter action in ([f(r)]). it is straightforward, although a non-trivial result, to show that the conservation equation $\nabla^a T_{ab}=0$ holds also in this case (see appendix [sec:consemt] for a proof). in turn, this latter leads to the geodesic equation for free-fall particles. therefore, the weak-equivalence principle (for point test particles) is also incorporated in this theory. actually, metric $f(R)$ gravity preserves all the axioms of gr but the one that assumes that the field equations for the metric must be of second order; clearly the only case where this happens is for $f$ linear in $R$, which leads to gr plus a cosmological constant (hereafter gr$\Lambda$). now, taking the trace of eq. ([fieldeq1]) yields
$$\Box R = \frac{1}{3 f_{RR}}\left[\kappa T - 3 f_{RRR}\,(\nabla R)^2 + 2f - R f_R\right]\;,$$
where $(\nabla R)^2 \equiv g^{ab}(\nabla_a R)(\nabla_b R)$. when using ([tracer]) in ([fieldeq1]) and after some elementary manipulations we obtain
$$G_{ab} = \frac{1}{f_R}\left[f_{RR}\,\nabla_a\nabla_b R + f_{RRR}\,(\nabla_a R)(\nabla_b R) - \frac{g_{ab}}{6}\left(R f_R + f + 2\kappa T\right) + \kappa T_{ab}\right]\;.$$
equations ([fieldeq3]) and ([tracer]) are the basic equations that we have used systematically in the past to tackle several problems in cosmology and astrophysics, and that we plan to use in this article as well.
now, apart from the gr$\Lambda$ theory, for which $f_{RR}\equiv 0$, and for more general models, where one imposes the conditions $f_R>0$ (for a positive effective gravitational "constant") and $f_{RR}>0$ (for stability), in this paper we shall sometimes relax these two assumptions in order to explore the consequences for the sake of finding bh solutions. in vacuum, that is, when $T_{ab}=0$, or more generally in the presence of matter fields with a traceless energy-momentum tensor, like in electromagnetism or yang mills theory, eq. ([tracer]) admits in principle the trivial exact solution $R=R_1$, where $R_1$ is a solution of the algebraic equation $[2f-Rf_R]/f_{RR}=0$, and the remaining parameter of the model is related to an effective cosmological constant, as we show below. this model and a variant of it were considered in the past by several authors. in the sss scenario the metric, the mass function and the ricci scalar are given, respectively, by eqs. ([tracersss])([deltasss]) exactly, as one can verify by straightforward substitution; here the mass-like parameter is an integration constant. taking into account electromagnetic and yang mills fields, this solution was extended in the references. this solution was also part of a more general class of solutions associated with the model, but the asymptotic behavior of those solutions was not analyzed by those authors as we do here. the coordinates are defined with the appropriate ranges, where the corresponding radii mark the location of the event and cosmological horizons, respectively, which we analyze below. the metric ([gfrsz]) possesses a deficit angle, a "charge" and a cosmological constant (see appendix [sec:afdefang] for more details). the divergent terms (linear and cubic in $r$) appear in this renormalization of the mass since, as we remarked before, the spacetime has a deficit angle (associated with the linear term) and also a cosmological constant (associated with the cubic term). using ([mrexact]) we conclude that the renormalized mass vanishes; thus the bh has zero mass. examples of spacetimes with a deficit angle and with a zero-mass bh are not new. the metric with a deficit angle given by eq. ([gfrsz]) is a solution with a cosmological constant; for a positive one we can introduce a convenient length scale. the location of the black hole horizon depends on the parameter values; there are three possibilities: a) in the first case, the event horizon of the black hole is located at a single root; b) in the second, there are two horizons, and, in particular, for an _extreme_ black hole the two coincide; c) in the remaining case no horizon forms. the aads spacetime with a deficit angle is a solution with a negative cosmological constant. notice that for one sign of the mass parameter a horizon does not exist, as in the usual anti-de sitter spacetime (i.e. anti-de sitter spacetime without the deficit angle); however, for the other sign a horizon exists. when the cosmological constant vanishes, the spacetime turns out to be af except for the deficit angle, and only one sign of the mass parameter produces a horizon. finally, when all the parameters vanish, the spacetime is simply the minkowski spacetime with a deficit angle. all other cases correspond to naked singularities. in the absence of naked singularities, a straightforward calculation of other curvature scalars, like $R_{ab}R^{ab}$ and $R_{abcd}R^{abcd}$, shows that the only _physical_ singularity appears at $r=0$.
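to make the trivial-solution condition concrete, a short symbolic check can solve $2f-Rf_R=0$ and read off $\Lambda_{\rm eff}=R_1/4$ for a simple illustrative lagrangian; the model $f(R)=R-\mu^4/R$ below is our choice for the demonstration, not one of the paper's models:

```python
import sympy as sp

# trivial ("constant curvature") solutions of the vacuum trace equation:
# 2 f(R) - R f_R(R) = 0, for an illustrative model f(R) = R - mu^4 / R.
R, mu = sp.symbols("R mu", positive=True)
f = R - mu**4 / R
f_R = sp.diff(f, R)

roots = sp.solve(sp.Eq(2 * f - R * f_R, 0), R)
print("R_1 =", roots)                 # expect R_1 = sqrt(3) * mu**2

# each root plays the role of an effective cosmological constant,
# Lambda_eff = R_1 / 4 (de sitter if positive)
for r1 in roots:
    print("Lambda_eff =", sp.simplify(r1 / 4))
```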
even in that degenerate case, such scalars remain non-vanishing, and thus the singularity at $r=0$ is entirely due to the deficit angle. since the coordinates used so far do not cover the entire manifold, one can look for analytic extensions using kruskal-like coordinates. these extensions and the construction of penrose diagrams are beyond the scope of the current paper and will be reported elsewhere. now, for a vanishing cosmological constant there is no cosmological horizon, and thus one can analyze the solution as $r\to\infty$. it is then interesting to note that the asymptotic value of the ricci scalar is not an algebraic solution of $2f-Rf_R=0$ but rather a pole of $f_{RR}$. even though $f_{RR}$ and $f_{RRR}$ blow up at that value, the quantities that appear in eq. ([tracersss]) behave well asymptotically (i.e. they are finite); moreover, the quantities that appear in the r.h.s. of eq. ([fieldeq3]) also behave well asymptotically and give rise to the cosmological constant, as we show next. take for instance the $rr$ component of eq. ([fieldeq3]) in vacuum; for our purposes it is more convenient to take the mixed components. then, analyzing the asymptotic behavior of each of the terms on the r.h.s. of eq. ([grr]), we obtain, to leading order, that the r.h.s. of eq. ([fieldeq3]) is well behaved asymptotically, and it is just a constant which we can precisely identify with the effective cosmological constant. notice that this constant emerged not only from the last two terms on the r.h.s. of eq. ([grr]) but also from the contribution of the first two, which one would naively think do not contribute asymptotically. however, as mentioned in sec. [sec:theory], a closer look shows that one actually has in those two terms a competition of vanishing and diverging factors asymptotically. this is why it was necessary to perform the correct asymptotic analysis, which then yields the contribution due to the first two terms of the r.h.s. of eq. ([grr]). of course, one can perform the same asymptotic analysis in the full set of equations ([tracersss])([deltasss]) and find that all of them behave well and consistently as $r\to\infty$, since from both the l.h.s. and the r.h.s. one obtains exactly the same behavior. this is otherwise expected, as we have the exact solution explicitly, from which one can compute the relevant quantities and confirm that nothing diverges as $r\to\infty$. the definition of this cosmological constant is consistent with the canonical form that the metric coefficients take in ([gfrsz]) in these coordinates; for instance, in terms of the renormalized mass they read as in the standard form. hence, we conclude that when $f_{RR}$ has a pole precisely at the anti-de sitter point, the cosmological constant does not arise simply from the last term of eq. ([grr]), like in the analysis performed in sec. [sec:theory]. that analysis was valid provided that, as $r\to\infty$, two necessary conditions were satisfied, which, as emphasized above, is not the actual case for this exact solution. finally, we mention that for a null cosmological constant the quantity that appears in eq. ([tracersss]) vanishes as the solution approaches the asymptotic value, even if $f_{RR}$ blows up there; moreover, the term involving $f_{RRR}$ also vanishes asymptotically. therefore, a posteriori one can understand why eq. ([tracersss]) is well behaved asymptotically. it is important to stress that in previous works the above physical and geometric interpretation of the metric ([gfrsz]) was completely absent, and therefore its meaning was rather unclear.
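schematically, the kind of geometry described above, asymptotically (anti-)de sitter or flat up to a deficit angle, can be parametrized as (our writing of the standard form, in the spirit of appendix [sec:afdefang]):

$$ds^2 = -\Big(1-\delta-\frac{2M}{r}+\frac{Q^2}{r^2}-\frac{\Lambda_{\rm eff}\,r^2}{3}\Big)\,dt^2
 + \Big(1-\delta-\frac{2M}{r}+\frac{Q^2}{r^2}-\frac{\Lambda_{\rm eff}\,r^2}{3}\Big)^{-1}dr^2
 + r^2\, d\Omega^2\,,$$

where $\delta$ is the deficit-angle parameter: $\delta=0$ recovers the standard reissner nordström (anti-)de sitter form, while $M=Q=\Lambda_{\rm eff}=0$ with $\delta\neq 0$ leaves a conical minkowski spacetime, consistent with the limiting cases listed above.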
in the nonvacuum case, some but not all of the aspects discussed above for this exact solution were elucidated. so far we have mainly discussed the case without a cosmological horizon. in the other case, the ricci scalar will tend to its asymptotic value but will never reach it, since the cosmological horizon is reached by the solution well before; that is, the possible constant-curvature solution is never reached asymptotically due to the presence of the cosmological horizon. now, as concerns the trivial solution, the model ([frsz]) admits the solution $R=R_1$, which solves $2f-Rf_R=0$; notice that the corresponding potential, which will be given explicitly when the model is provided (see section [sec:expmodels]), is well behaved there. if we include the matter action, then the scalar field will be coupled non-minimally to the matter fields. in this paper we are only interested in the vacuum case, so the field equations obtained from the action ([einsteinf]) are simply those of the einstein-scalar-field system:
$$\tilde G_{ab} = \kappa\left[(\tilde\nabla_a\phi)(\tilde\nabla_b\phi) - \tilde g_{ab}\left(\tfrac{1}{2}(\tilde\nabla\phi)^2 + \mathscr{U}(\phi)\right)\right]\,,\qquad
\tilde\Box\,\phi = \frac{d\mathscr{U}}{d\phi}\,.$$
as we emphasized previously, based on this equivalence, and in view of our interest in finding sss and af non-trivial black holes in $f(R)$ gravity, we have to take into account the nht's, which are valid when $\mathscr{U}\geq 0$. the theorems roughly establish that _whenever the condition $\mathscr{U}\geq 0$ holds, given an af sss spacetime containing a black hole (with a regular horizon) within the einstein-scalar-field system, the only possible solution is the hairless schwarzschild solution._ here by _hairless_ we mean that the scalar field is constant everywhere in the domain of outer communication of the bh, and it is such that $\mathscr{U}$ vanishes there, in order to prevent the presence of a cosmological constant which would spoil the af condition. the nht's can be avoided if the potential has negative branches, notably at the horizon. so in our case, given an $f(R)$ model, we have only to check whether or not the corresponding potential satisfies the condition $\mathscr{U}\geq 0$. in the affirmative case, we conclude that sss and af hairy black holes are absent. nevertheless, when this condition fails, one usually needs to resort to a numerical treatment in order to analyze whether a black hole can support scalar hair or not. before concluding this section, a final remark is in order. there is an important relationship between the critical points of the potential $\mathscr{U}$, notably the extrema, and those of the "potential" $V$ defined in section [sec:theory] (see also appendix [sec:bdidentification]); in section [sec:theory] we denoted the extrema of $V$ by $R_1$. it is straightforward to verify that, provided $f_R>0$ and $f_{RR}\neq 0$ (i.e. the conditions for a well defined map to the einstein frame), the extrema of $\mathscr{U}$ correspond precisely to $R=R_1$. however, care must be taken when $f_R\to 0$ or $f_{RR}\to 0$, as may happen in some models that we will encounter in the next section; when this happens, the conformal transformation ([conft]) becomes singular or ill defined. finally, for models where $f_{RR}$ is not strictly positive, notably where $f_{RR}$ can vanish at some value of $R$ (a _weak singularity_; cf. model 5 in section [sec:expmodels]), it will be useful to introduce a second "potential" defined via its derivative. the finite or divergent behavior of this quantity also provides insight about the weak singularities; for instance, if it remains finite, it means that the numerator vanishes there as well. furthermore, it can supply further information about the possible trivial solutions.
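the correspondence between the extrema can be made explicit with a one-line chain-rule computation; using the einstein-frame definitions recalled earlier (our conventions), one finds

$$\frac{d\mathscr{U}}{d\phi}
= \frac{d\mathscr{U}/dR}{d\phi/dR}
= \frac{f_{RR}\,(2f - R f_R)}{2\kappa f_R^{\,3}}\cdot\frac{f_R}{\sqrt{3/(2\kappa)}\,f_{RR}}
= \frac{2f - R f_R}{\sqrt{6\kappa}\; f_R^{\,2}}\,,$$

so, for $f_R>0$ and $f_{RR}\neq 0$, the extrema of $\mathscr{U}(\phi)$ sit exactly at the roots $R_1$ of $2f-Rf_R=0$, i.e. at the trivial constant-curvature solutions.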
as we discussed in section [sec:theory] and also in section [sec:exactsols], if at some $R_1$ the combination $2f-Rf_R$ vanishes while $f_{RR}$ does not, $R=R_1$ can be one trivial solution of eq. ([tracer]) in vacuum or, more generally, when the matter has a traceless energy-momentum tensor; such a solution can be different from zero. in this section we focus on some models that also admit the trivial solution $R_1=0$, and check whether or not they satisfy the condition for which the nht's apply. the results are summarized in table [tab:models] at the end of this section. in order to obtain the ef potential from the jordan-frame quantities, let us recall the relationships of eqs. ([chiphi]) and ([efpot]); the first one is obtained from eq. ([phichi]), while the second one is a definition. * model 1: * a monomial model of the form $f(R)=\alpha R^{n}$, where $n$ and $\alpha$ are positive parameters of the theory; $\alpha$ fixes the scale for each $n$. this model has been thoroughly analyzed in the past in several scenarios (see the references therein). for this model, if we focus on the domain $R>0$, then $\chi=f_R>0$. in fact the value $n=2$ corresponds to a degenerate situation that we discuss below. for $n>1$, $\chi\to 0$ as $R\to 0$ and vice versa. by inverting the relationship ([mod1chir]) and using eq. ([jfpot]), followed by the use of eqs. ([chiphi]) and ([efpot]), the two potentials can be read off, each defined on the corresponding domain. the condition $\mathscr{U}\geq 0$ holds in one range of $n$, while in the complementary range the potential is negative; but in both cases the potential does not have minima, at least not for a finite $\phi$. moreover, we do not consider $f_R<0$ because it can give rise to an effective negative gravitational constant. for $n=1$, corresponding to gr, the potential vanishes, as one can see directly from eq. ([jfpot]). for this model the nht's a priori apply in the sector where $\mathscr{U}\geq 0$. the fact that the potential is strictly positive and has no minima implies that a constant-field solution cannot even exist in the ef. the point is that for $n>1$ the solution $R=0$ corresponds to $\phi\to-\infty$ ($\chi\to 0$), while for $n<1$ the same solution corresponds to $\phi\to+\infty$ ($\chi\to\infty$). we see then that in both cases the mapping to the stt in the ef is ill defined precisely at $R=0$, while $\mathscr{U}$ is not even well defined there, as it turns out to be multivalued. this problem exacerbates for the case $n=2$ that we discuss below. we conclude that for this model af sss solutions simply cannot be analyzed under the ef. in the original formulation, the theory also degenerates at $R=0$ for $n=2$, since $f_R(0)=0$; af sss solutions exist but they are not unique, as we are about to see. as we remarked briefly at the end of section [sec:theory], for this class of models a quite degenerate situation may occur. to fix the ideas, let us focus on the case $n=2$ in the original formulation, since in the stt approach the maps break down at $R=0$, as we just mentioned, given that $f_R(0)=0$. so in this case the field eqs. ([fieldeq3]) and ([tracer]) in vacuum reduce to
$$G_{ab} = \frac{1}{R}\,\nabla_a\nabla_b R - \frac{R}{4}\,g_{ab}\,,\qquad \Box R = 0\,,$$
where we used $f_R=2\alpha R$, $f_{RR}=2\alpha$ and $f_{RRR}=0$ in eqs. ([field1r2]) and ([field2r2]) (a similar degeneracy can be expected for closely related models as well). therefore we see that $R=0$ is a trivial solution of eq. ([field2r2]). on the other hand, for such a trivial solution eq. ([field1r2]) is satisfied for any metric compatible with $R=0$; for instance, this can be satisfied for any solution of the metric satisfying the einstein equation with a traceless source. this _degeneracy_ is somehow remarkable, as it shows that solutions of the field equations in $f(R)$ gravity may not be unique, as illustrated by this simple model.
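the degeneracy just described can be verified symbolically in two lines; assuming the quadratic case discussed above, the source of the trace equation vanishes identically:

```python
import sympy as sp

# for f(R) = alpha R^2 the vacuum trace equation reads
# box R = (2 f - R f_R) / (3 f_RR); the numerator vanishes identically,
# which is the degeneracy discussed in the text.
R, alpha = sp.symbols("R alpha", positive=True)
f = alpha * R**2
f_R = sp.diff(f, R)
f_RR = sp.diff(f, R, 2)

source = sp.simplify((2 * f - R * f_R) / (3 * f_RR))
print(source)   # -> 0, so box R = 0 holds for any profile of R
```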
in the af sss scenario, one bh solution is clearly the schwarzschild solution, but other solutions are possible, and it would be interesting to know to what kind of matter content they correspond in pure gr. in this example, the af sss bh solutions are unique as concerns the ricci scalar, since $R$ vanishes everywhere, including at the horizon, but they are highly non-unique as concerns the metric. that is, to the trivial solution $R=0$ of eq. ([tracersss]) one can associate any solution for the metric satisfying the einstein field equations with a traceless source, like the schwarzschild solution, the rn solution, a solution within the einstein yang mills system, etc. as emphasized in the literature, this model provides a specific example showing that a generalization of birkhoff's theorem similar to the one elucidated in the introduction simply cannot exist in general. it is enlightening to stress that this degenerate situation in vacuum is in a way similar, but opposite, to the "degeneracy" that appears in gr with matter sources: given $f(R)=R$, eq. ([fieldeq1]) or eq. ([fieldeq3]) reduces to the einstein field equation, whereas ([tracer]) reduces to an algebraic relation between $R$ and $T$. this means that this equation is satisfied identically regardless of the value of $T$, which in general is not zero. in other words, in gr the metric is constrained to satisfy einstein's field equation, but the ricci scalar is not constrained to satisfy any differential equation like eq. ([fieldeq1]). finally, we mention that the $n=2$ model also admits the trivial solutions of eqs. ([sds])([derpotcond]); in that case the potential is constant in the ef, thus the trivial solution gives rise to the schwarzschild de sitter solution just like in the original variables. incidentally, for this model the algebraic condition ([derpotcond]) is satisfied for any constant $R_1$; that is, $R_1$ emerges as an integration constant independent of the parameters of the model, and therefore the effective cosmological constant depends on the assigned value for $R_1$. in particular, taking $R_1=0$ we just recover the usual schwarzschild solution, as mentioned above. for any other value, the model admits the trivial solution, but the solution for the metric is not unique either, as the degeneracy emerges as well, in a similar way to the previous case. * model 2: * the quadratic starobinsky model, where the correction to $R$ carries a positive dimensionless constant and a positive parameter that fixes the _scale_. this model was proposed by starobinsky as an alternative to explain the early inflationary period of the universe. in principle the model is defined for all $R$; however, if we impose $f_R>0$, then we require a restricted domain. in fact, if we allow $f_R\leq 0$, the transformation to stt is not well defined, and the model degenerates in the original variables, as in the previous case. if we focus on solutions with $f_R>0$, then $\chi>0$; furthermore, $f_R$ and $f_{RR}$ are both finite at $R=0$ for any value of the parameters. proceeding like in the previous model, this one has associated the two potentials, with the ef potential defined for $\chi>0$. one sign of $R$ corresponds to $\phi>0$, the other to $\phi<0$. clearly $\mathscr{U}\geq 0$, with a global minimum located at $\phi=0$, where the potential vanishes (see figure [fig:staroinf]). the model admits the solution $R=0$, corresponding to $\phi=0$ and a vanishing potential; the trivial solution $R_1=0$ is the only root of $2f-Rf_R=0$.
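assuming the usual normalization $f(R)=R+R^2/(6m^2)$ for the scale-fixing parameter (our assumption, not a quotation of the paper's), the ef potential can be written in closed form and is the familiar starobinsky plateau:

$$\chi = f_R = 1+\frac{R}{3m^2}\,,\qquad
\mathscr{U}(\phi) = \frac{3m^2}{4\kappa}\left(1-e^{-\sqrt{2\kappa/3}\,\phi}\right)^{2}\,,\qquad
\phi=\sqrt{\tfrac{3}{2\kappa}}\,\ln\chi\,,$$

which is non-negative everywhere and has its global minimum $\mathscr{U}=0$ at $\phi=0$ (i.e. $R=0$), consistent with the statements above.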
for this model the nht's apply, and therefore af sss hairy bh solutions are absent. although the model is defined with positive parameters in order to be compatible with inflation, in the context of bh's one can in principle consider the opposite sign as a way to evade the nht's, because then the global minimum becomes a global maximum and the condition $\mathscr{U}\geq 0$ fails. it turns out, however, that potentials that are negative around a maximum but vanish there (in this case at $\phi=0$) sometimes admit non-trivial solutions that seem hairy and af; presumably such solutions would be unstable and never settle into a stationary configuration. we shall discuss a numerical example of this kind later, but suffice it to say that the bh solutions that one finds may vanish asymptotically but can have an _oscillatory_ behavior that makes them not genuinely af. in order to illustrate this, suppose that asymptotically the metric component behaves like an oscillating function of $r$ (with some constant amplitude and frequency); then the mass function oscillates (it may even diverge), and thus it does not really converge to a finite value in the limit $r\to\infty$, a value that one would identify with the adm mass, even though the ricci scalar vanishes asymptotically. thus, for this kind of solution the spacetime is not authentically af. these arguments can be justified using the following heuristic analysis. let us consider eq. ([kgef]) and neglect the non-flat spacetime contributions from the metric. moreover, expanding $\mathscr{U}$ around its maximum (which is equivalent to expanding around the corresponding minimum) gives $d\mathscr{U}/d\phi\approx-\mu^2\phi$. with these simplifying assumptions it is easy to see that the sss solution of eq. ([kgef]) is $\phi(r)=\phi_0\,\sin(\mu r+\delta)/r$, where $\phi_0$, $\mu$ and $\delta$ are constants. now, at leading order when $r\to\infty$, we can take the conformal factor in eq. ([conft]) to be unity, and both metrics (the jordan and the einstein frame metrics) coincide asymptotically. thus the energy-density contribution is given, schematically, by an oscillating term of order $1/r^2$, which is not positive definite. as a consequence, the mass function is not positive definite either; in fact, the mass function behaves asymptotically as a constant plus an oscillating term. * model 5: * $f(R)= R + \lambda R_s\left[\left(1+R^2/R_s^2\right)^{-q}-1\right]$, where $\lambda$ is a dimensionless constant, $q$ a dimensionless parameter and $R_s$ is a positive parameter that fixes the scale. this model was proposed by starobinsky as a mechanism for generating the late accelerating expansion while satisfying several _local_ observational tests. we analyzed this model and the model 3 in the past in the cosmological setting and for constructing star-like objects using the approach of section [sec:theory]. in this paper we take a fixed $q$ and explore several values of $\lambda$ (see section [sec:numerical]). for this model the conditions $f_R>0$ (for a positive effective gravitational constant) and $f_{RR}>0$ do not hold in general. in fact, $f_{RR}$ vanishes at certain values of $R$; since $f_{RR}$ appears in the denominator of eq. ([tracersss]), the vanishing of $f_{RR}$ was termed by starobinsky a _weak singularity_. one can appreciate these features from figure [fig:starou] (right panel), where the corresponding "potential" is depicted: it diverges where $f_{RR}$ vanishes. thus, the _weak singularities_ cannot be "cured" by the $f_{RRR}$ term in the equations, because such a term does not vanish there, which otherwise could have led to a finite limit. therefore any solution intending to interpolate between the horizon value and the asymptotic value across those points will irremediably encounter the _weak singularities_, where we expect a singular behavior in eq. ([tracersss]).
as a consequence, our search for numerical bh solutions with non-trivial $R$ was limited mostly to the range between the weak singularities (see section [sec:numerical]); this is also consistent with the cosmological setting (as opposed to the minkowski "point"), where one recovers an effective cosmological constant asymptotically (in _time_) and thus mimics the dark energy; in that scenario the actual numerical solution is always positive and larger than the minimum value, and thus the solution never crosses the _weak_ singularity. something similar takes place for the model 6. for this model the potential $V$ has several extrema (see the middle panel of figure [fig:starou]); in particular, a global minimum at $R=0$ where $V$ vanishes, which allows one to recover the schwarzschild solution. notice that the global minimum of $V$ corresponds to the global maximum of $\mathscr{U}$; this is because of the relative signs of $f_R$ and $f_{RR}$ on the two sides, while both potentials are well behaved there. the other extrema, a local maximum and a local minimum, lead to two schwarzschild de sitter solutions with positive curvature. now, the inversion required to recover the potential $V$ and then the potential $\mathscr{U}$ demands $f_{RR}>0$ or $f_{RR}<0$; that is, the inversion is possible when $f_R$ is a monotonic function of $R$, which is not the case for this model. in principle one could perform the inversion piecewise in very specific domains, but not in the whole domain where the model is defined. in view of this drawback the potential $\mathscr{U}$ is not well defined; in fact it is multivalued, as we are about to see. its expression cannot be given in closed form but only in parametric representation through the equations
$$\chi(R)=1-\frac{2q\lambda\,(R/R_s)}{\left[1+R^2/R_s^2\right]^{1+q}}\,,\qquad
\phi(R)=\sqrt{\frac{3}{2\kappa}}\,\ln\chi(R)\,,\qquad
\mathscr{U}(\phi(R)) := V(\chi[\phi(R)])\,.$$
the form of the potential is shown in figure [fig:starou] (left panel). given that $\mathscr{U}(\phi)$ is not single valued, it is a priori unclear how to establish a method to solve the differential equations in the ef stt approach and decide unambiguously which value of $\mathscr{U}$ to assign for a given $\phi$. hence we conclude that one cannot obtain any rigorous result from this frame using this potential, let alone trying to implement the nht's. but even if we tried to do so, the lower branch of the potential does not satisfy the condition $\mathscr{U}\geq 0$ required by the theorem to prevent the existence of hair. in view of this, any strong conclusion about the existence or absence of hair must be obtained from the original formulation of the theory that was presented in section [sec:sss]. furthermore, due to the complexity of the model itself and of the differential equations, a numerical analysis is in order. in the next section we provide the numerical results that show evidence about the absence of hairy af sss black holes in this model. figure [fig:starou]. left panel: the potential $\mathscr{U}(\phi)$ associated with the starobinsky model 5 (with the values of $q$ and $\lambda$ quoted in the main text). the potential is multivalued and has negative branches; therefore the nht's cannot be applied. the arrows indicate the trajectory of the parametric plot for increasing values of $R$; one of the marks (in gray) indicates the value corresponding to the weak singularity, while the second mark (in blue) indicates the starting point of the parametric plot.
incidentally, for $R\to\pm\infty$ the field $\chi\to 1$ [cf. eq. ([f_rstaro])] and the potential returns to its starting point. middle panel: the potential $V$ showing explicitly the extrema (maxima or minima) where the trivial solutions exist; the global minimum at $R=0$ leads to the schwarzschild solution, while the other extrema are associated with the schwarzschild de sitter solutions. for this particular model, any non-trivial solution interpolating between the value of $R$ at the bh horizon and the asymptotic value is confined within the range bounded by the weak singularities, i.e., values of $R$ close to the global minimum at $R=0$; outside this range, the _weak singularities_ can be reached by the solution (see the right panel). right panel: the function entering eq. ([tracersss]) is depicted, showing the places where it diverges. these places, called _weak singularities_, are located where $f_{RR}=0$ (these values are denoted generically in the main text); at such values eq. ([tracersss]) blows up.
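a short numerical sketch of the parametric construction above; the parameter values are ours, chosen only to exhibit the non-monotonicity of $\phi(R)$ (and hence the multivaluedness of $\mathscr{U}(\phi)$), with $\kappa=1$:

```python
import numpy as np

# parametric einstein-frame potential for the starobinsky dark-energy
# model f(R) = R + lam*Rs*[(1 + R^2/Rs^2)^(-q) - 1]; the values below
# are illustrative, not the ones used in the paper.
lam, q, Rs = 0.5, 2.0, 1.0

R = np.linspace(-10 * Rs, 10 * Rs, 20001)
f = R + lam * Rs * ((1 + R**2 / Rs**2) ** (-q) - 1.0)
chi = 1.0 - 2.0 * q * lam * (R / Rs) / (1 + R**2 / Rs**2) ** (1 + q)  # f_R

mask = chi > 0                 # phi = sqrt(3/2) ln(chi) needs f_R > 0
phi = np.sqrt(1.5) * np.log(chi[mask])
U = (R[mask] * chi[mask] - f[mask]) / (2.0 * chi[mask] ** 2)

# phi(R) is not monotonic, so several R's map to the same phi with
# different U: the potential U(phi) is multivalued
turning = np.where(np.diff(np.sign(np.diff(phi))) != 0)[0]
print("phi(R) has", len(turning), "turning points -> U(phi) is multivalued")
```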
* model 6: * $f(R) = R - R_s\,\dfrac{c_1\,(R/R_s)^n}{c_2\,(R/R_s)^n + 1}$, where $c_1$ and $c_2$ are two dimensionless constants and, like in previous models, $R_s$ fixes the scale. this model was proposed by hu and sawicki, and it is perhaps one of the most thoroughly studied models (see the references for a review). in the cosmological context, the parameters were fixed so as to obtain adequate cosmological observables, like the actual dark-energy and matter content of the universe; for a given $n$, the quoted values follow accordingly. notice that the model 5 and this model are essentially the same for suitable parameter choices. like in the previous model, the conditions $f_R>0$ and $f_{RR}>0$ are not met in general; therefore the potential $\mathscr{U}$ is multivalued and has negative branches as well. it can be plotted using a parametric representation as in the model 5:
$$\chi(R)=1-\frac{n\,c_1\,(R/R_s)^{\,n-1}}{\left[c_2\,(R/R_s)^n+1\right]^{2}}\,,\qquad
\phi(R)=\sqrt{\frac{3}{2\kappa}}\,\ln\chi(R)\,,\qquad
\mathscr{U}(\phi(R)) := V(\chi[\phi(R)])\,.$$
figure [fig:hsu] depicts the potential (left panel), where one can appreciate the pathological features. in fact, in this model a _weak_ singularity is located precisely at $R=0$, i.e., the value that $R$ should reach asymptotically in the af scenario, and it is also the value corresponding to the (hairless) schwarzschild solution that we should be able to recover. nevertheless, and unlike model 5, this singularity in eq. ([tracer]) or in eq. ([tracersss]) disappears for some values of $n$, because the relevant combination is finite or vanishes at $R=0$ in this model; namely, it vanishes at $R=0$ for the lower values of $n$. we do not consider the case where the model reduces to gr plus a cosmological constant. for one value of $n$ we find a finite limit which does not even vanish; this means that $R=0$ does not solve eq. ([tracersss]), neither trivially nor asymptotically. for the larger exponents there is indeed a _weak_ singularity at $R=0$, where $f_{RR}=0$ (cf. the right panel of figure [fig:hsu]), and beyond that the relevant quantity blows up; thus we consider only the remaining values for the numerical analysis of section [sec:numerical]. in this model the potential $V$ has a minimum at $R=0$ for any $n$, which allows for the schwarzschild solution whenever the trace equation remains regular there. however, when it does not, the schwarzschild solution may not even exist; in those situations, non-trivial solutions where $R$ vanishes asymptotically will encounter such a singularity (see section [sec:numerical]). like in the model 5, any analysis using the ill-defined potential $\mathscr{U}$ for the hu sawicki model is not robust. we then turn to a numerical analysis using the original formulation of the theory; this is presented in the next section. figure [fig:hsu]. left panel: the potential $\mathscr{U}(\phi)$ associated with the hu sawicki model 6, with the parameter values as in the main text. the potential is multivalued and has negative branches, and the nht's cannot be applied. the arrows and marks have the same meaning as in figure [fig:starou].
middle panel: the potential $V$ showing explicitly the extrema where the trivial solutions may exist; at the local minimum occurs a _weak_ singularity, where $f_{RR}=0$ (see the right panel). right panel: the associated "potential" of section [sec:stt]; two weak singularities with $f_{RR}=0$ are located at the two values indicated. in particular, the singularity at $R=0$ precludes the search for numerical af hairy solutions with $R\to 0$ as $r\to\infty$, since the "singularity" is encountered at finite $r$. [ table [tab:models]: summary of the specific models considered, of whether the trivial solution exists, and of whether the nht's apply. ] as we discussed in the previous section, in some circumstances it is possible to formulate the original $f(R)$ model as a stt in the ef, where the scalar field turns out to be coupled minimally to the ef metric but is subject to a potential $\mathscr{U}(\phi)$. if this potential verifies the condition $\mathscr{U}\geq 0$, then the nht's apply and, at least in the region where $f_R>0$ and $f_{RR}\neq 0$, we can assert that hair (where $R$, or equivalently $\phi$, is not a trivial solution) is absent, in which case the only possible af solutions are at best the trivial ones. this conclusion follows for the models 1-4 in the sectors where their parameters allow for the solution $R_1=0$ and lead to $\mathscr{U}\geq 0$. on the other hand, we mentioned that models 2 and 4 can have potentials with negative branches if we allow their parameters to be negative. negative values of such parameters are not usually considered in cosmology, but for the sake of finding hairy solutions we can contemplate them. because the nht's do not apply when $\mathscr{U}$ is negative, notably at the horizon, the problem of hair reopens when this happens. in this regard, several strategies are available to solve it: 1) exhibit an explicit exact af sss black hole solution with hair; 2) prove analytically the absence of it (i.e. extend the nht's); 3) show numerical evidence about one or the other. given that the differential equations presented in section [sec:sss] are very involved, strategies 1 or 2 might lead to a dead end; thus we opted for option three.
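to fix ideas about option three, here is a stripped-down stand-in for the shooting procedure described next; the ode and its "regularity" condition are toy choices of ours (a massive test scalar on a flat background), not the paper's system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy "shooting": integrate phi'' + (2/r) phi' = mu^2 phi outward from
# r_h and bisect on the starting value so that phi stays bounded.
mu, r_h, r_max = 1.0, 1.0, 30.0

def rhs(r, y):
    phi, dphi = y
    return [dphi, mu**2 * phi - 2.0 * dphi / r]

def endpoint(phi_h):
    # crude stand-in for a horizon regularity condition: it just ties
    # the initial derivative to the field value at r_h
    y0 = [phi_h, mu**2 * phi_h * r_h / 2.0]
    sol = solve_ivp(rhs, (r_h, r_max), y0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if endpoint(mid) > 0:
        hi = mid
    else:
        lo = mid
print("shooting value phi_h ~", 0.5 * (lo + hi))
```

for this linear toy only the trivial starting value survives the overshoot/undershoot loop (the bisection collapses onto zero), which mirrors the no-hair outcome reported below; for the full non-linear equations the same loop hunts a possibly non-trivial horizon value of $R$.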
in particular, this strategy seems the most adequate as concerns the models 5 and 6, where the potential $\mathscr{U}$ is not even well defined. we proceed to solve numerically eqs. ([tracersss])([deltasss]) subject to the regularity conditions at the horizon provided in sec. [sec:regcond] and in appendix [sec:regcond2]. the only free conditions are the value of the ricci scalar at the horizon and the horizon radius. the methodology is roughly as follows: one starts by fixing the size of the black hole, and then looks for the horizon value of $R$ so that $R$ approaches its asymptotic value as $r\to\infty$. this "boundary-value" problem is solved using a _shooting method_ within a runge kutta algorithm. we have implemented a similar methodology for constructing star-like objects in $f(R)$ gravity in the past. numerical solutions with non-trivial hair for _asymmetric_ (non positive definite) potentials have been found previously within the einstein-scalar-field system using similar techniques. as we will see in the next section, for certain models it is not even necessary to perform a shooting, as the dynamics naturally drives $R$ to its asymptotic value for a given horizon value. now, as we mentioned previously, for af solutions to exist it is not sufficient that $R\to 0$ as $r\to\infty$. in section [sec:exactsols] we analyzed one exact solution where this happens precisely, and yet the solution is not af but has a deficit angle; in that case the mass function diverges at least linearly with $r$. it is then crucial to ensure that the mass function converges to a constant value (that we assume to be the komar or, equivalently, the adm mass) in order to claim a genuinely af solution. as a matter of fact, we used that exact solution as a testbed for our code: we took the model of eq. ([frsz]) as _input_ and recovered numerically the exact solution provided by eqs. ([gfrsz])([deltafrsz]), notably for a null cosmological constant; notice that in this case $R$ is not trivial. figure ([fig:frsz]) depicts the analytic and the numerical solutions superposed, showing excellent agreement between the two; typical numerical errors are depicted in figure [fig:relerr]. we also checked that the trivial solutions that exist in several of the models 1-5 were recovered numerically when starting with $R=R_1$ at the horizon, which leads to the _hairless_ kottler (schwarzschild de sitter) solutions, including the plain af schwarzschild solution when $R_1=0$. additionally, we devised other _internal_ tests to verify the consistency of our code; these tests are similar to those implemented in our analysis of star-like objects, and are independent of whether exact solutions are available or not. let us turn our attention to the specific models that deserved a detailed numerical exploration. figure [fig:frsz]. left panel: the ricci scalar (the exact and numerical solutions are superposed) computed using the model of eq. ([frsz]) for the case of a
null cosmological constant. the ricci scalar is not trivial and vanishes asymptotically; however, the spacetime is not exactly asymptotically flat but has a deficit angle [see eqs. ([gfrsz])([deltafrsz])]. at the horizon the ricci scalar satisfies the regularity conditions. middle panel: the mass function is not constant but grows linearly with the coordinate $r$ due to the deficit angle. right panel: the metric components and their product. in the middle and right panels the exact and numerical solutions are superposed as well (cf. figure [fig:relerr]). figure [fig:relerr]: relative errors between the exact and numerical solutions, as depicted in figure [fig:frsz] (left panel); similar relative errors (not depicted) are found for the mass function and the metric components. * model 4: * we consider the model 4 with a negative parameter. in this sector the potential is not positive definite, and the model may admit hairy solutions because the nht's do not apply. notwithstanding, the only solutions with a non-trivial ricci scalar that we find numerically are not exactly af: the ricci scalar vanishes asymptotically in an oscillating fashion as $r\to\infty$, but the mass function does not converge; it oscillates as well and grows unboundedly (see figure [fig:frexpnohair]). this behavior is similar to the one provided by the heuristic analysis within the model 2, except that here we take into account the full system of equations. despite such behavior, the metric components, which depend on the mass function, remain bounded in the asymptotic region. this can be partially understood by looking at the corresponding metric component and realizing that its non-oscillating part decays very slowly; if so, that part may converge so slowly that one cannot even notice it by looking at the numerical outcome. this behavior seems to be generic for the parameter values explored. the conclusion is that we do not find any genuinely af sss black hole solution in this model. figure [fig:frexpnohair]. left panel: the ricci scalar for three different parameter values; the solutions vanish asymptotically. middle panel: the mass function corresponding to one of the solutions shown in the left panel (similar plots are found for the other two solutions).
the non-converging behavior of the mass function as $r\to\infty$ indicates that the spacetime is not af. right panel: the metric components for the solution depicted in the middle panel; the metric components and their product are bounded but oscillate, corroborating that the resulting spacetime is not af. finally, let us focus on the models 5 and 6, which led to pathological potentials in the ef stt description. * model 5: * for the starobinsky model we limited our search for a shooting value first in one region and then in the other, in order to avoid crossing the _weak_ singularities when $R$ tries to reach the asymptotic value. we never found a successful shooting parameter leading to an authentic asymptotically flat solution. two examples of this kind of solutions are depicted in figures [fig:starono-hairoscillating] and [fig:starono-hair]. figure [fig:starono-hairoscillating] shows that in one case the solutions are similar to those of the exponential model 4 depicted in figure [fig:frexpnohair]; thus the asymptotic behavior does not correspond to an af spacetime. in the other case we find situations where the ricci scalar decreases monotonically to a constant value without oscillating, as one can see in the left panel of figure [fig:starono-hair]. however, this constant is not related to the trivial solution, which is the solution of the algebraic equation $2f-Rf_R=0$. in fact, as we can see from the middle panel of figure [fig:starono-hair], and by looking at eq. ([tracersss]), we appreciate that the combination involving $1/(1-2m/r)$ becomes delicate there. in this appendix a _prime_ indicates differentiation with respect to the argument of the corresponding function. so, assuming $f$ to be a convex function, i.e., $f_{RR}>0$, then clearly $\chi=f_R$ is invertible, and the inverse of this function allows one to define $R(\chi)$. in this way we see that the construction is none other than the legendre transformation of $f$; moreover, it defines in turn the legendre transformation of the first one (at least in the region where the convexity holds), in which case the stated condition simply leads back to the original function. 2) the condition for the second legendre transformation can be imposed in the action by considering $\chi$ as a lagrange multiplier, so that the variation with respect to it leads to the desired constraint. this is the formal construction when treating $f(R)$ theories as stt, and in practice it is achieved by taking the action ([jordanf]). in the following we perform explicitly the transformation between the original $f(R)$ theory and the special class of brans dicke models supplemented with a potential. the brans dicke action with a potential is given by
$$I[g_{ab},\phi,\boldsymbol{\psi}] = \frac{1}{2\kappa}\int d^4x\,\sqrt{-g}\left[\phi R - \frac{\omega_{\rm BD}(\phi)}{\phi}\,g^{ab}(\nabla_a\phi)(\nabla_b\phi) - W(\phi)\right] + I_{\rm matt}[g_{ab},\boldsymbol{\psi}]\,.$$
for comparison with the $f(R)$ model we shall focus only on the case $\omega_{\rm BD}=0$. thus, the field equations read as follows. on the other hand, when introducing $\chi\equiv f_R$,
eq. ([fieldeq1]) simply reads as the brans dicke field equation for $\chi$. moreover, this equation can be written in a form where we have made explicit the functional dependence $R=R(\chi)$, which means that if $f_{RR}\neq 0$ one can in principle invert the definition $\chi=f_R(R)$ and obtain $R(\chi)$, and thus $f(R(\chi))$. therefore, if we choose the potential $W(\chi)=\chi R(\chi)-f(R(\chi))$ and identify $\phi_{\rm BD}=\chi$, then eq. ([ebdfield]) becomes exactly eq. ([fieldeq1jf2]). moreover, with the above identification of the fields and the potential, the trace of eq. ([fieldeq1jf2]) shows that eq. ([phibdfield]) also becomes eq. ([chifield]), where one can easily verify that the corresponding expression coincides exactly with the one found before. hence, we conclude that $f(R)$ theory is equivalent to a brans dicke theory with $\omega_{\rm BD}=0$ and a potential $W(\chi)$. sss spacetimes with zero charge that are asymptotically flat except for a deficit angle have the asymptotic form given here; after a redefinition of coordinates, the metric acquires the standard angle-deficit form, and under this parametrization the mass coefficient is identified with the adm mass associated with this kind of spacetime. in the same way, an sss metric which is ads or aads with a deficit angle can be transformed into the standard form; notice that the cosmological constant did not need to be redefined in order to obtain the standard metric ([ssslambda2]). we then need the corresponding redefinitions in order to recover the metric ([gfrsz]) in the standard form in the chargeless case. finally, the metric of an sss spacetime that is ads or aads with a deficit angle and endowed with a charge is transformed similarly; the resulting quantity is presumably the actual charge when the deficit angle is removed. again, with the analogous redefinitions we recover the metric ([gfrsz]) written in the standard form, but now with the charge term included. s. nojiri and s. d. odintsov, int. j. geom. meth. mod. phys. 4, 115 (2007); s. capozziello, m. de laurentis, and v. faraoni, arxiv:0909.4672; s. capozziello and m. de laurentis, arxiv:1108.6266; t. clifton, p. g. ferreira, a. padilla, and c. skordis; s. capozziello and m. francaviglia; t. p. sotiriou and v. faraoni; a. de felice and s. tsujikawa. p. t. chruściel, contemp. math. 170, 23 (1994); m. heusler, _black hole uniqueness theorems_, cambridge univ. press, cambridge (uk), 1996; d. c. robinson, in _the kerr spacetime: rotating black holes in general relativity_, edited by d. l. wiltshire, m. visser, and s. m. scott, cambridge univ. press, cambridge (uk), 2009, pp. 115-143. t. multamäki and i. vilja; k. kainulainen, j. piilonen, v. reijonen, and d. sunhede; k. kainulainen and d. sunhede; k. henttunen, t. multamäki, and i. vilja; t. kobayashi and k. maeda; e. babichev and d. langlois; a. upadhye and w. hu; s. capozziello, m. de laurentis, s. d. odintsov, and a. stabile; s. s. yazadjiev, d. d. doneva, k. d. kokkotas, and kalin v. staykov, arxiv:1402.4469. s. capozziello, a. stabile, and a. troisi; a. de la cruz dombriz, a. dobado, and a. l. maroto; a. larrañaga, pramana journal of physics 78, 697 (2012) [arxiv:1108.6325]; j. a. r. cembranos, a. de la cruz dombriz, and p. jimeno romero, arxiv:1109.4519; t. moon, y. s. myung, and e. j. son; a. sheykhi; s. habib mazharimousavi, m. halilsoy, and t. tahamtan. l. f. abbott and s. deser; g. w. gibbons, s. w. hawking, g. t. horowitz, and m. j. perry; a. ashtekar and a. magnon; w. boucher, g. w. gibbons, and g. t. horowitz; m. henneaux and c. teitelboim; v. balasubramanian and p. kraus; g. w. gibbons; a. ashtekar and s. das; p. t.
chruściel and g. nagy, adv. theor. math. phys. *5*, 697 (2002). g. cognola, e. elizalde, s. nojiri, s. d. odintsov, l. sebastiani, and s. zerbini; e. linder; l. yang, c. c. lee, l. w. luo, and c. q. geng; k. bamba, c. q. geng, and c. c. lee; e. elizalde, s. nojiri, s. d. odintsov, l. sebastiani, and s. zerbini; e. elizalde, s. d. odintsov, l. sebastiani, and s. zerbini.
we discuss with a rather critical eye the current situation of black hole (bh) solutions in $f(R)$ gravity and shed light on their geometrical and physical significance. we also argue about the meaning, and the existence or lack thereof, of a birkhoff theorem in this kind of modified gravity. we then focus on the analysis and quest of _non-trivial_ (i.e., hairy) _asymptotically flat_ (af) bh solutions in static and spherically symmetric (sss) spacetimes in vacuum, having the property that the ricci scalar does _not_ vanish identically in the domain of outer communication. to do so, we provide and enforce the _regularity conditions_ at the horizon in order to prevent the presence of singular solutions there. specifically, we consider several classes of $f(R)$ models like those proposed recently for explaining the accelerated expansion of the universe, which have been thoroughly tested in several physical scenarios. finally, we report analytical and numerical evidence about the _absence_ of _geometric hair_ in af sss bh solutions in those models. first, we submit the models to the available no-hair theorems, and in the cases where the theorems apply, the absence of hair is demonstrated analytically. in the cases where the theorems do not apply, we resort to a numerical analysis due to the complexity of the non-linear differential equations. with that aim, a code to solve the equations numerically was built and tested using well-known exact solutions. in a future investigation we plan to analyze the problem of hair in de sitter and anti-de sitter backgrounds.
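the shooting search described in the article above can be illustrated schematically in code. the sketch below is only a toy stand-in: it assumes a generic ode $y'' = y - 1$ with one decaying and one growing mode (so that "asymptotic flatness" is mimicked by $y \to 1$); the actual $f(R)$ field equations, the integration range, and the bracketing interval $[0, 5]$ are all placeholders, not the article's setup.

    from scipy.integrate import solve_ivp

    def rhs(r, state):
        # toy stand-in for the field equations: y'' = y - 1
        y, dy = state
        return [dy, y - 1.0]

    def overshoot(s, r_max=30.0):
        # integrate outward with shooting parameter s = y'(0); the sign of the
        # result tells whether the growing mode was excited up or down
        sol = solve_ivp(rhs, (0.0, r_max), [0.0, s], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1] - 1.0

    lo, hi = 0.0, 5.0            # assumed bracket for the shooting parameter
    for _ in range(60):          # bisection on the shooting parameter
        mid = 0.5 * (lo + hi)
        if overshoot(lo) * overshoot(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    print("shooting parameter:", 0.5 * (lo + hi))   # -> 1.0 for this toy ode

in the article's setting the analogue of the bracket would be the regions quoted above, and the "overshoot" would be monitored through the asymptotic behavior of the ricci scalar and the mass function.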
consider the classical regression quantile model: given independent observations with fixed design (for each fixed sample size), the conditional quantile function of the response given the design point is linear in the parameters. let $\hat\beta(\tau)$ be the koenker-bassett regression quantile estimator. the literature provides definitions and basic properties, and describes the traditional approach to asymptotics using a bahadur representation, in which the leading term is (a transform of) a brownian bridge plus an error term. unfortunately, that error term is of order essentially $n^{-1/4}$ [see, e.g., the references cited therein]. this might suggest that asymptotic results are accurate only to this order. however, both simulations in regression cases and one-dimensional results justify a belief that regression quantile methods should share (nearly) the accuracy of smooth statistical procedures (uniformly in $\tau$). in fact, as shown previously, the normalized error term has a limit with zero mean that is independent of the leading term. thus, in any smooth inferential procedure (say, confidence interval lengths or coverages), this error term should enter only through that limit. nonetheless, this expansion would still leave an error coming from the remainder beyond the first term in the bahadur representation, and so would still fail to reflect root-$n$ behavior. furthermore, previous results only provide such a second-order expansion for fixed $\tau$.

it must be noted that the slower error rate arises from the discreteness introduced by indicator functions appearing in the gradient conditions. in fact, expansions can be carried out when the design is assumed to be random; see earlier work, where the focus is on analysis of the bootstrap. specifically, the assumption of a smooth distribution for the design vectors, together with a separate treatment of the lattice contribution of the intercept, does permit appropriate expansions. unfortunately, the randomness in the design means that all inference must be in terms of the average asymptotic distribution (averaged over the design), and so fails to apply to the generally more desirable conditional forms of inference. specifically, unconditional methods may be quite poor in the heteroscedastic and nonsymmetric cases for which regression quantile analysis is especially appropriate.

the main goal of this paper is to reclaim increased accuracy for conditional inference beyond that provided by the traditional bahadur representation. specifically, the aim is to provide a theoretical justification for an error bound of nearly root-$n$ order uniformly in $\tau$. we first develop a normal approximation for the density of the standardized estimator with a multiplicative error of this form. we then extend this result to the densities of a pair of regression quantiles in order to obtain a "hungarian" construction that approximates the process by a gaussian process to nearly root-$n$ order (uniformly in $\tau$). section [sec2] provides some applications of the results here to conditional inference methods in regression quantile models. specifically, an expansion is developed for coverage probabilities of confidence intervals based on the hall-sheather difference quotient estimator of the sparsity function. the coverage error rate is shown to achieve essentially the $n^{-2/3}$ rate for conditional inference, which is nearly the known "optimal" rate obtained for a single sample and for unconditional inference. section [sec3] lists the conditions and main results, and offers some remarks. section [sec4] provides a description of the basic ingredients of the proof (since this proof is rather long and complicated). section [sec5] proves the density approximation for a fixed $\tau$ (with multiplicative error), section [sec6] extends the result to pairs of
regression quantiles (theorem [den2d]), and section [sec7] provides the "hungarian" construction (theorem [hung]) with what appears to be a somewhat innovative induction along dyadic rationals. as the impetus for this work was the need to provide some theoretical foundation for empirical results on the accuracy of regression quantile inference, some remarks on implications are in order.

[remark1] clearly, whenever published work assesses the accuracy of an inferential method using the error term from the bahadur representation, the present results will immediately provide an improvement to the nearly root-$n$ rate here. one area of such results is methods based directly on regression quantiles and not requiring estimation of the sparsity function. there are several papers giving such results, although at present it appears that their methods have theoretical justification only under location-scale forms of quantile regression models. specifically, one line of work introduced confidence intervals (especially for fitted values) based on using pairs of regression quantiles in a way analogous to confidence intervals for one-sample quantiles. the method was shown to be consistent, but its accuracy depended on the bahadur error term; the results here now provide accuracy to the nearly root-$n$ rate of theorem [th2]. a second approach, directly using the dual quantile process, is based on regression rank scores. again, the error terms in the theoretical results there can be improved using theorem [th1] here, though the development is not so direct. for a third application, it has been shown that the regression quantile process interpolated along a grid of mesh strictly larger than a critical rate is asymptotically equivalent to the full regression quantile process to first order, but (because of additional smoothness) will yield monotonic quantile functions with probability tending to 1. however, that development used the bahadur representation, which indicated the mesh order balancing bias and accuracy, and bounded the difference between the process and its linear interpolate by nearly the bahadur rate. with some work, use of the results here would permit a somewhat finer mesh and an approximation of nearly root-$n$ order.

[remark2] inference under completely general regression quantile models appears to require either estimation of the sparsity function or use of resampling methods. the most general methods in the `quantreg` package use the "difference quotient" method with the hall-sheather bandwidth of order $n^{-1/3}$, which is known to be optimal for coverage probabilities in the one-sample problem. as noted above, expansions using the randomness of the regressors can be developed to provide analogous results for unconditional inference. the results here (with some elaboration) can be used to show that the hall-sheather estimates provide (nearly) the same rates of accuracy for coverage probabilities under the conditional form of the regression quantile model.
to be specific, consider the problem of confidence interval estimation for a fixed linear combination $\lambda'\beta(\tau)$ of the regression parameters. the asymptotic variance is given by the well-known sandwich formula, in which the sparsity (the derivative of the conditional quantile function) appears together with the design matrix. following hall and sheather, the sparsity may be approximated by a difference quotient. standard approximation theory (using the taylor series) shows how the sparsity may be estimated, and the sparsity in ([sadef]) may then be estimated by inserting this difference quotient. then, as shown later, the confidence interval ([confint]) has a coverage probability that is within a logarithmic factor of the optimal hall-sheather rate in a single sample. furthermore, this rate is achieved at an (optimal) bandwidth which is the optimal hall-sheather bandwidth except for a logarithmic term. since the optimal bandwidth depends on unknown quantities, the optimal constant for the bandwidth cannot be determined, as it can when the design is allowed to be random [and for which the corresponding term is explicit]. this appears to be an inherent shortcoming of using inference conditional on the design. note also that it is possible to obtain better error rates for the coverage probability by using higher-order differences. specifically, using the notation of ([deldef]), the bias of a higher-order difference quotient is of smaller order; as a consequence, the optimal bandwidth for this estimator is of larger order, and the coverage probability is accurate to a correspondingly higher order (except for logarithmic factors).

[remark3] a third approach to inference applies resampling methods. as noted earlier, while the bootstrap is available for unconditional inference, the practicing statistician will generally prefer to use inference conditional on the design. there are some resampling approaches that can obtain such inference. one method simulates the binomial variables appearing in the gradient condition. another is the "markov chain marginal bootstrap"; however, this method also involves sampling from the gradient condition. the discreteness in the gradient condition would seem to require the error term from the bahadur representation, and thus leads to poorer inferential approximation: the error would be no better than the bahadur order even if it were the square of the bahadur error term. while some evidence for decent performance of these methods comes from (rather limited) simulations, it is often noticed that these methods perform perhaps somewhat more poorly than the other methods in the `quantreg` package. clearly, a more complete analysis of inference for regression quantiles based on the more accurate stochastic expansions here would be useful.

under the regression quantile model of section [sec1], the following conditions will be imposed. let $\tilde x_i$ denote the coordinates of the design vector $x_i$ except for the intercept (i.e., the last coordinates, if there is an intercept). let $\psi_i$ denote the conditional characteristic function of $\tilde x_i$, and let $f_i$ and $F_i$ denote the conditional density and c.d.f. of the response given $x_i$.

[co1] for any $\varepsilon > 0$, there is a bound such that the characteristic functions $\psi_i$ are appropriately small, uniformly in $i$.

[co2] the design vectors are uniformly bounded, and there are positive definite matrices $g$ and $h$ such that the empirical second-moment matrices converge to them (as $n \to \infty$) uniformly in $\tau$.

[cof] the derivative of the conditional density is uniformly bounded on the relevant interval.

two fundamental results will be developed here. the first result provides a density approximation with multiplicative error of nearly root-$n$ rate.
a result for a fixed $\tau$ is given in theorem [th5], but the result needed here is a bivariate approximation for the joint density of one regression quantile and the difference between this one and a second regression quantile (properly normalized for the difference in $\tau$-values). let $\tau_2 - \tau_1 = a_n/\sqrt{n}$ with $a_n \ge n^{\varepsilon - 1/2}$ for some $\varepsilon > 0$. here, one may want to take the exponent near 1 [see remark (1) below], though the basic result will often be useful for smaller values. define the standardized pair $b_n(\tau_1)$ and $r_n$ accordingly.

[th1] [den2d] under conditions [co1], [co2] and [cof], there is a constant such that the joint density of the pair, evaluated at arguments of moderate size at $\tau_1$ and $\tau_2$ respectively, equals a normal density with covariance matrix of the form given in ([diffcov]), up to a multiplicative error of nearly root-$n$ order.

the second result provides the desired "hungarian" construction:

[th2] [hung] assume conditions [co1], [co2] and [cof]. fix a compact inner subinterval of $(0,1)$, and let the $\tau_j$ be dyadic rationals with denominator less than a specified power of $n$. define $\tilde b_n$ to be the piecewise linear interpolant of $b_n(\tau_j)$ [as defined in ([bndef])]. then for any $\varepsilon > 0$, there is a (zero-mean) gaussian process defined along the dyadic rationals, with the same covariance structure as $b_n$ (along these rationals), such that its piecewise linear interpolant approximates $\tilde b_n$ to nearly root-$n$ order almost surely.

some remarks on the conditions and ramifications are in order:

(1) the usual construction approximates the process by a "brownian bridge" process. theorem [hung] really only provides an approximation for the discrete processes at a sufficiently sparse grid of dyadic rationals. that the piecewise linear interpolants converge to the usual brownian bridge follows as in the classical case. the critical impediment to getting a brownian bridge approximation to the full process with the error in theorem [hung] is the square-root behavior of the modulus of continuity. this prevents approximating the piecewise linear interpolant within an interval of length greater than (roughly) the critical order if a root-$n$ error is desired. in order to approximate the density of the difference in $b_n$ over an interval between dyadic rationals, the length of the interval must be at least of the order allowed in theorem [den2d]. clearly, it will be possible to approximate the piecewise linear interpolant by a brownian bridge with a somewhat larger error, and thus to get arbitrarily close to the stated value for the exponent. for most purposes, it might be better to state the final result in terms of an approximating brownian bridge with error rate arbitrarily close to root-$n$; but the stronger error bound of theorem [hung] does provide a much closer analog of the result for the one-sample (one-dimensional) quantile process.

(2) the one-sample result requires only the first power of $\log n$, which is known to give the best rate for a general result. the extra addition in the exponent here is clearly needed for the density approximation, but this may be only a technical assumption. nonetheless, i conjecture that some extra amount is needed in the exponent.

(3) conditions [co1] and [co2] can be shown to hold with probability tending to one under smoothness and boundedness assumptions on the distribution of the design vectors. nonetheless, the condition that the design be bounded seems rather strong in the case of random designs. it seems clear that this can be weakened, though probably at the cost of a poorer approximation. for example, design distributions with exponentially small tails might increase the bound in theorem [hung] by an additional logarithmic factor, and algebraic tails are likely worse. however, details of such results remain to be developed.

(4) similarly, it should be possible to let the constant which defines the compact subinterval of $\tau$-values tend to zero.
clearly, letting this constant be of order $1/n$ would lead to extreme value theory and very different approximations. for slower rates of convergence, bahadur expansions have been developed, and extension to the approximation result in theorem [hung] should be possible. again, however, this would most likely be at the cost of a larger error term.

(5) the assumption that the conditional density of the response (given the design) be continuous is required even for the usual first-order asymptotics. however, one might hope to avoid condition [cof], which requires a bounded derivative at all points. for example, the double exponential distribution does not satisfy this condition. it is likely that the proofs here can be extended to the case where the derivative does not exist on a finite set (or even on a set of measure zero), but dropping differentiability entirely would require a rather different approach. furthermore, the apparent need for bounded derivatives in providing uniformity over $\tau$ in bahadur expansions suggests the possibility that some differentiability is required.

(6) theorem [den2d] provides a bivariate normal density approximation with error rate (nearly) root-$n$ when $\tau_1$ and $\tau_2$ are fixed. when $\tau_2 - \tau_1 \to 0$, of course, the error rate is larger. note, however, that the slower convergence rate in this case does not reduce the order of the error in the final construction, since the difference itself is of smaller order.

the development of the fundamental results (theorems [den2d] and [hung]) will be presented in three phases. the first phase provides the density approximation for a fixed $\tau$, since some of the more complicated features are more transparent in this case. the second phase extends this result to the bivariate approximation of theorem [den2d]. the final phase provides the "hungarian" construction of theorem [th2]. to clarify the development, the basic ingredients and some preliminary results will be presented first.

[ingredient1] begin with the finite sample density for a regression quantile. assume the response has a density and let $\tau$ be fixed. note that $\hat\beta(\tau)$ is defined by having zero residuals at a basic set of observations (if the design is in general position). specifically, there is a subset $h$ of indices such that the estimator interpolates the corresponding observations, where the design submatrix has rows $x_i'$ for $i \in h$ and the response subvector has coordinates $y_i$ for $i \in h$. let $\mathcal{h}$ denote the set of all such subsets. the density of $\hat\beta(\tau)$ evaluated at an argument is then given by a sum over $h \in \mathcal{h}$, as in ([finite]); here, the event in the probability appearing there is the event that the gradient condition holds for a fixed subset $h$, with the rectangle that is the product of intervals [see theorem 2.1 of the finite-sample theory].

[ingredient2] since the gradient is approximately normal and bounded, the probability in ([finite]) is approximately a normal density evaluated at the appropriate argument. to get a multiplicative bound, we may apply a "cramér" expansion (or a saddlepoint approximation). if the gradient had a smooth distribution (i.e., satisfied cramér's condition), then standard results would apply. unfortunately, it is discrete. the first coordinate is nearly binomial, and so a multiplicative bound can be obtained by applying a known saddlepoint formula for lattice variables. equivalently, approximate the first coordinate by an exact binomial and (more directly, but with some rather tedious computation) expand the logarithm of the gamma function in stirling's formula. using either approach, one can show the following result:

[th3] [bin] let $i_n$ be any interval of moderate length containing the mean, and let the argument lie in $i_n$. then the lattice probability admits the usual normal approximation with a multiplicative error bound.
a proof based on multinomial expansions is given for the bivariate generalization in theorem [den2d]. note that this result includes an extra logarithmic factor. this will allow the bounds to hold except with probability bounded by an arbitrarily large negative power of $n$. this is clear for the limiting normal case (by standard asymptotic expansions of the normal c.d.f.). to obtain such bounds for the distribution of the gradient will require some form of bernstein's inequality. such inequalities date to bernstein's original publication in 1924, but more recent versions may be easier to apply.

[ingredient3] using theorem [bin], it can be shown (see section [sec4]) that the probability in ([finite]) may be approximated in a form where the first coordinate of the sum is a sum of i.i.d. variables and the remaining coordinates are those of the rest of the gradient. since we seek a normal approximation for this probability with multiplicative error, at this point one might hope that a known (multidimensional) "cramér" expansion or saddlepoint approximation would allow the sum to be replaced by a normal vector (thus providing the desired result). however, this will require that the summands be smooth, or (at least) satisfy a form of cramér's condition. let $\tilde s_n$ denote the remaining coordinates. one approach would be to assume the design has a smooth distribution satisfying the classical form of cramér's condition. however, to maintain a conditional form of the analysis, it suffices to impose a condition designed to mimic the effect of a smooth distribution, which will hold with probability tending to one if the design has such a smooth distribution. condition [co1] specifies just such an assumption. note that the characteristic functions of the summands will also satisfy condition [co1] [equation ([x-cond1])] and so should allow application of known results on normal approximations. unfortunately, i have been unable to find a published result providing this, and so section [sec5] will present an independent proof. clearly, some additional conditions will be required. specifically, we will need conditions under which the empirical moments of the design converge appropriately, as specified in condition [co2]. finally, the approach using characteristic functions is greatly simplified when the sums have densities. again, to avoid using smoothness of the design distribution (and thus to maintain a conditional approach), introduce a random perturbation which is small and has a bounded smooth density (the bound may depend on $n$). section [sec4] will then prove the following:

[th4] [sina] assume conditions [co1] and [co2] and the regression quantile model of section [sec1]. let $\delta$ be the argument of the density of the standardized estimator, and suppose $\|\delta\|$ is bounded by a constant times $\sqrt{\log n}$. then a constant can be chosen so that the probability of interest is approximated by the corresponding normal probability for a vector with matching mean and covariance, with a multiplicative error of nearly root-$n$ order and an additive error bounded by an arbitrarily large negative power of $n$, up to a small perturbation [see ([vbound])].
following the proof of this theorem, it will be shown that the effect of the perturbation can be ignored if its scale is suitably bounded, where the bound may depend on $n$ (but not on the argument).

[ingredient4] expanding the densities in ([finite]) is trivial if the densities are sufficiently smooth. the assumption of a bounded first derivative in condition [cof] appears to be required to analyze second-order terms (beyond the first-order normal approximation).

[ingredient5] finally, summing the terms in ([finite]) over the subsets will require vinograd's theorem and related results from matrix theory concerning adjoint matrices.

the remaining ingredients provide the desired "hungarian" construction.

[ingredient6] extend the density approximation to the joint density for the quantile and the difference (when standardized). a major complication is that one needs $\tau_2 - \tau_1 \to 0$, making the covariance matrix tend to singularity. thus, we focus on the joint density for standardized versions of the quantile and the difference. clearly, this requires modification of the proof for the univariate case to treat the fact that the difference converges at a rate depending on $\tau_2 - \tau_1$. the result is given in theorem [den2d].

[ingredient7] extend the density result to obtain an approximation for the quantile transform for the conditional distribution of differences (between successive dyadic rationals). this will provide (independent) normal approximations to the differences whose sums will have the same covariance structure as the regression quantile process (at least along a sufficiently sparse grid of dyadic rationals).

[ingredient8] finally, the hungarian construction is applied inductively along the sparse grid of dyadic rationals. this inductive step requires some innovative development, mainly because the regression quantile process is not directly expressible in terms of sums of random variables (as are the empirical one-sample distribution function and quantile function).

let $\tilde s_n$ be the remaining coordinates of the gradient sum and consider the relevant interval for the first coordinate. then the probability of interest equals the probability of a suitably shifted set. note that by hoeffding's inequality, for any fixed coordinate, the shift satisfies the required bound except with probability bounded by a negative power of $n$. thus, we may apply theorem [bin] [equation ([binbd])] with the interval equal to the shift above to obtain the corresponding bound (to within an additional additive error), where the bound on the shift may be taken to be of order $\sqrt{n \log n}$ (by hoeffding's inequality). finally, we obtain an approximation in which the first coordinate of the sum is a sum of i.i.d. random variables and the remaining coordinates are those of $\tilde s_n$. to treat the probability involving $\tilde s_n$, standard approaches using characteristic functions can be employed. in theory, exponential tilting (or saddlepoint methods) should provide better approximations, but since we require only the order of the leading error term, we can proceed more directly. the first step is to add an independent perturbation so that the sum has an integrable density: specifically, for fixed $n$ let $v$ be a random variable (independent of all observations) with a smooth bounded density and with small scale, to be chosen later. we now allow the set to be arbitrary. thus, the perturbed sum has a density, and we can write its probability as an integral of a characteristic function. break the domain of integration into three sets: a central region, an intermediate region, and a tail region.
on the central region, expand the characteristic function. for this, compute the first two moments of the summands; in particular, the covariance of each summand is

$$ x_i x_i'\,\tau(1-\tau) + \mathcal{O}\bigl(\|x_i\|^3\,\|\delta\|^2/n\bigr). $$

hence, using the boundedness of the design and the smallness of the argument (on this first region), the characteristic function is approximated by the normal one with covariance built from $g$ and $h$, which are defined in condition [co2] [see ([gdef]) and ([hdef])]. for the other two regions, the integrands will be bounded by an additive error. on the intermediate region, the summands are bounded and so their characteristic functions satisfy a geometric-decay bound for some constant; thus, on this region, the integrand is exponentially small. therefore, integrating over it provides an additive bound of negative-power order, where the constants (for any desired power) can be chosen sufficiently large. finally, on the tail region, condition [co1] [see ([x-cond1])] gives an additive bound directly and, again (as on the previous region), an additive error bounded by an arbitrarily large negative power of $n$ can be obtained. therefore, it now follows that the constants can be chosen (depending on the design bounds and the desired powers) so that the desired approximation holds, from which theorem [sina] follows.

finally, we show that the contribution of the perturbation $v$ can be ignored: the error is controlled by the probability of the symmetric difference of the original and shifted sets. since the density is bounded and the perturbation is small, this symmetric difference is contained in a set which is the union of (boundary) parallelepipeds, each a rectangle one of whose coordinates has small width while all other coordinates have length 1. thus, applying theorem [sina] (as proved for such sets), the contribution is negligible, where the negative power of $n$ may be chosen arbitrarily large.

[th5] [den1d] assume conditions [co1], [co2], [cof] and the regression quantile model of section [sec1]. let $\delta$ be the argument of the density of the standardized estimator and suppose $\|\delta\|$ is bounded by a constant times $\sqrt{\log n}$. then, uniformly in such arguments, the density equals the normal density with covariance built from $g$ and $h$ [see ([gdef]) and ([hdef])], up to a multiplicative error of nearly root-$n$ order.

recall the basic formula for the density ([finite]). by theorem [sina], ignoring the multiplicative and additive error terms given in that result, the probability may be replaced by a normal probability over the corresponding rectangle, since the normal density is bounded by a constant times its central value on the rectangle and the resulting integral factors. by ingredient [ingredient4], the product of the densities expands to give the main term of the approximation. the penultimate step is to apply results from matrix theory on adjoint matrices [specifically, the cauchy-binet theorem and the "trace" theorem]: the sum over subsets is just the trace of the adjoint of the design matrix product, which equals the appropriate determinant. the various determinants combine (with the normalizing factor) to give the asymptotic normal density we want. finally, we need to combine the multiplicative and additive errors into a single multiplicative error. so consider arguments bounded by a constant times $\sqrt{\log n}$; then the asymptotic normal density is bounded below by a negative power of $n$. thus, since the constants (which depend on the various bounds) can be chosen so that the additive errors are smaller than this lower bound times the desired multiplicative error, the error is entirely subsumed in the multiplicative factor.

we first prove theorem [den2d], which provides the bivariate normal approximation.

proof of theorem [den2d]. the proof follows the development in theorem [den1d]. the first step treats the first (intercept) coordinate. since the binomial expansions were omitted in the proof of theorem [bin], details for the trinomial expansion needed for the bivariate case will be presented here. the binomial sum in the first coordinate of ([sndef]) will be split into the sum of observations in three intervals.
the expected number of observations in each interval is within a small error of $n$ times the length of the corresponding interval. thus, ignoring an error of this order, we expand a trinomial with $n$ observations and cell probabilities $p_1$ and $p_2$. let $(n_1, n_2)$ be the (trinomially distributed) numbers of observations in the respective intervals and consider $k_i = n_i - np_i$. we may take

$$ k_1 = \mathcal{O}\bigl((n\log n)^{1/2}\bigr), \qquad k_2 = \mathcal{O}\bigl(a_n(\log n)^{1/2}\bigr), $$

since these bounds are exceeded with probability bounded by an arbitrarily large negative power of $n$. so the trinomial probability factors as $a\,b$, where, expanding (using stirling's formula and some computation),

$$
\begin{aligned}
a &= \frac{1}{2\pi}\exp\biggl\{\frac12\log n
 - \Bigl(np_1+k_1+\frac12\Bigr)\log\Bigl(np_1+\frac{k_1+1}{np_1}\Bigr)
 - \Bigl(np_2+k_2+\frac12\Bigr)\log\Bigl(np_2+\frac{k_2+1}{np_2}\Bigr) \\
&\qquad - \Bigl(n(1-p_1-p_2)-k_1-k_2+\frac12\Bigr)\log\Bigl(n(1-p_1-p_2)-\frac{k_1+k_2-1}{n(1-p_1-p_2)}\Bigr)
 + \mathcal{O}\Bigl(\frac{1}{np_2}\Bigr)\biggr\} \\
&= \frac{1}{2\pi}\exp\biggl\{\frac12\log n - np_1\log p_1 - \Bigl(k_1+\frac12\Bigr)\log(np_1)
 - np_2\log p_2 - \Bigl(k_2+\frac12\Bigr)\log(np_2) \\
&\qquad - n(1-p_1-p_2)\log(1-p_1-p_2) - \Bigl(k_1+k_2+\frac12\Bigr)\log\bigl(n(1-p_1-p_2)\bigr) \\
&\qquad - \frac{k_1^2}{np_1} - \frac{k_2^2}{np_2} - \frac{(k_1+k_2)^2}{n(1-p_1-p_2)}
 + \mathcal{O}\Bigl(\frac{k_2^3}{(np_2)^2}\Bigr)\biggr\} \\
&= \frac{1}{2\pi}\exp\biggl\{-\log n - \Bigl(np_1+k_1+\frac12\Bigr)\log p_1 - \Bigl(np_2+k_2+\frac12\Bigr)\log p_2
 - \Bigl(n(1-p_1-p_2)-k_1-k_2+\frac12\Bigr)\log(1-p_1-p_2) \\
&\qquad - \frac{k_1^2}{np_1} - \frac{k_2^2}{np_2} - \frac{(k_1+k_2)^2}{n(1-p_1-p_2)}
 + \mathcal{O}\Bigl(\frac{(\log n)^{3/2}}{n\,a_n^2}\Bigr)\biggr\},
\end{aligned}
$$

$$
b = \exp\bigl\{(np_1+k_1)\log p_1 + (np_2+k_2)\log p_2 + \bigl(n(1-p_1-p_2)-k_1-k_2\bigr)\log(1-p_1-p_2)\bigr\}.
$$

therefore,

$$
a\,b = \frac{1}{2\pi n}\,\bigl(p_1\,p_2\,(1-p_1-p_2)\bigr)^{-1/2}
\exp\biggl\{-\frac{k_1^2}{np_1} - \frac{k_2^2}{np_2} - \frac{(k_1+k_2)^2}{n(1-p_1-p_2)}
 + \mathcal{O}\Bigl(\frac{(\log n)^{3/2}}{n\,a_n^2}\Bigr)\biggr\}.
$$

some further simplification shows that this gives the usual normal approximation to the trinomial with a multiplicative error of the stated order [when $k_1$ and $k_2$ satisfy ([ki])]. the next step of the proof follows that of theorem [sina] (see ingredient [ingredient3]). since the proof is based on expanding characteristic functions (which do not involve the inverse of the covariance matrices), all uniform error bounds continue to hold. this extends the result of theorem [sina] to the bivariate case: the probability of interest satisfies

$$
\begin{aligned}
& p\bigl\{ z_1 \in a_{h_1}/\sqrt{n},\; z_2 \in a_{h_2}/\sqrt{n} \bigr\} \\
&\qquad = p\bigl\{ z_1 \in a_{h_1}/\sqrt{n} \bigr\}\times p\bigl\{(z_2-z_1)/\sqrt{n} \in (a_{h_2}-z_2)/\sqrt{n} \,\big|\, z_1 \bigr\}
\end{aligned}
$$

for appropriate normally distributed $(z_1, z_2)$ (depending on $n$).
this last equation is needed to extend the argument of theorem [den1d], which involves integrating normal densities. the joint covariance matrix for the pair is nearly singular (for small $\tau_2-\tau_1$) and complicates the bounds for the integral of the densities. the first factor above can be treated exactly as in the proof of theorem [den1d], while the conditional densities involved in the second factor can be handled by simple rescaling. this provides the desired generalization of theorem [den1d]. thus, the next step is to develop the parameters of the normal distribution for the pair [see ([bndef]), ([rndef])] in a usable form. the covariance matrix has blocks built from the matrices $g$ and $h$ given in condition [co2] [see ([gdef]) and ([hdef])]. expanding about $\tau_1$ (using the differentiability of the densities from condition [cof]) introduces derivative terms evaluated at $\tau_1$. straightforward matrix computation now yields the joint covariance for the pair, in which the correction blocks are uniformly bounded matrices. thus, the conditional distribution of $r_n$ given $b_n(\tau_1)$ has moments

$$
\begin{aligned}
e\bigl[r_n \,\big|\, b_n(\tau_1)\bigr] &= (\tau_2-\tau_1)\,\lambda_{11}^{-1}\,\delta_{12}\big/\bigl(\tau_1(1-\tau_1)\bigr),\\
\operatorname{cov}\bigl[r_n \,\big|\, b_n(\tau_1)\bigr] &= (\tau_2-\tau_1)\biggl[\delta_{22}^{*} - \frac{\tau_2-\tau_1}{\tau_1(1-\tau_1)}\,\delta_{21}^{*}\,\lambda_{11}^{-1}\,\delta_{12}^{*}\biggr],
\end{aligned}
$$

and analogous equations also hold for the other direction of conditioning. finally, recalling the normalization, the second term in ([zcond]) can be rewritten accordingly. thus, since the conditional covariance matrix is uniformly bounded except for the factor $\tau_2-\tau_1$, the argument of theorem [den1d] also applies directly to this conditional probability.

finally, the above results are used to apply the quantile transform for increments between dyadic rationals inductively in order to obtain the desired "hungarian" construction. the proof of theorem [hung] is as follows.

proof of theorem [hung]. (i) following the approach in the one-sample case, the first step is to provide the result of theorem [den2d] for conditional densities one coordinate at a time. using the notation of theorem [den2d], let $\tau_1$ and $\tau_2$ be successive dyadic rationals (between fixed bounds) with denominator $2^\ell$, so $\tau_2 - \tau_1 = 2^{-\ell}$. let $r_j$ be the $j$th coordinate of $r_n$ [see ([rndef])], and let the conditioning variables be the coordinates before the $j$th one. then the conditional density of $r_j$ satisfies the analogous normal approximation for bounded arguments, with conditional mean and variance easily derived from ([econd]) and ([covcond]). note that the conditional density has a product form in which one factor can be bounded (independent of the coordinates) and the other can be bounded away from zero and infinity (independent of the coordinates). this follows since the conditional densities are ratios of marginal densities of the form treated above (each satisfying theorem [den2d]). the integral over the central region has the multiplicative error bound directly. the remainder of the integral is bounded by an arbitrarily large negative power of $n$, which is smaller than the normal integral over the tail (see the end of the proof of theorem [den1d]).

(ii) the second step is to develop a bound on the (conditional) quantile transform in order to approximate an asymptotically normal random variable by a normal one. the basic idea appears in the one-sample construction. clearly, from ([condden]), the conditional c.d.f.s inherit the multiplicative error bound for bounded arguments.
by condition [cof], the conditional densities (of the response given the design) are bounded away from zero on the compact interval. hence, the inverses of the above versions of the c.d.f.s also satisfy this multiplicative error bound, at least for variables bounded by the stated threshold. thus, the quantile transform can be applied to show that for each coordinate there is a normal random variable close to it, so long as both the variable and its quantile transform are bounded appropriately. using the conditional mean and variance [see ([mean1])], and the fact that the random variables exceed the threshold with probability bounded by an arbitrarily large negative power of $n$ (where the power can be made large by choosing the constant large enough), there is a normal random variable that can be chosen independently so that the coupling bound holds except with probability bounded by that negative power.

(iii) finally, the "hungarian" construction will be developed inductively along the levels of the dyadic grid. first consider one direction of the induction; the argument for the other direction is entirely analogous. define the error constant in terms of a bound on the big-o term in any equation of the form ([couple]), uniform over the grid. the induction hypothesis is as follows: there are normal random vectors such that the coupling holds except with small probability, where for each level the gaussian vector has the same covariance structure as the regression quantile increments, and the error accumulates as specified. note: since the earlier bounds apply only for intervals whose lengths exceed a negative power of $n$, the depth of the dyadic grid must be taken correspondingly small; thus the bound in ([epsdef]) becomes the bound stated in theorem [den2d]. to prove the induction result, note first that theorem [den2d] (or theorem [den1d]) provides the normal approximation at the coarsest level. the induction step is proved as follows: take two consecutive dyadic rationals $\tau(k-1,\ell)$ and $\tau(k+1,\ell)$ with $k$ odd, so that

$$ \tau(k-1,\ell) = \frac{[k/2]}{2^{\ell-1}} = \tau\bigl([k/2],\,\ell-1\bigr). $$

condition each coordinate of the process at the new dyadic point on the previous coordinates and on its value at $\tau([k/2],\ell-1)$; by step (ii), each such conditioned increment is approximable by normal random variables to within the coupling error (except with small probability). thus, a coordinate of the difference from the value at $\tau([k/2],\ell-1)$ is bounded by the number of coordinates times the coupling error. finally, since the new normal variable is independent of the previously constructed ones, the errors can be added; therefore, except with small probability, the induction hypothesis ([induct1]) holds with the stated error and the induction is proven. the theorem now follows since the piecewise linear interpolants satisfy the same error bound.

[re1] under the conditions for the theorems here, the coverage probability for the confidence interval ([confint]) is accurate to essentially the one-sample rate, which is achieved at $h_n = c\,n^{-1/3}$ (where $c$ is a constant).

sketch of proof. recall the notation of remark [remark2] in section [sec2].
using theorem [th1] and the quantile transform as described in the first steps of theorem [th2] (and not needing the dyadic expansion argument), it can be shown that there is a bivariate normal pair $(z_1, z)$ such that

$$
\begin{aligned}
\sqrt{n}\,\bigl(\lambda'\hat\beta(\tau) - \lambda'\beta(\tau)\bigr) &= z_1 + r_n, &\qquad r_n &= \mathcal{O}_p\bigl(n^{-1/2}(\log n)^{3/2}\bigr),\\
\sqrt{n}\,\bigl(\hat\delta(h_n) - \delta(h_n)\bigr) &= z + r_n^{*}, &\qquad r_n^{*} &= \mathcal{O}_p\bigl(n^{-1/2}(\log n)^{3/2}\bigr).
\end{aligned}
$$

note that from the proofs of theorems [th1] and [th2], the error terms above actually hold except with probability bounded by a negative power of $n$, where the power is an arbitrary fixed constant. the "almost sure" results above take a large power, but a moderate one will suffice for the bounds on the coverage probability here. now consider expanding the coverage probability. first, note that under the design conditions here, the variance factor will be of exact order one; specifically, if $\lambda$ is rescaled appropriately, all terms involving it will remain bounded, and we may focus on the remaining quantities. note also that the higher terms in the expansion of the sparsity estimator tend to zero [specifically, they are of smaller order]. so the sparsity may be expanded in a taylor series in which the linear term involves a (gradient) vector that can be defined in terms of the density (and its derivatives), followed by a quadratic function and a cubic function of the vector argument. note that under the design conditions, all the coefficients in these terms are bounded, and so it is not hard to show that all the terms tend to zero as long as the bandwidth does not shrink too quickly; in particular, if the bandwidth is of the optimal order, all the terms tend to zero. also, the quadratic term is within a logarithmic factor of its typical size and the cubic term is even smaller. finally, $\hat\delta(h_n)$ is a difference of two quantiles separated by $2h_n$, and so (after the $\sqrt{n}$ normalization) has variance proportional to the separation. thus, not only does the leading error tend to zero, but powers of this term greater than 2 will also be negligible. it follows that the coverage probability may be computed using only two terms of the taylor series expansion for the normal c.d.f. note that the (normal) conditional distribution of $z$ given $z_1$ is straightforward to compute (using the usual asymptotic covariance matrix for quantiles): the conditional mean is a small constant (of the order of the bandwidth) times $z_1$, and the conditional variance is bounded. expanding the lower probability in the same way and subtracting provides some cancellation. the contribution of the conditional mean will cancel in the differences, and is negligible in subsequent terms. similarly, the remainder term will appear only in the difference, where it contributes a factor times a term of smaller order, and will also be negligible in subsequent terms. also, the squared term will appear only once, as higher powers will be negligible. the only remaining terms involve the normalized error of the sparsity estimate. for the first power, its contribution is controlled directly; for the squared terms, since the variance of the difference is proportional to the separation, the contribution is of the corresponding order, and all other such terms have smaller order. therefore, one can obtain the following error for the coverage probability: for some constants, the error is the sum of a bias term and a variance term (plus terms of smaller order). since the two leading terms have nearly the same order at the optimum, it is straightforward to find the optimal bandwidth to be a constant times $n^{-1/3}$ (up to logarithmic factors), which bounds the error in the coverage probability by essentially $n^{-2/3}$.
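as a concrete illustration of the interval from remark [remark2] and remark [re1], the following sketch builds a difference-quotient sparsity estimate with a bandwidth of the hall-sheather order. the bandwidth constant, the simulated model, and the i.i.d.-error simplification of the sandwich variance are assumptions for illustration, not the exact construction analyzed above; the fit relies on the `QuantReg` implementation in statsmodels.

    import numpy as np
    from scipy.stats import norm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(0)
    n, tau = 500, 0.5
    x = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = x @ np.array([1.0, 2.0]) + rng.standard_t(df=5, size=n)

    def qr_coef(q):
        # koenker-bassett regression quantile at level q
        return QuantReg(y, x).fit(q=q).params

    lam = np.array([0.0, 1.0])      # linear combination of interest (the slope)
    h = 0.5 * n ** (-1.0 / 3.0)     # bandwidth of the optimal *order*; the
                                    # constant 0.5 is an arbitrary placeholder
    # difference-quotient sparsity estimate along the direction lam
    shat = lam @ (qr_coef(tau + h) - qr_coef(tau - h)) / (2.0 * h)
    # i.i.d.-error simplification of the sandwich variance
    var = tau * (1.0 - tau) * shat**2 * (lam @ np.linalg.inv(x.T @ x) @ lam)
    z = norm.ppf(0.975)
    est = lam @ qr_coef(tau)
    print("95% ci:", est - z * np.sqrt(var), est + z * np.sqrt(var))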
traditionally, assessing the accuracy of inference based on regression quantiles has relied on the bahadur representation. this provides an error of order essentially $n^{-1/4}$ in normal approximations, and suggests that inference based on regression quantiles may not be as reliable as that based on other (smoother) approaches, whose errors are generally of order $n^{-1/2}$ (or better in special symmetric cases). fortunately, extensive simulations and empirical applications show that inference for regression quantiles shares the smaller error rates of other procedures. in fact, the "hungarian" construction of komlós, major and tusnády [_z. wahrsch. verw. gebiete_ *32* (1975) 111-131, _z. wahrsch. verw. gebiete_ *34* (1976) 33-58] provides an alternative expansion for the one-sample quantile process with nearly the root-$n$ error rate (specifically, to within a power of $\log n$). such an expansion is developed here to provide a theoretical foundation for more accurate approximations for inference in regression quantile models. one specific application of independent interest is a result establishing that for conditional inference, the error rate for coverage probabilities using the hall and sheather [_j. r. stat. soc. ser. b stat. methodol._ *50* (1988) 381-391] method of sparsity estimation matches their one-sample rate.
in this chapter, we give a brief sketch of the field of complexity science: the questions that led to the emergence of this new approach to understanding real-world structures and their dynamics. we discuss in particular the emergence of the new field of research, _network science_, and the various network models, their characteristic properties and their time-dependent dynamic behaviour. in relation to the theme of the thesis, we outline the significance of the complex networks approach to understanding public transportation infrastructures. further, we review the literature in the area of structurally constrained networks of bus routes in urban cities, which forms the core of the thesis. we end this section by elaborating on dynamical processes, like information percolation and phase transitions in networks in general, and epidemic spreading in transportation networks in particular.

nonlinearity is the essence of reality. the emergence of the field of complexity science is often attributed to the fact that real-world processes are effectively non-deterministic due to the presence of numerous variables and their nonlinear combinations. real-world systems are also hard to comprehend because the agents constituting those systems show non-trivial interactions between them. in the context of these systems, it is always observed that the system as a whole is greater than the sum of its constituent parts. therefore, these systems are complex not only because of their scale but also because of their functionality. the growing interest in understanding the underlying machinery of these _complex systems_ has given rise to numerous mathematical and simulation techniques. some of the widely used techniques include agent-based modelling, time series analysis, ant-colony optimization, cellular automata, nonlinear differential equations, information theory and network theory. amongst these existing mathematical methods, network theory in particular has been immensely successful in describing real-world systems and processes in recent times. the underlying mathematical foundation of network theory, or commonly, network science, is the theory of graphs. one of the oldest examples of using graph theory to analyze a real-world problem dates back to as early as 1736, when the resolution of the famous königsberg bridge problem by euler laid the foundations of graph theory (see figure 1). the abstract approach by euler made the geographical intricacies present in the problem seem totally irrelevant. his simple yet abstract formulation of the problem in terms of vertices (nodes) and edges (links) provided an elegant approach to model real-world structures as graphs or networks. the first physical application of graph-theoretical ideas was discovered by gustav kirchhoff in 1845 for calculating the voltage and current in electric circuits.
since then, the use of graph theory as a modelling technique has found innumerable applications across diverse disciplines. one of the very important properties that these network models capture is the way different agents interact with each other in a connected system, which gives rise to non-trivial (emergent) properties in the system. be it social ties or technological interactions, socio-economic infrastructures or interactions between biological entities, almost everything can be modelled as a graph containing nodes and edges. complex network science today has established itself as a mainstream field of research in the physical sciences. although the fundamental breakthroughs in network science are often attributed to statistical physicists and mathematicians, the origin of this field of study surprisingly lies in the social sciences. two spectacular examples of studies that gave birth to network science as we know it today are milgram's 'small-world experiment' of 1967 and granovetter's theory on the spread of information in social networks in 1973. the results from milgram's experiment helped us to understand how closely connected we are through our social ties, whereas granovetter's theory stressed the importance of the individual connections that a node shares in the network. both observations led to an increased surge of interest in this new field of study by mathematicians and social scientists. with the advancement in information technology that enabled the availability of large amounts of real-world data and improved computational resources, physicists eventually came into the picture in the early 1990s. it was during this time that the structure and topology of large-scale complex networks were being studied, and fundamental questions like the nature of connections and, in general, the statistical properties of the network's building blocks, the nodes, were being asked. the field which initially branched out from the mathematical theory of graphs suddenly found its applications in and similarities to statistical physics. no longer were the properties of a single node the question of interest. the group behaviour of the nodes, how they connect to each other, became a significant non-trivial question of academic pursuit.

the field of network science has emerged out of the mathematical theory of graphs. therefore, the underlying mathematical structure of networks and their various characteristic properties use the language of graph theory for theoretical and computational purposes. before we go into the details of the theoretical questions of interest (those we pointed out earlier), it is necessary to first mathematically describe a network and discuss certain characteristic features of these graphs which are of interest in the particular context of the thesis. we will also observe later that these are the properties which help in differentiating between various network models.

[figure 2: an example graph and its adjacency matrix; for an undirected graph the adjacency matrix is diagonally symmetric.]

we define a graph $g = (v, e)$, where $v = \{v_1, \dots, v_n\}$ is the set of nodes and $e = \{e_{ij}\}$ is the set of links, each $e_{ij}$ connecting the node pair $(v_i, v_j)$. the set of nodes belongs to the n-dimensional euclidean space, and the set of links forms a subset of the cartesian product $v \times v$. a graph can be directed or undirected: if a graph is directed, then $e_{ij} \neq e_{ji}$, whereas if a graph is undirected, we have $e_{ij} = e_{ji}$.
another important mathematical structure associated with graphs is the adjacency matrix $a$. the adjacency matrix is a representation of the network as a square matrix. each element $a_{ij}$ takes a value of $0$ or $1$ when the graph is unweighted: $a_{ij} = 0$ when the link does not exist, whereas a nonzero entry signifies the presence of the link, with the $(i, j)$ ordering encoding its directionality in the directed case. if the graph in question is a weighted graph, then $a_{ij}$ can take any real value. the weights in the graph can represent diverse quantities, such as strength of connections (social networks), frequency of interactions (call networks), travel times (transportation networks), distances (road networks), etc. for an undirected graph, the directionality of the links does not play any role ($a_{ij} = a_{ji}$), and the adjacency matrix takes on a diagonally symmetric form. also, note that a nonzero diagonal entry $a_{ii}$ represents a self-loop in the network (see figure 2).

an important measure is the characteristic path length $l$, which is defined as the average number of nodes crossed along the shortest paths for all possible pairs of network nodes. the average distance $d_i$ from a certain vertex $v_i$ to every other vertex is computed first; then $l$ is calculated by taking the median of all the $d_i$. the characteristic path length helps in identifying whether a small-world phenomenon exists in the network or not. the significance of the small-world property is two-fold: first, it signifies a very low characteristic path length, which simply means that given any pair of randomly chosen nodes, the number of 'hops' needed to reach one from the other will be very low compared to the network size; second, it also characterizes how the metric changes as the network size changes (the number of nodes present in a network is called its size). milgram's experiment showed that on average any randomly chosen pair of individuals is separated by six acquaintances (hence the popular term: _six degrees of separation_).

another important network metric is the clustering coefficient, which measures the extent to which nodes tend to cluster in a network. the concept of the clustering coefficient originated mainly from studying social networks, where a pair of individuals are connected by a link if they are friends. in social networks, close friends tend to form close-knit communities where each individual in the group knows every other individual. in figure 3, we can observe that the maximum number of triangles possible for the blue node is three. in the first case, since every node is connected to every other node, the magnitude of the clustering coefficient is one. in the second and the last case, when the dotted (red) links are removed, the possible number of triangles gets reduced to one and subsequently to zero. hence, the magnitude of the clustering coefficient is one-third in the second case and zero in the latter. the above example illustrates the local clustering coefficient, which is in reference to a particular node in the network. the local clustering coefficient is given by

$$ c_i = \frac{\bigl|\{e_{jk} : v_j, v_k \in n_i,\; e_{jk} \in e\}\bigr|}{k_i (k_i - 1)/2}\,. $$

in the above expression, $v_j$ and $v_k$ are neighbours of the node $v_i$, $k_i$ is its degree, and the neighbourhood $n_i$ of a node $v_i$ is defined as the set of its immediately connected neighbours, $n_i = \{v_j : e_{ij} \in e\}$.
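the metrics just defined are one-liners in standard graph libraries; a minimal sketch using networkx follows (note that `average_shortest_path_length` returns the mean over pairs, whereas the definition above takes a median):

    import networkx as nx

    # a small undirected example: a triangle with a two-link tail
    g = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])

    print(nx.to_numpy_array(g))                  # adjacency matrix (symmetric)
    print(nx.clustering(g, 0))                   # local clustering of node 0 -> 1.0
    print(nx.average_shortest_path_length(g))    # characteristic path length (mean)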
for the complete network, watts and strogatz defined a global clustering coefficient as the average of the local values, $\bar c = \frac{1}{n}\sum_{i=1}^{n} c_i$. hence, the clustering coefficient measures the extent to which nodes tend to form close-knit groups in a network. a network with a small characteristic path length and a high clustering coefficient is called a small-world network.

[figure 3: local clustering in an undirected graph. in the first case $c_i = 1$, as every node is connected to every other node. in the second case, the removal of two links (red, dotted) causes the number of triangles for the blue node to reduce to one, hence $c_i = 1/3$. in the last case, an additional removal of a link causes the number of triangles for the blue node to reduce to zero, therefore $c_i = 0$.]

although the above metrics are crucial in identifying the small-world phenomenon in a network, they do not help us in understanding the network topology. understanding the network topology holds the key to several interesting and non-trivial questions. in order to answer questions about network robustness or node centrality, we need to look at the pattern by which the nodes are connected in the network. before we go into the details of various degree-distribution patterns, we define the degree of a node in a network. the degree of a node is the number of neighbours to which it is directly connected. in an undirected graph, the degree of a node is mathematically expressed as $k_i = \sum_j a_{ij}$. in a directed graph, the degree of a node will depend upon the directionality of the links, which either emerge _out_ of the node or converge _into_ the node. therefore, in a directed graph, a node will have an out-degree ($k_i^{\rm out}$, summed over outgoing links) and an in-degree ($k_i^{\rm in}$, summed over incoming links), with $k_i = k_i^{\rm out} + k_i^{\rm in}$.

when the network as a whole is studied, the significance of an individual node takes the backseat (unless we are interested in node-specific properties), and the way the degrees are distributed throughout the network becomes a question of extreme interest. we define the degree distribution $p(k)$ of a network as the probability that a node has a degree of at least $k$. the notion of the degree distribution holds a central role in network science. networks that follow a similar degree-distribution law tend to show similar network characteristics. therefore, the degree-distribution function can be used as a signature to differentiate between different network classes. among all the possible degree-distribution laws, the ones that are most frequently encountered are the poisson, exponential and power-law patterns. the poissonian form of the degree distribution is given by

$$ p(k) = e^{-\langle k\rangle}\,\frac{\langle k\rangle^{k}}{k!}\,. \qquad (2) $$

similarly, the exponential degree distribution is given by

$$ p(k) \sim e^{-k/\kappa}\,, \qquad (3) $$

and finally, the power-law degree distribution is given by

$$ p(k) \sim k^{-\gamma}\,. \qquad (4) $$

in the above expressions, $\langle k\rangle$ denotes the average node degree, and $\kappa$ and $\gamma$ the exponential and power-law degree exponents. although all the distribution functions (in eqns. 2, 3, 4) decay for large magnitudes of $k$, a special feature of the distributions in eqns. 2 and 3 is that they contain a typical scale: it is either the location of the maximum for the poisson distribution, or the characteristic decay length $\kappa$ for the exponential. on the contrary, the power-law distribution in eqn. 4 does not contain such a scale. networks with a power-law degree distribution are therefore called scale-free networks.
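the presence or absence of hubs is easy to probe numerically; the sketch below contrasts an er graph (poissonian degrees, discussed in the next section) with a preferential-attachment graph of the barabási-albert type (heavy-tailed degrees, discussed at the end of this chapter). the sizes are chosen arbitrarily for illustration:

    import numpy as np
    import networkx as nx

    n = 20000
    g_er = nx.gnp_random_graph(n, p=8.0 / n, seed=1)   # mean degree ~ 8, poissonian
    g_ba = nx.barabasi_albert_graph(n, m=4, seed=1)    # mean degree ~ 8, heavy tail

    for name, g in [("er", g_er), ("ba", g_ba)]:
        deg = np.array([d for _, d in g.degree()])
        print(name, "mean degree:", round(deg.mean(), 2), "max degree:", deg.max())

the two graphs have essentially the same mean degree, yet the maximum degree of the preferential-attachment graph is an order of magnitude larger: those are the hubs.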
when the degree exponent $\gamma \le 2$, the average node degree diverges, and when $\gamma \le 3$, the standard deviation of the degree diverges. eqns. 2, 3 and 4 are probability density functions and must therefore be normalized: $\int c\,p(k)\,dk = 1$ for the continuous case and $\sum_k c\,p(k) = 1$ for the discrete case. the constant $c$ is known as the normalization constant. clearly, the distribution in eqn. 4 diverges as $k \to 0$, so eqn. 4 cannot hold for all $k \ge 0$, _i.e._, there must be some lower bound to the power-law behavior. we will denote this bound by $k_{\min}$. then, provided $\gamma > 1$, it is straightforward to calculate the normalization constant, and we find that

$$ p(k) = \frac{\gamma - 1}{k_{\min}}\left(\frac{k}{k_{\min}}\right)^{-\gamma}\,. $$

for the discrete case, the distribution in eqn. 4 diverges at $k = 0$, so there must again be a lower bound on the power-law behavior. on calculating the normalization constant, we find that

$$ p(k) = \frac{k^{-\gamma}}{\zeta(\gamma, k_{\min})}\,, $$

where the hurwitz zeta function $\zeta(\gamma, k_{\min})$ is given by

$$ \zeta(\gamma, k_{\min}) = \sum_{n=0}^{\infty} (n + k_{\min})^{-\gamma}\,. $$

the poisson distribution is strongly peaked about the mean and has a tail that decays very rapidly as $k \to \infty$. this rapid decay is completely different from the heavy-tailed degree distributions observed in many real-world complex networks. real-world networks tend to show degree heterogeneity, _i.e._, a very small fraction of nodes, called 'hubs', tends to hold the majority of the connections in the network. instead of having degrees normally distributed about the mean degree (the poissonian distribution function), real-world networks show tails which are heavily skewed towards the right. also, it is hard to find real-world networks that show a perfect power-law or exponential distribution; the majority of networks tend to show a combination of them, such as a power-law degree distribution with an exponential cut-off, or a combination of different power-law exponents. whatever the case may be, it is important that we find proper explanations for these non-trivial characteristics of real-world complex networks. in the following sections, we will elaborate on the important network models that explain some of the interesting features of the networks that we saw earlier. these are not an exhaustive list of network properties; we will see in the later sections that there are numerous other network metrics that describe both global network properties and local nodal characteristics.

in this section, we discuss the three prominent network models. starting with the random graph model by paul erdős (see figure 4) and alfréd rényi in 1959, which laid down the fundamental ideas of network science, we look into the small-world properties present in a network following duncan watts and steven strogatz in 1998. finally, in order to understand the presence of heavy tails in the degree-distribution patterns of real-world networks, we look into the scale-free network model proposed by albert-lászló barabási and réka albert in 1999.

the erdős-rényi (er) model is either of two closely related network models in graph theory for generating random graphs. in the $g(n, m)$ model introduced by erdős and rényi, the number of vertices ($n$) and the number of edges ($m$) are fixed, implying that all possible graph combinations are equally likely. in the $g(n, p)$ model introduced by gilbert, each edge has a fixed probability $p$ of being present or absent, independently of the other edges. a minimal construction of both variants is sketched below.
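the following sketch instantiates the two variants, with arbitrary sizes, assuming networkx is available:

    import networkx as nx

    n = 1000
    g_nm = nx.gnm_random_graph(n, m=2500, seed=7)    # g(n, m): edge count fixed
    g_np = nx.gnp_random_graph(n, p=0.005, seed=7)   # g(n, p): each edge present with prob. p

    print(g_nm.number_of_edges())    # exactly 2500
    print(g_np.number_of_edges())    # random; expectation p * n * (n - 1) / 2 ~ 2497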
In the $G(n, p)$ model, a graph is thus constructed by connecting nodes randomly, with each edge having a fixed probability $p$ of being present, independently of every other edge. All graphs with $n$ nodes and $M$ links therefore have equal probability $p^{M}(1-p)^{\binom{n}{2}-M}$. As the parameter $p$ increases from 0 to 1, the model becomes more likely to include graphs with more edges and less likely to include graphs with fewer edges. In particular, when $p = 0.5$, all graphs on $n$ vertices are chosen with equal probability. One of the interesting features of the ER random graph is the degree-distribution pattern of the nodes, which follows the binomial form and approaches the Poissonian form of eqn. 2 when $n \to \infty$ with $\langle k \rangle = p(n-1)$ constant. Some of the important features of random graphs that Erdős and Rényi pointed out are:

* if $np < 1$, then a graph in $G(n, p)$ will almost surely have no connected component of size larger than $O(\log n)$
* if $np = 1$, then a graph in $G(n, p)$ will almost surely have a largest component whose size is of order $n^{2/3}$
* if $np \to c > 1$, then a graph in $G(n, p)$ will almost surely have a unique giant component containing a positive fraction of the vertices, and no other component will contain more than $O(\log n)$ vertices

Although these models ($G(n, M)$ and $G(n, p)$) can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs, they hardly capture the essence of real-world network properties.

[Figure 4: ER random graphs with increasing wiring probability $p$ (left to right); a ten-fold increase in the wiring probability causes the number of links to increase roughly ten-fold.]

The ER model provides a good mathematical description of graphs and the various properties associated with them. Although there may be instances when ER graphs show exceptionally small characteristic path lengths, they do not show two important properties observed in many real-world networks:

* presence of clusters or triadic closures: ER graphs have a low clustering coefficient
* presence of exceptionally high-degree nodes or hubs: degree distributions in ER graphs converge to a Poisson form rather than a power-law pattern

In order to address the first of these two limitations, Duncan Watts and Steven Strogatz devised a simple model that kept the characteristic path length small (similar to a random graph) but added the additional attribute of clustering. The model interpolates between a regular ring lattice and an ER random graph, and is able to explain the small-world phenomenon to some extent.
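Before turning to the Watts-Strogatz construction, here is a small sketch of the two ER variants just discussed; the sizes, seed and helper names are ours, not from the source:

```python
# Illustrative sketch of the two random-graph models: G(n, p) includes
# each edge independently with probability p, while G(n, M) fixes the
# number of edges exactly.
import itertools
import random

def gnp(n, p, rng):
    return {e for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def gnm(n, M, rng):
    return set(rng.sample(list(itertools.combinations(range(n), 2)), M))

rng = random.Random(0)
edges_p = gnp(1000, 0.01, rng)            # expected M = p*n(n-1)/2 ~ 4995
edges_m = gnm(1000, len(edges_p), rng)    # same edge count, fixed exactly
print(len(edges_p), len(edges_m))
```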
Given a fixed number of nodes $n$, a mean degree $K$ (assumed to be an even integer), and a special parameter $\beta$ satisfying $0 \le \beta \le 1$, the Watts-Strogatz algorithm constructs an undirected graph with $n$ nodes and $nK/2$ edges in the following way (see Figure 6):

* construct a regular ring lattice, a graph with $n$ nodes each connected to $K$ neighbours, $K/2$ on either side; i.e., if the nodes are labeled $0, \dots, n-1$, there is an edge between nodes $i$ and $j$ if and only if $0 < |i - j| \le K/2$, with the difference taken modulo $n$
* for every node $i = 0, \dots, n-1$, take every edge $(i, j)$ with $i < j$ and rewire it with probability $\beta$; rewiring is done by replacing $(i, j)$ with $(i, m)$, where $m$ is chosen with uniform probability from all possible values that avoid self-loops ($m \ne i$) and link duplication (there is no edge $(i, m)$ at this point in the algorithm)

[Figure 6: the parameter $\beta$ plays a crucial role; when $\beta = 0$ we have a regular ring lattice, whereas at $\beta = 1$ we have a random graph.]

We list some network characteristics of the Watts-Strogatz small-world network model (WS). The degree distribution in the case of the ring lattice ($\beta = 0$) is the Dirac delta function centered at $K$, and it assumes the Poisson form of eqn. 2 in the limiting case $\beta \to 1$, similar to the classical ER random graphs. The degree distribution for $0 < \beta < 1$ can be written as:

$$P(k) = \sum_{n=0}^{f(k, K)}\binom{K/2}{n}(1-\beta)^{n}\beta^{K/2 - n}\,\frac{(\beta K/2)^{k - K/2 - n}}{(k - K/2 - n)!}\,e^{-\beta K/2} \quad \text{for } k \ge K/2,$$

where $k$ is the degree of the node and $f(k, K) = \min(k - K/2,\, K/2)$. The shape of the degree-distribution curve is similar to that of a random graph, with a pronounced peak at $\langle k \rangle = K$ that decays exponentially for large $k$. The topology of the network is relatively homogeneous, and all nodes have more or less the same degree. The additional feature that the WS model exhibits is the property of forming triadic closures or clusters. The clustering coefficient is given as the ratio of the number of triangles present among the nodes to the total number of connected triplets among them (see Figure 3). In terms of $\beta$, $C(\beta) \approx C(0)(1 - \beta)^{3}$, where $C(0)$ is the clustering coefficient of the regular ring lattice, given by $C(0) = \frac{3(K-2)}{4(K-1)}$, which tends to $3/4$ as $K$ grows. The WS model preserves the short characteristic path lengths exhibited by random graphs. For a regular ring lattice, the characteristic path length is given by $L(0) \approx n/2K$, which in the limiting case $\beta \to 1$ approaches $L \approx \ln n/\ln K$ for a random graph. Thus, the number of hops required to visit all the nodes in a network scales as the logarithm of the network size, which remains a very low number even when the network size is large (or grows with time). The small-world property has been reported in various real-world networks such as electric power grids, the WWW, the internet, social networks, protein-yeast (metabolite) interaction networks, citation networks and movie-actor collaboration networks. Although these networks show small characteristic path lengths and high clustering, they also exhibit a heavy-tailed degree distribution that cannot be explained by either the ER or the WS model. In order to understand the mechanism of growth and evolution of real-world networks, we analyze the Barabási-Albert scale-free model in the following section.

Albert-László Barabási, along with his doctoral student Réka Albert, published two important articles in the late 90s. In both articles, they pointed out the peculiarity of the degree-distribution patterns of real-world networks. They observed that the degree-distribution laws followed neither the Poissonian form nor the usually expected normal form: the density functions decayed more slowly than exponentials and log-normals, showing a heavy tail towards their end. Interestingly, their studies also confirmed the small-world property in these real-world datasets.
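The ring-lattice-plus-rewiring procedure described above translates almost directly into code. The following is one possible implementation under the stated rules (no self-loops, no duplicate links); it is an illustrative sketch, not the authors' code:

```python
# Sketch of the Watts-Strogatz construction: a ring lattice with n nodes,
# each joined to K/2 neighbours on either side, followed by rewiring each
# edge with probability beta.
import random

def watts_strogatz(n, K, beta, seed=0):
    rng = random.Random(seed)
    # ring lattice: each undirected edge appears once as (i, (i+d) % n)
    edges = {(i, (i + d) % n) for i in range(n)
             for d in range(1, K // 2 + 1)}
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    for i, j in sorted(edges):
        if rng.random() < beta:
            # candidates m avoiding self-loops and duplicate links
            choices = [m for m in range(n) if m != i and m not in adj[i]]
            if choices:
                m = rng.choice(choices)
                adj[i].discard(j); adj[j].discard(i)   # rewire (i,j)->(i,m)
                adj[i].add(m); adj[m].add(i)
    return adj

adj = watts_strogatz(n=20, K=4, beta=0.1)
```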
In order to understand the growth and evolution of real-world networks exhibiting a scale-free nature, Barabási and Albert proposed an algorithm (the BA model) that generates random networks with power-law degree-distribution patterns. The two salient features of the BA model are _growth_ and _preferential attachment_, both of which widely exist in real-world networks. The presence of preferential attachment allows networks to exhibit degree heterogeneity: a small fraction of nodes tends to attract more connections than others, for example highly influential people in a social network, important airports in an airline network, or web pages like Google. In a way, preferential attachment acts as a positive feedback loop for the network, initiating the 'rich getting richer' phenomenon. The earlier models failed to provide an explanation for this observed phenomenon. Barabási and Albert incorporated this property in their model and came up with the class of scale-free networks.

The BA model, with growth and preferential attachment, can be mathematically described using continuum theory, which calculates the time dependence of the degree $k_i$ of a node $i$. Incorporating growth causes the network size to increase with time due to new incoming nodes. Preferential attachment is taken care of by assigning a probability $\Pi(k_i) = k_i/\sum_j k_j$ for the new node to attach to an existing node $i$. It is assumed that $k_i$ is a continuous real variable whose rate of change is proportional to $\Pi(k_i)$. Starting with a small number ($m_0$) of nodes, we add at every time step a new node with $m$ edges. The real variable $k_i$ satisfies the dynamical equation:

$$\frac{\partial k_i}{\partial t} = m\,\Pi(k_i) = m\,\frac{k_i}{\sum_j k_j},$$

where $\sum_j k_j = 2mt$, leading to:

$$\frac{\partial k_i}{\partial t} = \frac{k_i}{2t}.$$

The solution of the above equation, with the initial condition that every node introduced at time $t_i$ has $k_i(t_i) = m$ edges, is given by

$$k_i(t) = m\left(\frac{t}{t_i}\right)^{\beta}, \qquad (11)$$

with $\beta = 1/2$. Using eqn. 11, the probability that a node has a degree $k_i(t)$ smaller than $k$ can be written as:

$$P(k_i(t) < k) = P\!\left(t_i > \frac{m^{1/\beta}t}{k^{1/\beta}}\right). \qquad (12)$$

Since the nodes are added at equal time intervals, the parameter $t_i$ has a constant probability density, $P(t_i) = 1/(m_0 + t)$. Substituting this into eqn. 12, we obtain:

$$P(k_i(t) < k) = 1 - \frac{m^{1/\beta}t}{k^{1/\beta}(m_0 + t)}.$$

The degree distribution can be obtained by:

$$P(k) = \frac{\partial P(k_i(t) < k)}{\partial k} = \frac{2m^{1/\beta}t}{m_0 + t}\,\frac{1}{k^{1/\beta + 1}}.$$

The above equation can be written in the asymptotic limit ($t \to \infty$) as $P(k) \sim 2m^{1/\beta}k^{-\gamma}$, with $\gamma = 1/\beta + 1 = 3$. The result is independent of the number of links $m$ an incoming node has. Therefore, the BA model produces scale-free networks whose degree distribution follows a pure power law with degree exponent $\gamma = 3$ (see Figure 7). Although the BA model generates networks obeying pure power-law degree distributions, real-world networks mostly show a heavy tail instead. The reason for this can be attributed to several factors, such as non-linear preferential attachment, node aging, a node's competitiveness in acquiring incoming links, or the presence of exponential cut-offs. Even though real-world networks may not follow strict power laws, the presence of preferential attachment is crucial: without the preferential attachment rule, networks cannot exhibit heavy-tailed degree-distribution patterns, because the rule acts as a positive reinforcement in the network. BA models without growth have also been proposed; however, network models without the attachment rule result in random graphs. The characteristic path length for BA networks is given as $L \sim \ln N/\ln\ln N$. For $2 < \gamma < 3$, the characteristic path length has been found to decay as $\ln\ln N$, thus making the topology of such scale-free networks ultra-small in nature.
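The growth and preferential-attachment rules can be sketched with the standard repeated-nodes trick, in which node $i$ appears $k_i$ times in a list so that uniform sampling from the list realizes $\Pi(k_i) \propto k_i$. This is a common implementation device, not the authors' code, and the sizes are illustrative:

```python
# Sketch of Barabasi-Albert growth: each new node attaches m edges to
# existing nodes with probability proportional to their degree.
import random

def barabasi_albert(n, m, seed=0):
    rng = random.Random(seed)
    targets = list(range(m))          # initial attachment targets
    repeated = []                     # node i appears k_i times here
    adj = {i: set() for i in range(n)}
    for new in range(m, n):
        # set() dedups targets; sampling with replacement may yield
        # slightly fewer than m edges for some nodes, a known caveat
        for t in set(targets):
            adj[new].add(t); adj[t].add(new)
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

adj = barabasi_albert(n=10000, m=3)
# the degree histogram should follow P(k) ~ k^{-3} in the tail
```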
and .the node size reflects the node degree , , nodes which have bigger size have higher degrees and vice versa . in the right panel , we plot the degree - distribution of the network on a double logarithmic scale .the degree - distribution follows a power - law pattern , , with .the parameter , is calculated from the slope of the degree - distribution plot in the right panel ( red , dotted).,scaledwidth=100.0% ] the availability of huge amounts of real - time data and sudden surge of interest in the field of networks have attracted numerous researchers working in applied sciences as well .particular in this context are those civil engineers whose research is primarily focused in the domain of transportation networks .although the study of transportation networks have a long history , the questions of interests have significantly changed after the introduction of network science techniques . the traditional network flow formulation ( in transportation networks )has answered many interesting engineering questions related to optimality of cost , maximality of flows and the classical shortest path determination .questions , those related to the topological structure of the network , such as the presence of small - world property or heavy - tailed degree - distribution patterns , or questions which are primarily concerned with the inter - nodal connectivity and static ( dynamic ) evolution of the network which the traditional formulation failed to address , found answers in the network science domain .one particular type of transportation network that found widespread interest in the network science community is that of the public transit network or ptn .a topological drawback of ptns is that they are structurally constrained in a two - dimensional space as compared to other networks , such as the internet , social networks or the airline networks .airline and metro - networks , specifically have been reported to show scale - free degree distribution patterns , whereas degree - distribution in bus and railway networks tend more towards exponential forms ( or to power - law patterns with larger magnitudes of ) . yet ,the above mentioned properties - small - world phenomenon and scale - free topology - have been reported in them as well . 
In some studies, specific subsets of PTNs were analyzed, for example the Boston subway network, the Vienna subway network, or the bus networks of three cities in China. It is important to note that each separate type of public transport (bus, subway, tram or mono-rail network) is not a closed system: these are only sub-networks of a much wider city transport system. An interesting observation was reported when the network characteristics obtained from separately analyzing subway networks were compared to those of the combined network of subways and buses. Statistical analysis of complete PTNs for cities such as Berlin, Paris, Düsseldorf and 22 Polish cities showed that a power-law degree-distribution pattern is a common feature. These studies also concluded that the degree-distribution patterns of PTNs can have both exponential and power-law forms, depending upon the topology of the network representation chosen and the presence of geographical constraints. The small-world phenomenon in transportation networks makes sense, as transportation facilities in a city are planned to provide maximum convenience to its people by allowing them to travel between places in the minimum possible time. Most transportation networks are pre-planned networks, where the initial design of the network decides the presence of hubs. Also, transportation networks are not as large as social networks or the internet, and they are subject to geographical as well as socio-economic constraints. The contrasting behaviour of airline and metro networks with respect to bus and rail networks can be attributed to the following two observations: (i) airline networks (like the internet or social networks) are not bounded by geographical constraints, and (ii) metro networks are _local_, often catering to a part of a city, whereas bus and railway networks are _global_, spread throughout an entire state and sometimes across an entire country. Specific to Indian scenarios, exhaustive studies on public transit networks as a whole are yet to be conducted. Previous works have shown that the pattern of nodal connectivity of the Indian railway network (IRN) drastically differs from that of the airport network of India (ANI), while the nature of Indian bus networks still remains an unsolved problem. The central theme of this thesis is the topological structure and dynamics of bus transport networks, or BTNs, in India. The comparatively small number of studies on BTNs among other modes of transportation is one of the fundamental reasons why their characteristic properties and topological structures are as yet inconclusive. In this section, we present the currently available literature on studies pertaining to BTNs specifically. Analysis of the statistical properties of BTNs in China has revealed scale-free degree-distribution patterns and small-world properties. The presence of non-trivial clustering, i.e., the variation of clustering with degree, indicated a hierarchical and modular structure in those BTNs. Weighted analysis of the networks revealed a heavy-tailed power law, with the strength (weighted degree) and degree showing a linear dependency.
In another study, the BTNs of four major cities of China, namely Hangzhou, Nanjing, Beijing and Shanghai, were analyzed using the P-space topology. The degree distribution was reported to follow an exponential form, indicating a tendency towards random attachment of the nodes. The authors also evaluated two new statistical properties of the BTNs: the distribution of the number of stops in a bus route, and of the number of bus routes a stop joins. While the former had an exponential functional form, the latter had an asymmetric unimodal functional form. In a separate study, the urban public bus networks of two Chinese cities, Beijing and Chengdu, were analyzed. The analysis revealed small-world characteristics and a scale-free topology, although with exceptionally high values of the degree exponent $\gamma$. The presence of more hubs in the Beijing network yielded a smaller $\gamma$ compared to Chengdu, while both showed large clustering coefficients and small characteristic path lengths. The similar placement of bus stops in the two cities has led to a hierarchical structure, indicated by a power-law behaviour (with nearly the same exponents) between the degree strength (characterizing the passenger flows) and the clustering coefficient. In a recent study, the combination of rail (RTNs) and bus transportation systems (BTNs) in Singapore was studied from both topological and dynamical perspectives. The stations in the RTN had a high average degree, indicating high connectivity amongst them, while the BTN had a small average degree. Both networks had an exponential degree distribution, indicative of randomly evolved connectivity. The strength distribution of the nodes (weighted degree distribution), however, showed a scale-free topology for both networks, indicating the existence of high-traffic hubs. Both the BTN and the RTN exhibited small-world characteristics; the BTN in particular had a hierarchical, star-like topology. Degree assortativity, which measures the inter-connectivity between hubs, revealed the RTN to be slightly disassortative, while the BTN displayed a strongly disassortative nature. With the availability of geo-located data, an extended-space (ES) model with information on the geographical location of bus stops and routes was recently used to analyze the spatial characteristics of BTNs in China. The model consisted of directed, weighted variations of the L- and P-space networks, designated as the ES-L and ES-P networks respectively. Often, two bus stops which are geographically close to each other may not have any direct bus-route link between them; such stops are nevertheless at walkable distances from each other. These are defined as short-distance station pairs, or SSPs. The SSPs greatly influence the BTNs by reducing transfer times as well as the number of bus routes.
The symmetry-weighted ESW network model stored information on the SSPs. The average clustering coefficient of the ESW network was considerably large, denoting a nearly circular location of the SSPs around a station. The majority of the route sections in the bus routes were short, while a few route sections connecting city downtowns and satellite towns, or special-purpose routes, were long, leading to a power-law edge-length distribution of the ES network. It may seem at first that the complexity of a bus transportation network is much lower than that of other large-scale networks; however, it is the nature of their growth and the penetrative effect of these networks that makes them not only complex, but interesting and worthwhile to investigate.

In the earlier sections, we discussed the topological characteristics of networks in detail. In order to get deeper insight into network characteristics, it is important to look at dynamical processes on networks as well. In this section, we discuss two dynamical processes on networks in detail: network diffusion (the SI model) and network contagion (the SIR model). The spatial characteristics of the networks, along with numerical simulations of the dynamical processes, will help us understand the in-depth significance of the various network metrics. The earliest accounts of mathematical modelling to capture the spread of diseases date back to as early as the 18th century, when Daniel Bernoulli used mathematical equations to defend his stand on vaccination against the outbreak of smallpox. Works following Bernoulli's earliest formulation of epidemic modelling helped in understanding germ theory in detail. However, it was not until the works of McKendrick and Kermack that a deterministic model was first proposed which predicted epidemic outbreaks very similar to the ones recorded during those times. Since then, our understanding of mathematical models in epidemiology has evolved over the years, accounts of which can be found in the extensive works of Anderson and May. All the above formulations focused on modelling epidemics over a population in which uniform ties between agents were assumed _a priori_. Contrary to this, the field of network science asserts that ties, their strength and their types in a system or a population are not uniform, and that they play a significant role in describing the system's dynamics. Also, a network model is not restricted to the study of populations; rather, it is a universal framework which can be used to understand numerous complex systems in general. Over the years, the term _epidemic modelling_ has evolved into a common metaphor for a wide array of dynamical processes on these networks. Various complex phenomena, from percolation and the spreading of blackouts to the spreading of memes, ideas and opinions in a social network, have been modelled under the common framework of epidemic modelling. In this context, transportation networks play a vital role due to their widespread outreach across cities, countries and continents. 'Should people be worried about getting Ebola on the subway?' was one of numerous similar headlines that made the front pages of newspapers around the world during the 2014 Ebola scare.
In this particular incident, however, nobody was infected, because the subject did not show symptoms of Ebola while using public transportation. Therefore, not only airline networks, which can transmit pathogens across continents, but even modes of public transport operating within cities, such as buses and subways, pose a serious threat as well as a source of panic during desperate times. Although epidemic spreading in airline networks has been studied extensively, similar studies on bus networks are relatively rare. Epidemiological models have been simulated on bus-network datasets; however, the results were only used to validate the numerical models. Also, a recent study on city-wide integrated travel networks (ITNs) has found both the traveling speed and the frequency to be important factors in epidemic spreading. Thus, the effect of network structure and constraints on epidemic spreading is yet to be studied in these networks.

The SI model is the most basic representation of an epidemic-spreading model that captures diffusion in complex networks. In this model, there are two states in which an agent or a node can exist: S (susceptible) or I (infected). The SI model describes the status of individuals or agents switching from susceptible to infected at every instant of time. It is assumed that the population is homogeneous and closed, i.e., no new entity is either created due to birth or removed due to death, and no new entity enters the system, thus preserving homogeneous mixing in the system. The SI model also implies that each individual has the same probability to transfer disease, innovation or information to its neighbors. Thus, the SI model helps to capture the diffusion or percolation process in the entire network. The SI model is formulated using the following differential equations. Since an agent can be either in state S or in state I, the population fractions satisfy

$$s(t) + i(t) = 1, \qquad (15)$$

and the SI model is governed by a single parameter $\beta$, the infection transmission rate, or simply the infection rate. The growth in the number of infected agents is given by:

$$\frac{di}{dt} = \beta\,s\,i. \qquad (16)$$

Substituting the value of $s$ from eqn. 15 into eqn. 16, we get the following differential equation describing the growth rate of $i$:

$$\frac{di}{dt} = \beta\,i\,(1 - i). \qquad (17)$$

The solution of the above equation with the initial condition $i(0) = i_0$ is given by the logistic form:

$$i(t) = \frac{i_0\,e^{\beta t}}{1 - i_0 + i_0\,e^{\beta t}}. \qquad (18)$$

Contrary to the SI model, the agents in the SIR model have access to three states: S (susceptible), I (infected) and R (recovered). Although the earlier assumptions of a closed population and homogeneous mixing also hold in this case, the complexity of the dynamical process increases due to the addition of one more state. The agents, instead of only switching between susceptible and infected (as in the SI model), tend to recover in the SIR epidemic model. The dynamics of the SIR model is controlled by two parameters: the infection rate $\beta$ and the recovery rate $\gamma$. The SIR model can be mathematically represented by the following set of differential equations:

$$\frac{ds}{dt} = -\beta\,s\,i, \qquad (19)$$

$$\frac{di}{dt} = \beta\,s\,i - \gamma\,i, \qquad (20)$$

$$\frac{dr}{dt} = \gamma\,i. \qquad (21)$$

[Figure: time evolution of the SIR model; the curves represent the population fractions of the S, I and R species as functions of simulation time.]

The population of susceptible nodes decreases in proportion to the number of encounters multiplied by the probability that each encounter results in an infection; the negative sign denotes that the population of S is decreasing. Similarly, we can describe the evolution of the other two states, I and R.
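A quick numerical sanity check of the SI and SIR dynamics above (eqns. 17 and 19-21) can be done with forward Euler integration; the parameter values below are illustrative, not taken from the study:

```python
# Sketch: Euler integration of the SI and SIR equations, with the SI
# result checked against the logistic closed form of eqn. 18.
import math

beta, gamma_, dt, T = 0.5, 0.2, 1e-3, 60.0

# SI: di/dt = beta * i * (1 - i)
i = i0 = 0.01
for _ in range(int(T / dt)):
    i += dt * beta * i * (1.0 - i)
logistic = i0 * math.exp(beta * T) / (1.0 - i0 + i0 * math.exp(beta * T))
print(i, logistic)   # the two values should agree closely

# SIR: ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i
s, i, r = 0.999, 0.001, 0.0
for _ in range(int(T / dt)):
    ds, di, dr = -beta * s * i, beta * s * i - gamma_ * i, gamma_ * i
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr
print(r)   # outbreak occurs since beta/gamma = 2.5 > 1; final r ~ 0.89
```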
Nodes become infected at a rate proportional to the number of encounters, with the probability of infection controlled by the parameter $\beta$ (eqn. 20); nodes recover at a rate proportional to the number of infected individuals, with the probability of recovery controlled by the parameter $\gamma$ (eqn. 21). It is interesting to analyze the spread of infection with respect to the susceptible individuals when there is a constant recovery. From eqns. 19 and 20 we calculate the variation of $i$ with respect to $s$:

$$\frac{di}{ds} = -1 + \frac{\gamma}{\beta s}. \qquad (22)$$

The solution of the above equation with the initial conditions, at $t = 0$, $i = i_0$ (negligible compared to the population) and $s = s_0$, is given by:

$$i = i_0 + s_0 - s + \frac{\gamma}{\beta}\ln\frac{s}{s_0}. \qquad (23)$$

In order to understand the rate of spread of infection in the population, we look at the rate equation for $i$ from eqn. 20, $\frac{di}{dt} = i(\beta s - \gamma)$. This implies that the infection spreads if and only if $\beta s - \gamma > 0$; the epidemic dies out (the number of infected individuals decreases) if this quantity is less than zero. Bifurcation occurs at the stationary state $s = \gamma/\beta$, which separates the above two regimes and corresponds to the epidemic threshold. For a network with average node degree $\langle k \rangle$, the rate of change of the susceptible population is given by:

$$\frac{ds}{dt} = -\beta\,\langle k \rangle\,s\,i. \qquad (24)$$

The rate of change of the infected and, simultaneously, the recovered individuals is similarly given by:

$$\frac{di}{dt} = \beta\,\langle k \rangle\,s\,i - \gamma\,i, \qquad \frac{dr}{dt} = \gamma\,i. \qquad (25)$$

Note that the rate of change of the recovered individuals remains the same as before. Substituting the value of $i = \frac{1}{\gamma}\frac{dr}{dt}$ from eqn. 25 into eqn. 24 and solving for $s$ gives the following expression for $s$ in terms of $r$ at time $t$: $s = s_0\,e^{-\beta\langle k \rangle r/\gamma}$. The population of recovered individuals is then given as $r = 1 - s - i$. Recovery spreads through the network, i.e. a finite recovered fraction remains at the end of the outbreak, if and only if the slope of $1 - s_0\,e^{-\beta\langle k \rangle r/\gamma}$ at $r = 0$ exceeds one, that is, $\beta\langle k \rangle s_0/\gamma > 1$.

In this section, we present our main results on the studies of BTNs in India. First, we discuss the motivation for our study, and then we describe the datasets used for the analysis. Before we discuss the primary results of this work in detail, we outline the network representation techniques specific to transportation terminology. We saw earlier that any physical, chemical, biological or social system can be visualized as a complex network whose constituent elements are known as nodes, with the interactions between them identified as links. Based on the nature of the links, these networks can be broadly classified into virtual and spatial networks. In the former category the links are physically absent, e.g., social networks or collaboration networks, whereas in the latter case the links are physically present, e.g., geographically embedded road or railway networks. In between these two broad classes, there exist networks in which the links, although physically absent, are still geographically constrained. The structure of real-world networks such as bus networks or electric power grids is dependent upon the structure of the physically constrained, geographically embedded networks on which they grow and evolve. Therefore, our study would be incomplete if we did not explain the role constraints play in geographical embeddedness. We analyze this aspect by calculating the 'dimensions' of these networks and checking for self-similar patterns in them. In this study, we present a statistical analysis of the bus networks of six major Indian cities as graphs in L- and P-space, using concepts from network science. Although public transport networks such as airline and railway networks have been extensively studied, a comprehensive study on the structure and growth of bus networks is still lacking.
In India, where BTNs play an important role in day-to-day commutation, it is of significant interest to analyze their topological structures and answer some of the basic questions on their growth, evolution, robustness and resiliency. We therefore carry out a comparative study of the bus networks of some of the major Indian cities, namely Ahmedabad (ABN), Chennai (CBN), Delhi (DBN), Hyderabad (HBN), Kolkata (KBN) and Mumbai (MBN). In order to understand the structure of these networks, we calculate various metrics, such as clustering coefficients, characteristic path lengths, degree distributions and assortativity. We also simulate network robustness and resiliency by first removing nodes at random, followed by targeted removal based on degree, closeness and betweenness. Simulating node removals and simultaneously capturing the variation in the characteristic path lengths helps us understand nodal redundancy in these networks. Although a few studies on BTNs have looked into the structural aspects in detail, the ESW network that we described earlier is the only model that has looked into the aspect of network redundancy, albeit owing to the geographical placement of the nodes. For this study, we use the bus routes as network datasets, considering bus stops as nodes and bus routes as links. The route details were obtained from the government websites of AMTS (ABN), MTC (CBN), DTC (DBN), APSRTC (HBN), CSTC (KBN), BEST (MBN) and the Ahmedabad BRTS (bus rapid transit system). It can be seen from Table 1 that the network sizes of all the cities are comparable to each other, except that of KBN, because CSTC is localized and operates as a subdivision of the West Bengal Surface Transport Corporation (WBSTC), which operates buses in the entire state. For computational and visualization purposes we parse the datasets as edge lists, where the two adjacent columns are labeled 'source' and 'target' respectively. In the L-space representation, the values in the respective columns represent neighboring bus stops in a given route, whereas in the P-space representation, the adjacent columns represent all possible transfers in a route for a fixed value in the 'source' column (see Figure 9).

[Figure 9: L-space and P-space representations for transportation networks; if a sequence of stops represents a route in L-space, then in the P-space representation we have all the possible transfers for that route.]

Before we analyze the statistical properties of the different BTNs, we need to understand the relationship between the various network representation forms and their respective advantages. The most common is the L-space representation, where each bus stop is a node and a link between two nodes indicates that there is at least one route that services these two corresponding stops consecutively (see Figure 9). In this representation, no multiple links are allowed between a pair of nodes. In cases of route overlaps, the corresponding adjacency element is multiplied by a weight, denoting the strength of that link (or node). A different network representation is that of a bipartite graph, which has been found to be useful in the analysis of cooperation networks.
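The two edge-list conventions just described, consecutive stops for L-space and all same-route pairs for P-space, can be sketched as follows; the routes are hypothetical toy data, not from the actual datasets:

```python
# Sketch of L-space and P-space edge lists built from bus routes given
# as ordered stop sequences (illustrative data).
from itertools import combinations

routes = [["a", "b", "c", "d"], ["c", "e", "f"]]

# L-space: link consecutive stops on each route
l_space = {(r[i], r[i + 1]) for r in routes for i in range(len(r) - 1)}
# P-space: link every pair of stops sharing a route (all transfers)
p_space = {pair for r in routes for pair in combinations(r, 2)}

print(sorted(l_space))
print(sorted(p_space))
```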
In the bipartite representation, also called B-space, both routes and stops are represented by nodes; each route node is linked to all the bus stops that it services, and no direct links between nodes of the same type occur. The neighbors of a given route node in this representation are all the stops that it services, while the neighbors of a given bus stop are all the routes that service it. There are two one-mode projections of the bipartite B-space graph: the projection onto the set of station nodes is the P-space graph (see Figure 9), and the complementary projection onto route nodes leads to the C-space graph. The P-space network representation has proven particularly useful in the analysis of PTNs. The nodes of this graph are bus stops, and they are linked if they are serviced by at least one common route. In this way, the neighbors of a P-space node are all those stops that can be reached without changing means of transport. In order to get the essence of the different network representations and their significance, consider the characteristic path length, which in an L-space graph is the number of 'hops' one has to make to travel between any two randomly chosen bus stops. When the network is represented in P-space, it signifies the number of bus changes one has to make in order to travel between any two randomly chosen bus stops. From a transportation perspective, fewer bus changes imply a small-world property. Therefore, the calculated network metrics strongly depend upon the network representation that is chosen.

[Table 1: statistical data for the Ahmedabad BRTS and the bus routes of six major Indian cities as graphs in L-space, including the number of communities in these networks. For the Ahmedabad BRTS the dataset is very limited, which prevents us from concluding its exact topological structure.]

In Figure 25, we simulate the SI and SIR models on the US airline network for 500 of the busiest airports. Statistical analysis of this network reveals a scale-free degree-distribution pattern among the nodes, together with a low characteristic path length and a high average degree. The SI plots of the US airline network and ABN show a similar pattern of growth, as both exhibit scale-free behaviour; the SIR plot, however, is similar to that of HBN, due to the extremely low characteristic path length and high average node degree. In Table 3, we present our findings for the networks studied in this paper. The first column reports the simulation time (in seconds) to the percolation threshold from the SI model. In the second and third columns, we present the epidemic thresholds for the various networks, computed from the SIR plots (Figure 24) as a fraction of the network size, and the corresponding simulation times (in seconds), respectively. In the final column, we present the characteristic path lengths of the various networks.

[Figure 26: SI diffusion under node removal (degree-biased, betweenness-biased and closeness-biased); the y-axis denotes the CDF of the infection probability of the nodes, and the x-axis represents simulation time.]

Finally, in Figure 26, we plot the variation in the rate of percolation when nodes are removed from the network based upon their centralities and degrees. In transportation networks, besides the degree of a node, closeness and betweenness centralities play a crucial role.
In order to capture their effects on information diffusion, we simulate the SI model on modified networks generated after directed removal of nodes. Since CBN and MBN are strongly assortative, we remove only two percent of the nodes (removing a larger number of nodes would cause CBN and MBN to disintegrate into disconnected components). We find that the removal of nodes does not significantly affect the diffusion in ABN. However, for CBN, DBN and HBN, we observe that when nodes are removed based upon their closeness centrality, the diffusion curve shifts towards the right, signifying a delay in the diffusion process. This can be explained by the fact that the removal of nodes based upon closeness centrality has a direct effect on the characteristic path length: a node with high closeness allows every other node in the network to be reached along the shortest paths, and the removal of such a node delays diffusion until the next central node is encountered. For MBN, we observe that degree-biased removal causes the diffusion rate to increase steeply, signifying the presence of redundant nodes that simply increase the characteristic path length of the network. A removal of such nodes causes the diffusion to improve significantly, as can be seen by comparing the simulation times recorded in Figure 23.

The present work places before us numerous questions of both academic and practical pursuit. One important practical application of this work is in the planning of large-scale transportation networks for the future; it would be interesting to see the functionality of networks planned using network-science tools and techniques. Another question of practical importance lies in optimizing these networks for efficient transportation and communication purposes. One finding that could be of practical interest is the strong positive correlation between network assortativity and characteristic path length: will it be more efficient to travel on longer routes to reach hub nodes and from there to other parts of the city, or to travel through shorter routes, not reaching the hub nodes but travelling to other parts of the city through intermediate nodes? From an academic standpoint, there is ample scope for in-depth analysis based upon this study. The current study takes into account one subset of the large-scale public transit networks.
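Returning to the targeted-removal experiment discussed at the top of this passage, its core can be sketched as follows: delete the top fraction of nodes by degree and recompute the characteristic path length by breadth-first search. This is an illustrative reimplementation, not the simulation code used in the study; unreachable pairs are simply skipped:

```python
# Sketch of degree-biased node removal and its effect on the
# characteristic path length (BFS over an unweighted graph).
from collections import deque

def avg_path_length(adj):
    nodes, total, pairs = list(adj), 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1       # unreachable pairs are skipped
    return total / pairs if pairs else float("inf")

def remove_top_degree(adj, fraction):
    k = int(len(adj) * fraction)
    doomed = set(sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k])
    return {n: adj[n] - doomed for n in adj if n not in doomed}

# toy graph: hub 0 connected to 1..4, plus a chain 4-5-6
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4, 6}, 6: {5}}
print(avg_path_length(adj), avg_path_length(remove_top_degree(adj, 0.15)))
```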
It would be interesting to carry out an integrated study involving other modes of transport as well. The availability of high-quality geo-located data will help in actually identifying redundant nodes in the network, thus making the network more efficient. Since transportation plays an important role in the economic development of a city, the present study can be extended to incorporate other networks as well, such as road networks, supply-chain networks and economic networks. A holistic approach to all these networks will help us in understanding each layer of society's complexity.

In this study, we analyzed the statistical properties of the bus routes of six Indian cities, namely Ahmedabad, Chennai, Delhi, Hyderabad, Kolkata and Mumbai. Our analysis suggests that the bus networks show a wide spectrum of topological structure, from power-law to exponential, with varying magnitude of the power-law exponent. We also observe that these networks show small-world behaviour either in terms of the placement of the bus stops (L-space) or in terms of transfers (P-space). For example, CBN and MBN do not show the small-world property in L-space; they do, however, show the small-world property in terms of transfers, as the majority of places can be visited by making as few as 2 to 3 bus changes. The redundancy in the network structure, as seen from the variation of the characteristic path length under node removal and from the presence of exponential cutoffs in the degree-distribution plots, validates our findings regarding the randomness associated with the growth and evolution of the bus networks over time. Recently, Wang et al. simulated exponential growth models for networks, with growth and adjacent-node attachment as the underlying processes; in their model, the stationary degree distribution obtained from the growth equation approaches, in the continuum limit, an exponentially decaying form. Our findings on the weak correlations between the degree and centrality plots and on the self-similar structures of sub-networks motivate us to investigate the fractal nature of these networks. The degree-degree correlation matrix gives a rough idea of the degree assortativity of these networks. Strong assortativity in networks relates to strong connections between hubs; hubs thus tend to come in between short-range connections, whereas in disassortative mixing we observe hub repulsion. The presence of hub interconnectedness causes the characteristic path lengths of the networks to increase and the fractal nature to diminish; the presence of repelling hubs generates fractal topology in complex networks. An interesting aspect of the fractal topology is that it generates local small-worlds in well-defined communities.
From a transportation perspective, it would be beneficial to have local small-worlds connected to a central core, as such a structure reduces the characteristic path length and makes the network efficient. The high values of the characteristic path lengths for CBN and MBN can be attributed to the geographical structures of the two cities, Chennai and Mumbai. The routes in these two cities are exceptionally long because the BTNs there have evolved in a more linear fashion; the reason for this may lie in the geographical limitation imposed by the presence of a water body on one side (see Figure 27). Also, most of our results on the Ahmedabad BRTS are inconclusive, as the dataset is small: at present, the BRTS operates across only 13 routes with 129 bus stops. It would therefore be interesting to see the evolution of this network in the future. Finally, we simulate the SI and SIR dynamical processes on these networks. Since experiments with epidemic outbreaks in a population (or a network) are not a viable option, we resort to mathematical modelling to understand the diffusion of information and the spreading of contagion; we therefore study the effect of percolation and epidemic spreading on these networks using the SI and SIR epidemic models through numerical simulations. While it is observed that the characteristic path length plays a crucial role in information diffusion and epidemic spreading, several other network metrics also play important roles; their importance is, however, restricted to their relative contribution to the topological structures of the networks. The small-world property, while extremely desirable in transportation networks, is highly subjective in its role in information diffusion, depending solely on the diffusing entity. Finally, bus networks form a specific class of complex networks that grow and evolve over physically constrained spatial networks. Interesting in this regard is the city of Ahmedabad and the ABN: statistical analysis of road networks (considering intersections as nodes and roads as links) has shown that the topological structure of the road network in the city of Ahmedabad exhibits a scale-free degree distribution very similar to that of ABN. Road intersections are usually separated by distances that are geographically much smaller than the distances between bus stops. Therefore, our results emphasize the fact that transportation undoubtedly brings the world closer.

Weibing Deng, Wei Li, Xu Cai, and Qiuping A. Wang. The exponential degree distribution in complex networks: non-equilibrium network theory, numerical simulation and empirical data. 390(8):1481–1485, 2011.

Yihan Zhang, Qingnian Zhang, and Jigang Qiao. Analysis of Guangzhou metro network based on L-space and P-space using complex network.
In Geoinformatics (GeoInformatics), 2014 22nd International Conference on, pages 1–6. IEEE, 2014.

O. Woolley-Meza, C. Thiemann, D. Grady, J. J. Lee, H. Seebens, B. Blasius, and D. Brockmann. Complexity in human transportation networks: a comparative analysis of worldwide air transportation and global cargo-ship movements. 84(4):589–600, 2011.

Roger Guimerà, Stefano Mossa, Adrian Turtschi, and L. A. Nunes Amaral. The worldwide air transportation network: anomalous centrality, community structure, and cities' global roles. 102(22):7794–7799, 2005.

Harold Soh, Sonja Lim, Tianyou Zhang, Xiuju Fu, Gary Kee Khoon Lee, Terence Gih Guang Hung, Pan Di, Silvester Prakasam, and Limsoon Wong. Weighted complex network analysis of travel routes on the Singapore public transportation system. 389(24):5852–5863, 2010.

William O. Kermack and Anderson G. McKendrick. A contribution to the mathematical theory of epidemics. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 115, pages 700–721. The Royal Society, 1927.

Atanu Chatterjee and Gitakrishnan Ramadurai. Scaling laws in Chennai bus network. In 4th International Conference on Complex Systems and Applications, France, pages 137–141. https://halshs.archives-ouvertes.fr/halshs-01060875/document, 2014.
In recent times, the domain of network science has become extremely useful in understanding the underlying structure of various real-world networks and in answering non-trivial questions regarding them. In this study, we rigorously analyze the statistical properties of the bus networks of six major Indian cities as graphs in _L_- and _P_-space, using tools from network science. Although public transport networks, such as airline and railway networks, have been extensively studied, a comprehensive study on the structure and growth of bus networks is lacking. In India, where bus networks play an important role in day-to-day commutation, it is of significant interest to analyze their topological structure and answer some of the basic questions on their evolution, growth, robustness and resiliency. We start from an empirical analysis of these networks and determine their principal characteristics in terms of complex network theory. The common features of the small-world property and heavy tails in the degree-distribution plots are observed in all the networks studied. Our analysis further reveals a wide spectrum of network topologies arising due to an interplay between preferential and random attachment of nodes. Unlike real-world networks like the internet, the WWW and airline networks, which are virtual, bus networks are physically constrained in two-dimensional space by the underlying road networks. In order to understand the role of constraints in the evolution of these networks, we calculate their fractal dimensions, which reveal a three-dimensional space-like evolution in a constrained two-dimensional plane. We also extend our study to the complex dynamical processes of epidemic outbreaks and information diffusion in these networks, using SI and SIR models. Our findings therefore throw light on the evolution and dynamics of such geographically and socio-economically constrained networks, which will help us in designing more efficient networks in the future.

Keywords: complex networks, power-laws, self-similarity, small-world phenomenon, transportation networks
Understanding living cells at a systemic level is an increasingly important challenge in biology and medicine. Regulatory interactions between intracellular molecular agents (e.g. DNA, RNA, proteins, hormones, trace elements) form so-called _genetic regulatory networks_ (GRN), which orchestrate gene expression and replication, coordinate metabolic activity and cellular development, and respond to changes in the environment or to stress. GRN coordinate regulatory dynamics on all levels, from cell fate to stress response. Qualitative understanding of GRN topology is, for instance, obtained from promoter sequences, gene-expression profiling or protein-protein interactions (the proteome). However, qualitative information on GRN topology alone is insufficient to understand GRN dynamics. It has been recognized that quantitative information is required to understand the complex dynamical properties of regulatory interactions in living cells, mainly because dynamics on interaction networks with identical topology still depends on the strength of the interactions (links) between agents (nodes). Models of GRN dynamics aid the task of understanding properties of GRN at the various levels of detail available in experimental data, and therefore provide valuable tools for integrating information from different sources into unifying pictures and for reverse-engineering GRN from experimental data. Any model should _adequately_ reproduce GRN dynamics and _sufficiently_ exhibit systemic properties of the GRN, including homeostasis, multi-stability, periodic dynamics, alternating activity, self-organized critical dynamics (SOC) and differentiability. _Homeostatic dynamics_ regulates the equilibrium concentration levels of agents; _multi-stability_ shows up as switching between multiple steady states. Examples of _periodic dynamics_ are the cell cycle, the circadian clock, i-n signaling, her dynamics, etc. Some molecular agents show _alternating activity_, i.e. their concentrations alternate between being detectable (on) and below the detection threshold (off). _Self-organized critical_ (SOC) dynamics corresponds to details of regulatory dynamics ensuring (approximate) stability within a fluctuating environment through various mechanisms of adaptation. Finally, the property of _differentiability_ means that cells of multicellular organisms can differentiate into various cell types (liver, muscle, blood, kidney, cancer, ...). The differentiated cells possess identical GRN but express distinguishable patterns of regulatory activity; the same GRN can therefore be expressed in different _modes_, so that some agents become expressed in one mode but not in another. Recently it has been reported that both the regulation of transcription and mRNA decay rates (i.e.
the mRNA turnover) are necessary to understand experimentally observed expression values. Moreover, it has been demonstrated that the decay rates of mRNA are cell-type specific. The situation is analogous for proteins, where the dominant mechanism is ubiquitin-driven proteolysis in the proteasome: protein abundance, and therefore protein degradation, has to be tightly controlled. Also, the abundance of proteins, and whether certain proteins are produced or not, is again cell-type specific. This indicates that decay rates and their control play a crucial role in cell differentiation. Variable decay rates, however, and the property of differentiability are hardly ever considered in GRN models, where the decay rates of agents are usually kept constant. Understanding the effects of changes of the decay rates of agents is therefore a crucial step towards a deeper understanding of GRN dynamics and of the role decay rates play in cell differentiation.

The GRN is the set of all possible interactions of molecular reactions and bindings. The GRN captures all possible features of cells and is responsible for the immense levels of adaptation characteristic of living systems. What happens when different cell types express the same GRN in alternative ways? At any point in time, only small subsets of the GRN are active. Any active subset of the GRN leads to the expression of particular sets of molecules (expression modes). The _active regulatory network_ at time $t$ is the regulatory sub-network of the GRN governing the molecular (auto-catalytic) dynamics of all agents which exist at time $t$; the set of existing molecules forms the _active agent set_ at time $t$. The active network changes over time, and typical sequences of active sets represent what we call the _expression modes_ of a specific cell type and its cell cycle. Expression modes themselves can be modified, either locally as a reaction to an external signal, or fundamentally through further cell differentiation. Active sets of molecules are transient, and what is observed in experiments is a superposition of subsequent active sets, which we call the _expressed set of agents_; the regulatory interactions between the expressed agents form the _expressed regulatory network_. To find the property of differentiability in a regulatory network model therefore requires that one network is capable of producing different expression modes, while perturbations (external signals) only modify active sets locally, and the particular expression mode can be restored.

The six dynamical properties listed above have been addressed with a variety of conceptually different models. The essence of all these models is that they try to capture the dynamics induced by positive and negative feedback loops within the GRN. The choice of model depends largely on the type and resolution (coarse-graining) of the experimental data. At the single-cell level, cellular activity (e.g. the concentrations of biochemical agents) can be modeled by non-linear (stochastic) differential equations, which can explain homeostasis, periodic and multi-stable behavior. The dynamics governed by a GRN is given by a set of coupled non-linear differential equations

$$\frac{d}{dt}x(t) = f(x(t)), \qquad \text{(nlde)}$$

where $f$ is a (non-linear) function capturing the GRN. It depends on the vector of concentrations of all the possible molecular agents in a cell, $x = (x_1, \dots, x_N)$.
$\frac{d}{dt}x(t)$ is the time derivative of the concentrations. Note that $f$ can have stochastic components. The analysis of such systems is often complicated by the interplay between fluctuations and non-linearities. Differential equation models can be approximated by cellular automata, Boolean or piecewise-linear models. The property of SOC dynamics, or dynamics at the "edge of chaos", has been studied mainly in the context of cellular automata and Boolean models; SOC dynamics was also discussed in continuous, differential-equation-based models. Boolean and piecewise-linear models share common origins in the work of Glass and Kauffman, and have been used extensively for modeling and analyzing GRN. For their superior properties in approximating non-linear systems (in principle to any suitable precision), piecewise-linear models are also applied in other disciplines, for instance for modeling highly non-linear electronic circuits. In the context of GRN, both Boolean and piecewise-linear models are usually used for describing non-linear dynamics with switch-like regulatory elements, which are frequently observed in biological regulatory processes. Such switches react if the concentration of an agent (the signal) crosses a specific threshold level. To model such switches in regulation networks of $N$ molecular agents with concentrations $x_i$, the space of concentrations is cut into segments defined by the threshold values at which a concentration can trigger a regulatory switch. These segments are called _regulatory domains_ $d$. In each such domain, eq. (nlde) gets approximated by a linear equation of the form

$$\frac{d}{dt}x(t) = p^{(d)} + M^{(d)}x(t), \qquad \text{(linearisation)}$$

where the $p_i^{(d)}$ are production rates and the $M^{(d)}$ are interaction matrices between agents. If $M_{ij} > 0$, then agent $j$ promotes the production of agent $i$; if $M_{ij} < 0$, then $j$ suppresses $i$; if $M_{ij} = 0$, then $j$ has no influence on $i$. The diagonal elements are _decay rates_, $M_{ii} = -\lambda_i < 0$. Non-linear effects come purely from concentrations passing threshold levels, where the dynamics switches from one regulatory domain to another with different values of $p^{(d)}$ and $M^{(d)}$. Equation (linearisation) is a slight generalization of the Glass-Kauffman PLM, in which $M_{ij} = 0$ for $i \ne j$ except for the (usually) fixed decay rates on the diagonal, so that only the production rates change with the regulatory domain. Given that the interaction matrix $M$ of the regulatory network is invertible (which is almost certainly true for the biologically relevant range of connectivities of GRN), eq. (linearisation) can be rewritten as

$$\frac{d}{dt}x(t) = M^{(d)}\left(x(t) - x^{*}\right),$$

with $x^{*}$ being the solution of the equation $p^{(d)} + M^{(d)}x^{*} = 0$. The fixed point $x^{*}$ is stable (unstable) if the real parts of all eigenvalues of $M^{(d)}$ are negative (if some are positive), and $x(t)$ will be attracted (repelled) by $x^{*}$. If $x^{*}$ is stable and $x_i^{*} \ge 0$ for all $i$, then $x^{*}$ is a stationary solution of eq. (linearisation). Not all models approximating non-linear differential equation descriptions of GRN are equally suited to capture all the GRN properties discussed above simultaneously, depending on whether discrete (Boolean, cellular automata) or smooth (differential equation) features dominate the model. However, there exists a surprisingly simple class of models which exhibits _all_ the desired GRN properties. Here we present such a simple model, which captures all of the above dynamical properties. We find that the alternating dynamics plays a key role for the stability of regulatory systems, and for the formation of SOC dynamics in particular.
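The fixed-point analysis above lends itself to a short numerical sketch: for an illustrative invertible interaction matrix with negative diagonal decay rates (all numerical values here are assumptions, not from the source), compute $x^{*} = -M^{-1}p$ and test its stability and accessibility:

```python
# Hedged sketch: fixed point x* = -M^{-1} p of the piecewise-linear
# dynamics, its stability (eigenvalues of M) and accessibility (x* >= 0).
import numpy as np

rng = np.random.default_rng(0)
N = 4
M = rng.normal(0.0, 0.5, (N, N))       # illustrative interactions
np.fill_diagonal(M, -2.0)              # decay rates lambda_i = 2
p = np.abs(rng.normal(0.0, 0.5, N))    # non-negative production rates

x_star = -np.linalg.solve(M, p)        # solves p + M x* = 0
eigvals = np.linalg.eigvals(M)
stable = bool(np.all(eigvals.real < 0))
accessible = bool(np.all(x_star >= 0))  # x* is a valid stationary state
print(x_star, stable, accessible)
```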
Most importantly, we are able to show that even unspecific control over the decay rates, changing the magnitude of all decay rates simultaneously by a (small) factor, leads to "cell differentiation", i.e. the same regulatory network enters different expression modes, displaying different sequences of active regulatory networks. We show that experimental facts linking the variations of decay rates observed between different cell types of an organism to variations in the abundance of intracellular biochemical agents in these cell types correspond to (a) differences in the _expressed_ genetic regulatory network, and (b) the possibility of controlling these differences via the decay rates of intracellular agents. In other words, typical expression modes (cyclical sequences of successive active sub-networks of the GRN) can be altered and switched by controlling decay rates.

Glass-Kauffman systems produce positive concentrations for all times, given positive initial conditions. This, however, makes it impossible to produce alternating activity of agents, since zero concentrations cannot appear. We therefore have to generalize Glass-Kauffman systems to more general forms of invertible interaction matrices, where the positivity of the solutions of eq. (linearisation) is not implicitly guaranteed, but where positivity (non-negativity) is ensured as a constraint on the system,

$$x_i(t) \ge 0 \quad \text{for all } i \text{ and } t. \qquad \text{(pc)}$$

This constraint alters the linear dynamics of eq. (linearisation) in the following way. Whenever a concentration $x_i$ becomes zero at time $t_0$, then $x_i$ remains zero for $t > t_0$ for as long as $\frac{d}{dt}x_i < 0$ according to eq. (linearisation). If $\frac{d}{dt}x_i > 0$ at some later time, then $x_i$ is no longer subject to the positivity constraint and continues to evolve according to eq. (linearisation) again. Agent $i$ is said to be _active_ at time $t$ if $x_i(t) > 0$, and _inactive_ if $x_i(t) = 0$. The positivity constraint eq. (pc) implies the following consequences. At any point in time there will be a sub-set of agents with non-vanishing concentrations, which we call the _active set_ of agents. The remaining agents have zero concentration, and therefore do not actively influence the concentrations of any of the non-vanishing agents. There exist $2^N$ different active sets, i.e. combinations in which agents can be active or inactive; each active set can be uniquely identified by an index $\sigma$. In the course of time some agents will vanish while others re-appear, so that one effectively observes a sequence of sets of active agents $\sigma_0, \sigma_1, \sigma_2, \dots$, with $\sigma_0$ being the initial active set; the active set $\sigma_n$ switches to the active set $\sigma_{n+1}$ at time $t_{n+1}$, and in each time interval $[t_n, t_{n+1}]$ the dynamics is linear.

[Figure: simulated and measured molecular concentrations over time; the model simulation uses zero concentrations for all agents as the initial condition and a small time increment, and model time is shifted to match the experiment.]

We first show that the model is able to explain actual empirical data, including alternating dynamics. The figure above shows data of molecular concentrations (her (black), pol II (red), trip1 (blue), hdac1 (green)) over three periods of about 40 minutes each; these four agents are all part of the human estrogen nuclear receptor dynamics. The source of the data is Métivier et al.; data points were taken from Pigolotti et al., and the actual values of the matrix elements are best fits, with identical decay rates, for an optimal explanation of the data. The trip1 data (blue) shows _alternating activity_, which is reproduced perfectly by our sequentially linear model. In the following, we show how a change of the decay rates induces changes from one cell type to another.
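Building on the previous sketch, the sequentially linear dynamics itself can be illustrated with a simple forward Euler discretization: integrate eq. (linearisation) and clamp concentrations at zero, which implements the positivity constraint eq. (pc). The matrix, rates and step size below are illustrative assumptions:

```python
# Minimal sketch of the constrained dynamics dx/dt = p + M x with the
# positivity constraint x_i >= 0 enforced by clamping.
import numpy as np

rng = np.random.default_rng(0)
N = 4
M = rng.normal(0.0, 0.5, (N, N))
np.fill_diagonal(M, -2.0)               # uniform decay rate lambda = 2
p = np.abs(rng.normal(0.0, 0.2, N))     # non-negative production rates

x = np.zeros(N)
dt = 1e-3
for _ in range(20000):
    dx = p + M @ x
    # clamping keeps x_i at zero while its unconstrained derivative is
    # negative, exactly as the positivity constraint prescribes
    x = np.maximum(x + dt * dx, 0.0)

active_set = tuple(x > 0)   # which agents are active at the final time
print(x, active_set)
```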
in particular we show how changes of the overall strength of the decay rates result in differentiated dynamics , i.e. in distinct sequences of active expressed networks . this allows us to understand recent experimental observations which indicate correlations between cell - type , expressed sets of agents , and decay - rates . for a fixed interaction network , temporal self - organization can be maintained for a wide range of decay rates . we show this in the same -node system considered in fig . [ figure_main1 ] by only varying the decay rate from eq . ( [ eq : matrix ] ) . [ figure_main3 : the lyapunov exponent of the four node system , eq . ( [ eq : matrix ] ) , is shown in ( a ) as a function of the decay rate , which exhibits a `` plateau '' with in the range . in ( b ) the length of the periodic sequence of domains is plotted in green triangles and the number of different active sets as red squares . in ( c ) the sequences of active sets are shown for decay rates , and . the limit circles for decay rates ( short sequence ) and ( long sequence ) are visualized in ( d ) in a poincaré map using three out of four phase - space dimensions . with decreasing , the radius of the limit circle becomes wider and additional sets ( marked with colors ) become active . in ( e ) the spectra of eigenvalues are shown for all the appearing active sets with . ] figure [ figure_main3 ] a shows the lyapunov exponent as a function of . a plateau , where , is clearly visible . if the decay rate is larger than a critical value , the lyapunov exponent becomes negative ( ) and the system becomes stable . if the decay rate is smaller than a critical value of , temporal balance cannot be achieved any more , refocusing breaks down , and the system becomes chaotic and trajectories diverge exponentially with .
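the lyapunov exponent as a function of the decay rate can be estimated with the standard benettin procedure : evolve a reference trajectory together with a slightly perturbed copy and repeatedly renormalize their separation . a hedged sketch ( ` step ` is assumed to perform one constrained integration step of the sequentially linear system , e.g. as in the snippet above , with the decay rate entering through the diagonal of the interaction matrix ) :

```python
import numpy as np

def lyapunov_estimate(step, x0, n_steps, dt=1e-3, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent."""
    x = np.array(x0, dtype=float)
    y = x.copy()
    y[0] += eps                      # tiny initial perturbation
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x, dt), step(y, dt)
        sep = np.linalg.norm(y - x)
        if sep > 0.0:
            log_sum += np.log(sep / eps)
            y = x + (y - x) * (eps / sep)   # rescale separation back to eps
    return log_sum / (n_steps * dt)
```

sweeping the decay rate and plotting the estimate should reproduce the plateau , the stable regime at large decay rates , and the chaotic regime at small ones described above .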
in fig . [ figure_main3 ] b the length of the periodic sequences ( green triangles ) , which is the number of time - domains in a sequence , and the number of different active sets activated in this sequence ( red squares ) are depicted . figure [ figure_main3 ] b also shows that at several critical values of in the plateau region the sequences of active regulatory sub - networks change , when temporal balance can no longer be established merely by adapting the switching times of a sequence . sequences usually do not change completely at critical values of but are only expanded by additional active subsets . this can be seen clearly in the 3d poincaré map of the dynamics , fig . [ figure_main3 ] d , where the sequence of subsystems given by ( for ) gets expanded to the sequence ( for ) . in the materials and methods , fig . ( 1 ) , the longer sequence is also shown in the space of all possible active sets . the mathematical reason why such critical decay rates exist is that changes of shift the eigenvalue spectra of the active interaction matrix , shown in fig . [ figure_main3 ] e , along the real axis . the real part of the leading eigenvalues , , becomes smaller ( larger ) than zero and becomes an attractor ( repellor ) of . the stable fixed point then is either accessible , and the dynamics changes from periodic to stationary , or inaccessible , and the dynamics changes qualitatively but remains periodic . [ figure caption : which agents become active in a given active set , depicted in fig . [ figure_main3 ] b for three different sequences of active sets associated with three different ranges of the decay rate indicated by gray lines ; if node is active in active set , the associated field is white , and black otherwise . ] [ figure_main4 : random system with and , and identical initial conditions for all values of . ( a ) the lyapunov exponent , ( b ) the number of active sets in a period ( if , the sequence is not periodic but a steady state ) , and ( c ) the fraction of expressed nodes are plotted as functions of the uniform decay rates . for , is stable . in the range , has become unstable but the plateau ( ) cannot form since the dynamics finds active sets with stable and accessible . the inset in ( b ) shows that in the plateau region a small window , , exists where again an active set contains an accessible attracting the dynamics . in the range the plateau forms and the dynamics gets periodic . for the system gets unstable . ] the number of _ expressed _ agents is the number of agents that are active at least once during a period of the dynamics . to demonstrate that not only the periodic activation of agents depends on but also the number of expressed nodes itself , we consider a larger sequentially linear system with agents . the interaction matrix of the system is a random matrix with average connectivity , meaning for each node interactions with other agents have been randomly chosen with equal probability . each non - zero entry , describing such an interaction , is drawn from a normal distribution with mean zero and a standard deviation of . this means that the interaction strength is of magnitude on average and has positive or negative sign with equal probability . in fig . [ figure_main4 ] a the lyapunov exponent , in fig . [ figure_main4 ] b the number of sets that become active during a cycle , and in fig . [ figure_main4 ] c the fraction of expressed agents are plotted as functions of .
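the eigenvalue shift mentioned above is elementary to check : adding a uniform decay to the diagonal of an active interaction matrix shifts its whole spectrum left along the real axis by that amount . the sketch below is our own helper ; note that accessibility is approximated here simply by non - negativity of the fixed point , a simplification of the switching - event criterion used in the text .

```python
import numpy as np

def analyze_active_set(W, P, d):
    """Spectrum and fixed point of dx/dt = (W - d*I) x + P."""
    A = W - d * np.eye(W.shape[0])
    eigvals = np.linalg.eigvals(A)            # eig(W) shifted left by d
    x_star = np.linalg.solve(A, -P)           # fixed point: A x* + P = 0
    stable = eigvals.real.max() < 0.0
    accessible = bool(np.all(x_star >= 0.0))  # crude proxy (our assumption)
    return eigvals, x_star, stable, accessible
```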
for large decay rates ( ) the system is stable and is a fixed - point of the dynamics . as decreases , becomes unstable for . however , for the system ends up in some stable accessible fixed point , so that approaches a stationary state and . in this range increases with . the plateau with stable self - organized critical dynamics ( ) only emerges in the range , where the number of active sets and the expressed network size vary strongly . varies between and , which means that changes of the decay rate can induce changes of the size of the expressed network comparable to the magnitude of the full interaction network . a small window of stability exists for ( see inset ) . the strong dependence of on the decay - rate ( up to of the total regulatory network ) demonstrates clearly that decay - rates alone massively influence sequences of active systems , without changing the interaction strength between agents in the regulatory network at all . moreover , decay rates can also cause switches between fixed - point dynamics and periodic dynamics . while fixed points favor larger decay - rates ( in the example ) , there can also exist fixed points for smaller decay rates ( window of stability ) where systems favor periodic dynamics . we presented a model which decomposes the dynamics of molecular concentrations governed by the full molecular regulatory networks into a temporal sequence of active sub - networks . this novel type of model makes it possible not only to reduce the vast complexity of the full regulatory network to sub - networks of manageable size , but also to approximate the complicated dynamics by linear methods . the intrinsic non - linearities in the system which lead to alternating dynamics in concentrations ( as found in countless experiments ) are absorbed into switching events , where the dynamics of one linear system switches to another one . in this view different cell types correspond to different sequences of active sub - networks over time . these sequentially linear models not only make it possible , for the first time , to describe all the relevant dynamical features of the grn ( homeostasis , multi - stability , periodic dynamics , alternating activity , differentiability , and self - organized criticality ) , but also offer a handle to understand the role of molecular decay rates . the fact that sequentially linear dynamics properly models homeostasis , multi - stability and periodic behavior was shown in . here we have shown how self - organized criticality ( the lyapunov exponent self - regulates to zero ) arises as a consequence of temporal balance of switching events . this requires agents to show alternating activity ( being repeatedly on and off ) , which is a natural property by construction of sequentially linear models , and which has posed an unresolved problem for previous models such as the glass - kauffman model and its many variants . the mechanism behind self - organized criticality is based on adaptive switching times which effectively lead to refocusing of perturbed dynamics onto the attractor of sequences of active sub - networks . such a temporal self - organization causes long - time memory of perturbations in terms of phase - shifts of the otherwise unchanged periodic dynamics , causing the lyapunov exponent to become zero . in other words , slight perturbations , e.g.
noise , only cause time - shifts of the sequence of regulatory reactions but do not change the underlying sequence . perturbations are `` remembered '' by the system through non - vanishing phase - shifts , and the dynamics gets `` refocused '' onto the periodic attractor , merely accumulating a time - shift . this has the consequence that the lyapunov exponent is zero and the system self - organizes its criticality by adapting switching - times . practically this means that a system balances the time it spends in its active sub - networks with stable and unstable dynamics ( temporal balance ) . applying the sequentially linear model to the problem of cell - differentiation , we demonstrate that different levels of decay rates are one - to - one related to transitions from one active sub - network sequence ( cell type ) to another . this might be a key ingredient to understand a series of recent experimental facts reported on the role of decay - rate regulation systems and the role of noise in cell differentiation . we found that varying the decay rates alone , while keeping the complete regulatory network fixed over time , substantially modifies the temporal organization of regulatory events . in particular the decay rate controls the number of expressed agents , the sequence of active sub - networks , and sometimes even the type of solution ( stable , stationary , periodic ) . the changes occur at critical levels of decay rates , and the changes can be drastic . for example , we find situations where a 5% variation of the decay rate causes an approximate doubling of the number of expressed agents . this demonstrates that different expression modes , which distinguish different cell - types from each other , can be very efficiently obtained by controlling the decay rates of agents without altering any interactions between agents in the regulatory network , a change which would be very costly in an evolutionary sense . these findings highlight the importance of intracellular decay rate control mechanisms and the role of noise in cell differentiation . [ fig - systree : for the system shown in article fig . ( 3 ) . in the set of all , yellow backgrounds stand for complex leading eigenvalues of the active interaction matrix . black indicates that the agent associated with that index is not active . the gray lines indicate all possible switching events where the number of active agents changes . blue arrows mark the observed sequence of the dynamics for the example eq . ( 8 ) with . ] the eigenvalues and eigenvectors of a matrix are defined as solutions of the matrix equation . the solution of a linear differential equation is of the form . for large times the will therefore point in the direction of the eigenvector with the eigenvalue with the largest real part , and as gets large . if the largest real part of is larger ( smaller ) than zero , will grow ( decay ) exponentially and is an unstable ( stable ) fixed point of the differential equation . let be the maximal real part of the leading eigenvalue of the active interaction matrix associated with the active subset . the effective fixed point is _ stable _ and perturbations of concentrations vanish if . the fixed point is _ accessible _ if approaching does not cause a switching event , and _ inaccessible _ otherwise . stationary solutions of a sequentially linear system therefore require fixed points that are both stable and accessible . suppose a bounded attractor exists for a sequentially linear system with agents . the perturbation at time also affects later switching times of agents , i.e.
such that for some constant , where . since sufficiently fast as ( there exists an attractor ) , the accumulated time shift of switching times remains finite for all times . this shows that the perturbed behaves ( after some time ) just like the unperturbed , only shifted in time . the perturbation neither vanishes nor grows exponentially , and the lyapunov exponent can only be zero ( ) . moreover , since the number of active sets is finite ( ) and the dynamics is bounded , the concentrations have to return to values on the attractor with arbitrary precision within some finite return - time . the remaining concentration difference can be seen as a perturbation , so that the attractor can only be a periodic cycle . the time - shift produces a phase - shift of the periodic dynamics . while eigenvalues tell us something about the stability of a fixed point , the lyapunov exponent tells us something about the stability of the dynamics itself . the lyapunov exponent measures how a small perturbation grows with time . if , the perturbation vanishes exponentially with time ; it grows exponentially if . systems with are chaotic ( unstable dynamics , extremely sensitive to noise or perturbations ) , while indicates stable dynamics insensitive to perturbations and noise . systems with are special , as their dynamics is sensitive to noise and perturbations without `` overreacting '' like chaotic systems . these systems at the `` edge of chaos '' adapt to fluctuations but remain close to their unperturbed dynamics . here we derive a simple approximation of the lyapunov exponent of sequentially linear dynamics which explains temporal self - organization quantitatively . this is necessary for understanding why switching in general happens between active networks with stable and unstable dynamics , and not from one stable ( unstable ) to another stable ( unstable ) active network . qualitative analysis of bounded attractors of sequentially linear dynamics has shown that the attractor is periodic and the lyapunov exponent . characteristic information on the dynamics gets encoded by periodic sequences , with a period of some length such that and ( for large enough ) , as in the example shown in fig . ( [ figure_main1 ] ) in the main article . if the dynamics of the system remained in one active network , the lyapunov exponent would be identical to the largest real part of the eigenvalues of . the lyapunov exponent of the sequentially linear system therefore is well approximated by the time average over ( convergence toward , or toward the direction of the leading , possibly complex , eigenvector if is unstable , remains incomplete , since convergence is always interrupted by a switching event ) , i.e. . since the dynamics is periodic , the time average only needs to be taken over one period , and since , one gets for large enough . the `` refocusing '' mechanism discussed above qualitatively therefore also `` balances '' the times specific active sets remain active , by fine - tuning switching times ( approaches zero consistently as the time increment is made smaller and orbits become periodic again ) , such that contributions from time - domains with stable ( ) and unstable ( ) dynamics compensate each other . _ temporal balance _ and _ refocusing _ are two aspects of the temporal self - organizing principle manipulating switching times . this work has been funded by the _ forum integrativmedizin _ , an initiative of the _ hilde umdasch privatstiftung _ . hood l , heath jr , phelps me , lin b ( 2004 ) systems biology and new technologies enable predictive and preventative medicine .
gavin ac , aloy p , grandi p , krause r , boesche m , et al . ( 2006 ) proteome survey reveals modularity of the yeast cell machinery . _ nature _ 440:631-636 . kashtan n , alon u ( 2005 ) spontaneous evolution of modularity and network motifs . _ proc natl acad sci usa _ 102:13773-13778 . métivier r , penot g , hübner mr , reid g , brand h , et al . ( 2003 ) estrogen receptor - directs ordered , cyclical , and combinatorial recruitment of cofactors on a natural target promoter . _ cell _ 115:751-763 . ciechanover a ( 2006 ) intracellular protein degradation : from a vague idea thru the lysosome and the ubiquitin - proteasome system and onto human diseases and drug targeting . _ exp biol med _ 231:1197-1211 . bossi a , lehner b ( 2009 ) tissue specificity and the human protein interaction network . _ mol syst biol _ 5:260 . burkard tr , planyavsky m , kaupe i , breitwieser fp , bürckstümmer t , et al . ( 2011 ) initial characterization of the human central proteome . _ bmc syst biol _ 5:17 . kauffman sa ( 1993 ) _ the origins of order : self - organization and selection in evolution _ . ( oxford university press ) . bhattacharjya a , liang s ( 1996 ) power - law distributions in some random boolean networks . _ phys rev lett _ 77:1644-1647 . rewieński m , white j ( 2003 ) a trajectory piecewise - linear approach to model order reduction and fast simulation of nonlinear circuits and micromachined devices . _ ieee / acm transactions on computer - aided design of integrated circuits and systems _ 252 - 257 . yagil g , yagil e ( 1971 ) on the relation between effector concentration and the rate of induced enzyme synthesis . _ biophys j _ 11:11-27 . ptashne m ( 1992 ) _ a genetic switch : phage and higher organisms _ . _ blackwell science & cell press _ . casey r , de jong h , gouzé jl ( 2006 ) piecewise - linear models of genetic regulatory networks : equilibria and their stability . _ j math biol _ 52:27-56 .
systemic properties of living cells are the result of molecular dynamics governed by so - called genetic regulatory networks ( grn ) . these networks capture all possible features of cells and are responsible for the immense levels of adaptation characteristic of living systems . at any point in time only small subsets of these networks are active . any active subset of the grn leads to the expression of particular sets of molecules ( expression modes ) . the subsets of active networks change over time , leading to the observed complex dynamics of expression patterns . understanding this dynamics is becoming increasingly important in systems biology and medicine . while the importance of transcription rates and catalytic interactions has been widely recognized in modeling genetic regulatory systems , the understanding of the role of degradation of biochemical agents ( mrna , protein ) in regulatory dynamics remains limited . recent experimental data suggest that there exists a functional relation between mrna and protein decay rates and expression modes . in this paper we propose a model for the dynamics of successions of sequences of active subnetworks of the grn . the model is able to reproduce key characteristics of molecular dynamics , including homeostasis , multi - stability , periodic dynamics , alternating activity , differentiability , and self - organized critical dynamics . moreover , the model provides a natural understanding of the mechanism behind the relation between decay rates and expression modes . the model explains recent experimental observations that decay - rates ( or turnovers ) vary between differentiated tissue - classes at a general systemic level , and highlights the role of intracellular decay rate control mechanisms in cell differentiation .
compressed sensing ( cs ) is an emerging sparse sampling theory which has recently received a large amount of attention in the area of signal processing . consider a -sparse signal which has at most nonzero entries . let be a measurement matrix with and be a measurement vector . compressed sensing deals with recovering the original signal from the measurement vector by finding the sparsest solution to the underdetermined linear system , i.e. , solving the following _ -optimization _ problem , where denotes the -norm or ( hamming ) weight of . unfortunately , it is well known that the problem ( [ l0 ] ) is np - hard in general . in compressed sensing , there are essentially two methods to deal with it . the first one pursues greedy algorithms for -optimization , such as the orthogonal matching pursuit ( omp ) and its modifications . the second method considers a convex relaxation of ( [ l0 ] ) , or the _ -optimization _ ( basis pursuit ) problem , as follows , where denotes the -norm of . note that the problem ( [ l1 ] ) can be turned into a linear programming ( lp ) problem and is thus tractable . the construction of the measurement matrix is one of the main concerns in compressed sensing . in order to select an appropriate matrix , we need some criteria . in their earlier and fundamental works , donoho and elad introduced the concept of _ spark _ . the spark of a measurement matrix , denoted by , is defined to be the smallest number of columns of that are linearly dependent , i.e. , , where . furthermore , several lower bounds of were obtained , and it was shown that if , then any -sparse signal can be exactly recovered by the -optimization ( [ l0 ] ) . in fact , we will see in the appendix that the condition ( [ proeq1 ] ) is also necessary for both the -optimization ( [ l0 ] ) and the -optimization ( [ l1 ] ) . hence , the spark is an important performance parameter of the measurement matrix . other useful criteria include the well - known restricted isometry property ( rip ) and the nullspace characterization . although most known constructions of measurement matrices rely on rip , we will use the spark instead in this paper since the spark is simpler and easier to deal with in some cases . generally , there are two main kinds of construction methods for measurement matrices : random constructions and deterministic constructions . many random matrices , e.g. , fourier matrices , gaussian matrices , rademacher matrices , _ etc _ , have been verified to satisfy rip with overwhelming probability . although random matrices perform quite well on average , there is no guarantee that a specific realization works . moreover , storing a random matrix may require a large amount of storage space . on the other hand , a deterministic matrix is often generated on the fly , and some properties , e.g. , spark , girth and rip , can be verified definitively . there are many works on deterministic constructions . among these , constructions from coding theory have attracted much attention , e.g. , amini and marvasti used bch codes to construct binary , bipolar and ternary measurement matrices , and li _ et al . _ employed algebraic curves to generalize the constructions based on reed - solomon codes .
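because the spark is defined combinatorially , it can be computed exactly for small test matrices by exhaustive search over column subsets ; the cost is exponential in the number of columns , so this is a verification tool rather than part of any recovery algorithm . a minimal sketch in python ( our own helper ) :

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A
    (brute force; only feasible for small matrices)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1   # all columns in general position
```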
in this paper , we usually use to denote a real measurement matrix and a binary measurement matrix . recently , connections between ldpc codes and cs have excited interest . dimakis , smarandache , and vontobel pointed out that the lp decoding of ldpc codes is very similar to the lp reconstruction of cs , and further showed that parity - check matrices of good ldpc codes can be used as _ provably _ good measurement matrices under basis pursuit . ldpc codes are a class of linear block codes , each of which is defined by the nullspace over of a binary sparse parity - check matrix . let and denote the sets of column indices and row indices of , respectively . the _ tanner graph _ corresponding to is a bipartite graph comprising variable nodes labeled by the elements of , check nodes labeled by the elements of , and the edge set , where there is an edge if and only if . the _ girth _ of , or briefly the girth of , is defined as the minimum length of cycles in . obviously , is always an even number and . is said to be -_ regular _ if has the uniform column weight and the uniform row weight . the performance of an ldpc code under iterative / lp decoding over a binary erasure channel is completely determined by certain combinatorial structures , called stopping sets . a _ stopping set _ of is a subset of such that the restriction of to , say , does not contain a row of weight one . the smallest size of a nonempty stopping set , denoted by , is called the stopping distance of . lu _ et al . _ verified by a series of experiments that binary sparse measurement matrices constructed by the well - known peg algorithm significantly outperform gaussian random matrices . similar to the situation in constructing ldpc codes , matrices with girth 6 or higher are preferred in the above two works . in this paper , we manage to establish more connections between ldpc codes and cs . our main contributions focus on the following two aspects . * _ lower bounding the spark of a binary measurement matrix . _ as an important performance parameter for ldpc codes , the stopping distance plays a role similar to that of the spark in cs . firstly , we show that , which again verifies the fact that good parity - check matrices are good measurement matrices . a special case of this lower bound is the binary corollary of the lower bound for real matrices in . then , a new general lower bound of is obtained , which improves the previous one in most cases . furthermore , for a class of binary matrices from finite geometry , we give two further improved lower bounds to show their relatively large spark . * _ constructing binary measurement matrices with relatively large spark . _ ldpc codes based on finite geometry can be found in . with similar methods , two classes of deterministic constructions based on finite geometry are given , where the girth equals 4 or 6 . the above lower bounds on spark ensure that the proposed matrices have relatively large spark . simulation results show that the proposed matrices perform well , and in many cases significantly better than the corresponding gaussian random matrices , under the omp algorithm . even in the case of girth 4 , some proposed constructions still manifest good performance . moreover , most of the proposed matrices can be put in either cyclic or quasi - cyclic form , and thus the hardware realization of sampling becomes easier and simpler . the rest of the paper is organized as follows .
in section [ fgldpc ] we give a brief introduction to finite geometries and their parallel and quasi - cyclic structures , which naturally result in the two classes of deterministic constructions . section [ mainresults ] presents our main results : two lower bounds of spark for general binary matrices , and two further improved lower bounds for the proposed matrices from finite geometry . simulation results and related remarks are given in section [ simulation ] . finally , section [ conclusion ] concludes the paper with some discussions . finite geometry was used to construct several classes of parity - check matrices of ldpc codes which manifest excellent performance under iterative decoding . we will see in the later sections that most of these structured matrices are also good measurement matrices in the sense that they often have considerably large spark and may manifest better performance than the corresponding gaussian random matrices under the omp algorithm . in this section , we introduce some notation and results of finite geometry . let be a finite field of elements and be the -dimensional vector space over , where . let be the -dimensional euclidean geometry over . has points , which are vectors of . a -flat in is a -dimensional subspace of or one of its cosets . let be the -dimensional projective geometry over . is defined in . two nonzero vectors are said to be equivalent if there is such that . it is well known that all equivalence classes of form the points of , which has points . a -flat in is simply the set of equivalence classes contained in a -dimensional subspace of . in this paper , in order to present a unified approach , we use to denote either or . a point is a -flat , a _ line _ is a -flat , and a -flat is called a _ hyperplane _ . for , there are -flats contained in a given -flat and -flats containing a given -flat , where for and , respectively , . let and be the numbers of -flats and -flats in , respectively . the -flats and -flats are indexed from to and from to , respectively . the incidence matrix of -flats over -flats is a binary matrix , where for and , if and only if the -flat contains the -flat . the rows of correspond to all the -flats in and have the same weight . the columns of correspond to all the -flats in and have the same weight . hence , is a -regular matrix , where . the incidence matrix or will be employed as a measurement matrix and called respectively the _ type i _ or _ type ii finite geometry measurement matrix _ . moreover , by puncturing some rows or columns of or , we can construct a large number of measurement matrices with various sizes . to obtain submatrices of or with better performance , the parallel structure of euclidean geometry is often employed as follows . in this class of constructions , an important rule in puncturing the rows or columns of or is to make the remaining submatrix as regular as possible . a possible explanation may come from theorem [ preth ] in the next section . this rule can be applied since the euclidean geometry has the parallel structure and all -flats ( or -flats ) can be arranged in a suitable order . since a projective geometry does not have the parallel structure , we concentrate on only .
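for the euclidean plane eg ( 2 , p ) with p prime , the line - point incidence matrix and its parallel bundles can be generated directly with modular arithmetic . the sketch below is our own construction , restricted to prime fields ( extension fields such as gf ( 2^s ) would require explicit field arithmetic ) ; it returns the p^2 + p lines as rows , grouped into p + 1 parallel bundles of p disjoint lines each :

```python
import numpy as np

def eg2_incidence(p):
    """Line-over-point incidence matrix of EG(2, p), p prime.
    Rows: p*p lines y = s*x + b plus p vertical lines; columns: p*p points.
    Row weight p, column weight p + 1, girth 6."""
    n_points = p * p
    idx = lambda x, y: x * p + y          # point (x, y) -> column index
    rows = []
    for s in range(p):                    # parallel bundle of slope s
        for b in range(p):
            row = np.zeros(n_points, dtype=int)
            for x in range(p):
                row[idx(x, (s * x + b) % p)] = 1
            rows.append(row)
    for c in range(p):                    # bundle of vertical lines x = c
        row = np.zeros(n_points, dtype=int)
        for y in range(p):
            row[idx(c, y)] = 1
        rows.append(row)
    return np.vstack(rows)
```

deleting groups of p consecutive rows then corresponds exactly to removing parallel bundles , which is the puncturing rule used in the examples of section iv .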
recall that a -flat in is a -dimensional subspace of or its coset . a -flat contains points . two -flats are either disjoint or they intersect in a flat of dimension at most . the -flats that correspond to the cosets of a -dimensional subspace of ( including the subspace itself ) are said to be parallel to each other and form a parallel bundle . these parallel -flats are disjoint and contain all the points of , with each point appearing once and only once . the number of parallel -flats in a parallel bundle is . there are in total -flats , which form parallel bundles in . we index these parallel bundles from . consider the incidence matrix of -flats over -flats . all rows of can be divided into bundles , each of which contains rows , i.e. , by a suitable row arrangement , can be written as , where ( ) is a submatrix of and corresponds to the -th parallel bundle of -flats . clearly , the row weight of remains unchanged and its column weight is 1 or 0 . similar to the ordering of rows , the columns of can also be ordered according to the parallel bundles in . hence , by deleting some row parallel bundles or column parallel bundles from , and transposing the obtained submatrix if needed , we can construct a large number of measurement matrices with various sizes . this will be illustrated by several examples in section iv . in this paper , we call or the first class of binary measurement matrices from finite geometry , and their punctured versions the second class of binary measurement matrices from finite geometry . apart from the parallel structure of euclidean geometry , most of the incidence matrices in euclidean geometry and projective geometry also have cyclic or quasi - cyclic structure . this is accomplished by grouping the flats of two different dimensions of a finite geometry into cyclic classes . for a euclidean geometry , only the flats not passing through the origin are used for matrix construction . based on this grouping of rows and columns , the incidence matrix in finite geometry consists of square submatrices ( or blocks ) , and each of these square submatrices is a circulant matrix in which each row is a cyclic shift of the row above it and the first row is the cyclic shift of the last row . note that by puncturing the row blocks or column blocks of the incidence matrices , the remaining submatrices are often as regular as possible . in other words , this technique is compatible with the parallel structure of euclidean geometry . hence , the sampling process with these measurement matrices is easy and can be achieved with linear shift registers . for detailed results and discussions , we refer the readers to appendix a of . the definition of spark was introduced by donoho and elad to help build a theory of sparse representation that later gave birth to compressed sensing . as we see from ( [ proeq1 ] ) , the spark of the measurement matrix can be used to guarantee the exact recovery of -sparse signals . as a result , while choosing measurement matrices , those with large sparks are preferred . however , the computation of the spark is generally np - hard . in this section , we give several lower bounds of the spark for general binary matrices and for the binary matrices constructed in section ii from finite geometry . these theoretical results guarantee , to some extent , the good performance of the proposed measurement matrices under the omp algorithm . firstly , we give a relationship between the spark and the stopping distance of a general binary matrix .
for a real vector , the support of is defined as the set of non - zero positions , i.e. , . clearly , . traditionally , an easily computable property of a matrix , the _ coherence _ , is used to bound its spark . for a matrix with column vectors ( assumed to contain no all - zero column ) , the coherence is defined by , where denotes the inner product of vectors . furthermore , it is shown in that . note that this lower bound applies to general real matrices . for a general binary matrix , the next theorem shows that the spark can be lower bounded by the stopping distance . [ preth0 ] let be a binary matrix . then . for any , the support of must be a stopping set of . to see this , assume the contrary that is not a stopping set . by the definition of stopping set , there is one row of containing only one ` 1 ' on . then the inner product of and this row will be nonzero , which contradicts the fact that . hence , according to the definitions of stopping distance and spark , . let be the parity - check matrix of a binary hamming code , which consists of all -dimensional non - zero column vectors . it is easy to check that , which implies that the lower bound ( [ lbpre ] ) can be achieved . this lower bound verifies again the fact that good parity - check matrices are also good measurement matrices . in particular , consider a binary matrix . suppose the minimum column weight of is , and the maximum inner product of any two different columns of is . by ( [ defcoh ] ) , we have . thus the lower bound ( [ generalspark ] ) from implies . on the other hand , it was proved that . hence , the bound ( [ binarysparkold ] ) is a natural corollary of theorem [ preth0 ] . as a matter of fact , for a general binary matrix , we often have a tighter lower bound on its spark . [ preth ] let be a binary matrix . suppose the minimum column weight of is , and the maximum inner product of any two different columns of is . then . for any , we split the non - empty set into two parts and ; without loss of generality , we assume that . for fixed , by selecting the -th column of and all the columns in of , we get a submatrix . since the column weight of is at least , we can select rows of to form a submatrix of , say , where the column corresponding to is an all - 1 column . now let us count the total number of 1 s of in two ways . * from the view of columns : since the maximum inner product of any two different columns of is , each of the columns of corresponding to has at most 1 s . so the total number is at most . * from the view of rows : we claim that there are at least two 1 s in each row of , which implies that the total number is at least . the claim is shown as follows . let be a row of and be its corresponding row in . note that . since , , which implies that . so there is at least one 1 in and has at least two 1 s . therefore , , which implies that . since , , and the conclusion follows . note that the matrix in theorem [ preth ] has girth at least 6 if . when , it is clear that the lower bound ( [ thgeneral ] ) of theorem [ preth ] is tighter than ( [ binarysparkold ] ) . combining theorem [ preth ] with ( [ proeq1 ] ) , we have that any -sparse signal can be exactly recovered by the -optimization ( [ l0 ] ) if . consider the complete graph on 4 vertices . the incidence matrix is clearly , and the lower bound ( [ thgeneral ] ) is 4 . moreover , it is easy to check that and is a stopping set , which implies that and . this result can be generalized to the complete graph on nodes .
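both quantities in this example can be checked directly . the sketch below computes the stopping distance by brute force and , combined with the exhaustive ` spark ` routine given earlier , reproduces spark 4 and stopping distance 3 for the incidence matrix of the complete graph on 4 vertices ( matrix written out by hand ; exhaustive search , so small examples only ) :

```python
import itertools
import numpy as np

def stopping_distance(H):
    """Smallest non-empty stopping set: a column subset S such that
    no row of H restricted to S has weight exactly one."""
    m, n = H.shape
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            if not np.any(H[:, S].sum(axis=1) == 1):
                return k
    return n + 1

# incidence matrix of K4: rows = 4 vertices, columns = 6 edges
K4 = np.array([[1, 1, 1, 0, 0, 0],   # vertex 1 lies on edges 12, 13, 14
               [1, 0, 0, 1, 1, 0],   # vertex 2 lies on edges 12, 23, 24
               [0, 1, 0, 1, 0, 1],   # vertex 3 lies on edges 13, 23, 34
               [0, 0, 1, 0, 1, 1]])  # vertex 4 lies on edges 14, 24, 34
# stopping_distance(K4) == 3 (a triangle), spark(K4) == 4 (a 4-cycle)
```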
then the spark and stopping distance are also 4 and 3 , respectively , which implies that the lower bound ( [ thgeneral ] ) of theorem [ preth ] can be achieved and may be tighter than the one in theorem [ preth0 ] . clearly , the lower bound of theorem [ preth ] for general binary matrices applies to all the ones constructed in section ii from finite geometry . in this subsection , we will show that for these structured matrices based on finite geometry , tighter lower bounds can be obtained . let be the incidence matrix of -flats over -flats in , where , and . recall that or are called respectively the type - i or type - ii finite geometry measurement matrix . the following lemma is needed to establish our results . [ lem1 ] let and . given any different -flats in and for any , there exists one -flat such that and for all . [ thm1 ] let be integers , and be the type - i finite geometry measurement matrix . then , where . let and assume the contrary that . select a such that . by ( [ split1 ] ) and ( [ split2 ] ) , we split the non - empty set into two parts and , and assume without loss of generality . thus , by the assumption , . for fixed , by selecting the -th column of and all the columns in of , we get a submatrix . the number of columns in is and not greater than . let and be the -flats corresponding to the columns of . by lemma [ lem1 ] , there exists one -flat such that and for all . there are exactly -flats containing . note that among these -flats , any two distinct -flats have no common points other than those points in ( see ) . hence , each of these -flats contains the -flat , and for any , there exists at most one of these -flats containing the -flat . in other words , there exist rows in such that each of these rows has component at position , and for any , there exists at most one row that has component at position . let be the submatrix of obtained by choosing these rows , where the column corresponding to is an all - 1 column . now let us count the total number of 1 s of in two ways . the column corresponding to has s , while each of the other columns has at most one . thus , from the view of columns , the total number of s in is at most . on the other hand , suppose is the number of rows in with weight one . then there are rows with weight at least two . thus , from the view of rows , the total number of s in is at least . hence , , which implies that by the assumption . in other words , contains a row with value at the position corresponding to and at the other positions . denote this row by and let be its corresponding row in . note that and . since , , which leads to a contradiction . therefore , the assumption is wrong and the theorem follows by ( [ fg3 ] ) . combining theorem [ thm1 ] with ( [ proeq1 ] ) , we have that when the type - i finite geometry measurement matrix is used , any -sparse signal can be exactly recovered by the -optimization ( [ l0 ] ) if . [ remark2 ] for the type - i finite geometry measurement matrix , it is known that . thus , by theorem [ preth0 ] , . obviously , has uniform column weight . the inner product of two different columns equals the number of -flats containing two fixed -flats . it is easy to see that the maximum inner product is . thus , by theorem [ preth ] , . it is easy to verify by ( [ fg3 ] ) that , where the last inequality always holds because of and . this implies that the lower bound ( [ lb2 ] ) is strictly tighter than the lower bound ( [ lb1 ] ) .
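the coherence - type quantities entering these comparisons are cheap to compute for any explicit matrix , which makes it easy to compare the classical bound spark >= 1 + 1/mu with the improved bounds numerically . a minimal sketch ( our own helpers ; assumes no all - zero column ) :

```python
import numpy as np

def coherence(A):
    """Largest normalised inner product between distinct columns."""
    A = np.asarray(A, dtype=float)
    G = A.T @ A
    norms = np.sqrt(np.diag(G))
    C = np.abs(G) / np.outer(norms, norms)
    np.fill_diagonal(C, 0.0)
    return C.max()

def coherence_spark_bound(A):
    """Classical lower bound: spark(A) >= 1 + 1/mu(A)."""
    return 1.0 + 1.0 / coherence(A)
```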
on the other hand , the lower bound ( [ eqthm1 ] ) of theorem [ thm1 ] is tighter than the lower bound ( [ lb2 ] ) . this is because . in other words , for , the three lower bounds ( [ eqthm1 ] ) , ( [ lb2 ] ) , and ( [ lb1 ] ) satisfy , where the inequality becomes an equality if and only if . similarly , for the type - ii finite geometry measurement matrix , we obtain the following results . [ lem2 ] let and . given any different -flats in and for any , there exists one -flat such that and for all . [ thm2 ] let be integers , and be the type - ii finite geometry measurement matrix . then , where for euclidean geometry ( eg ) and projective geometry ( pg ) , respectively , . note that the columns of are rows of . let and assume the contrary that . select a such that . we split into and , and assume without loss of generality . thus . for fixed , by selecting the -th column of and all the columns in of , we get a submatrix . the number of columns in is and not greater than . let and be the -flats corresponding to the columns of . by lemma [ lem2 ] , there exists one -flat such that and for all . there are exactly -flats contained in . now , we claim that contains these -flats and that ( ) contains at most one -flat among these -flats . otherwise , if ( ) contains at least two distinct -flats among these -flats , then must contain , since is the only -flat containing these two distinct -flats . this contradicts the fact that is not contained in . hence , there exist rows in such that each of these rows has component 1 at position , and for any , there exists at most one row that has component 1 at position . using the same argument as in the proof of theorem [ thm1 ] , this leads to a contradiction . therefore , the assumption is wrong and the theorem follows by ( [ fg1 ] ) and ( [ fg2 ] ) . combining theorem [ thm2 ] with ( [ proeq1 ] ) , we have that when the type - ii finite geometry measurement matrix is used , any -sparse signal can be exactly recovered by the -optimization ( [ l0 ] ) if . [ remark3 ] for the type - ii finite geometry measurement matrix , it is known that . thus , by theorem [ preth0 ] , . obviously , has uniform column weight . the inner product of two different columns equals the number of -flats contained in two fixed -flats at the same time . it is easy to see that the maximum inner product is . thus , by theorem [ preth ] , . using the same argument as in remark [ remark2 ] , we have that for , the three lower bounds ( [ eqthm2 ] ) , ( [ lb2 t ] ) , and ( [ lb1 t ] ) satisfy , where the inequality becomes an equality if and only if or , for euclidean geometry and for projective geometry , respectively . in this section , we give some simulation results on the performance of the two proposed classes of binary measurement matrices from finite geometry . the theoretical results on the sparks of these matrices in the last section explain , to some extent , their good performance . afterwards , we show by examples how to employ the parallel structure of euclidean geometry to construct measurement matrices with flexible parameters . all the simulations are performed under the same conditions as in . the upcoming figures show the percentage of perfect recovery ( snr ) when different sparsity orders are considered . for the generation of the -sparse input signals , we first select the support uniformly at random and then generate the corresponding values independently from the standard normal distribution .
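such a recovery experiment is easy to reproduce ; the sketch below uses a textbook omp implementation ( it is an assumption , not a statement from the paper , that the simulations used this exact variant , and all names are ours ) :

```python
import numpy as np

def omp(A, y, k):
    """Textbook orthogonal matching pursuit: greedily select the column
    most correlated with the residual, then re-fit by least squares."""
    m, n = A.shape
    support, r = [], y.astype(float)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    x = np.zeros(n)
    x[support] = xs
    return x

def recovery_trial(A, k, rng):
    """One trial: uniformly random support, i.i.d. standard normal values."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return np.allclose(omp(A, A @ x, k), x, atol=1e-6)
```

averaging ` recovery_trial ` over many runs for each sparsity order gives percentage - of - perfect - recovery curves of the kind plotted in the figures .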
the omp algorithm is used to reconstruct the -sparse input signals from the compressed measurements , and the results are averaged over 5000 runs for each sparsity . for the gaussian random matrix , each entry is chosen _ i.i.d . _ from . the percentages of perfect recovery of both the proposed matrix ( red line ) and the corresponding gaussian random matrix ( blue line ) with the same size are shown in the figures for comparison . from theorem [ thm1 ] and theorem [ thm2 ] , the two types of finite geometry measurement matrices have relatively large sparks , and thus we expect them to perform well under the omp . for the type - i finite geometry measurement matrix , we expect to recover at least -sparse signals , while for the type - ii finite geometry measurement matrix , we expect to recover at least -sparse signals . [ example1 ] let and . the consists of 3-flats and 1-flats . let be the incidence matrix of 3-flats over 1-flats in . then is a type - i euclidean geometry measurement matrix . has girth 4 and is -regular , where and . moreover , according to theorem [ thm1 ] . from fig . [ figeg4213 ] , it is easy to see that the performance of the proposed matrix is better than that of the gaussian random matrix . in particular , for all signals with sparsity order , the recovery is perfect . this example shows that some girth 4 matrices from finite geometry can also perform very well . [ figeg4213 : percentage of perfect recovery for with and the corresponding gaussian random matrix . ] [ figpg3401 : percentage of perfect recovery for with and the corresponding gaussian random matrix . ] [ example2 ] let and . the consists of lines and points , and is the incidence matrix of lines over points in . then is a type - ii projective geometry measurement matrix . has girth 6 and is -regular , where and . moreover , by theorem [ thm2 ] . it is observed from fig . [ figpg3401 ] that performs better than the gaussian random matrix , and the sparsity order with exact recovery may exceed the one ensured by the proposed lower bound . for , exact recovery is obtained and the corresponding points are not plotted for clear comparison ; similar methods are used in the following figures . [ percentage of perfect recovery for with and the corresponding gaussian random matrix ; the step size of is 4 . ] let , , , and . the consists of lines and points , and is the incidence matrix of lines over points in . then is a type - ii euclidean geometry measurement matrix . has girth 6 and is -regular , where and . moreover , by theorem [ thm2 ] . note that the step size of the sparsity order is 4 . [ percentage of perfect recovery for with and the corresponding gaussian random matrix ; the step size of is 6 . ] let , , , and . the consists of 2-flats and 1-flats . let be the incidence matrix of 2-flats over 1-flats in . then is a type - i euclidean geometry measurement matrix . has girth 6 and is -regular , where and . moreover , by theorem [ thm1 ] . fig . [ figeg3812 ] shows that some matrices from finite geometry have very good performance for moderate lengths of input signals ( about 5000 ) . the parallel structure of euclidean geometry is very useful for obtaining various measurement matrices . next , we show by several examples how to puncture rows or columns from the incidence matrix or . [ example - parallel-256 ] let and . the euclidean plane consists of points and lines . let be the incidence matrix .
since is close to , neither nor is suitable as a measurement matrix directly . however , according to the parallel structure of described in section [ fgldpc ] , all the 272 lines can be divided into parallel bundles , each bundle consisting of 16 lines . by ( [ hpara ] ) , , where for , consists of the 16 lines in the -th parallel bundle . by choosing the first submatrices , we get an measurement matrix with uniform column weight . fig . [ figpara256 ] shows the performance of the , , , submatrices of which correspond to the first 4 , 5 , 6 and 7 parallel bundles of lines in , respectively . [ figpara256 : percentage of perfect recovery for the submatrices with and their corresponding gaussian random matrices . the rows of the 4 submatrices from left to right are chosen according to the first 4 , 5 , 6 and 7 parallel bundles of lines in , respectively ; the step size of is 2 . ] from fig . [ figpara256 ] we can see that all of the proposed submatrices perform better than their corresponding gaussian random matrices ; the more parallel bundles are chosen , the better the submatrix performs , and the larger its gain over the corresponding gaussian random matrix becomes . [ example - parallel-1024 ] let and . the euclidean plane consists of points and lines . let be the incidence matrix . all the 1056 lines can be divided into parallel bundles , each bundle consisting of 32 lines . by ( [ hpara ] ) , , where consists of the 32 lines in the -th parallel bundle . by choosing the first parallel bundles , we get an measurement matrix with uniform column weight . fig . [ figpara1024 ] shows the performance of the , , , submatrices of which correspond to the first 6 , 8 , 10 and 12 parallel bundles of lines in , respectively . from fig . [ figpara1024 ] it is observed that all of the submatrices perform better than their corresponding gaussian random matrices ; the more parallel bundles are chosen , the better the submatrix performs , and the larger its gain over the corresponding gaussian random matrix becomes . [ figpara1024 : percentage of perfect recovery for the submatrices with and their corresponding gaussian random matrices . the rows of the 4 submatrices from left to right are chosen according to the first 6 , 8 , 10 and 12 parallel bundles of lines in , respectively ; the step size of is 8 . ] [ figpara1024-del : percentage of perfect recovery for the submatrices and their corresponding gaussian random matrices , where is the submatrix of in example 6 . the 4 submatrices ( the red lines from left to right ) are obtained by deleting 0 , 128 , 256 , 384 columns of , respectively ; the step size of is 4 . ] [ example - parallel-1024-del ] consider the submatrix in , say , in the last example . we will puncture its columns to obtain more measurement submatrices . recall that and that the first 10 submatrices are chosen to obtain . for the fixed submatrix , its corresponding 32 lines are parallel to each other and partition the geometry . hence , when selecting the first lines from , the points on these lines are pairwise different , and the total number of points is , since each line contains 32 points . by deleting the columns corresponding to these points from , we obtain a submatrix , where is still regular .
the 4 red lines from left to right in fig . [ figpara1024-del ] show the performance of the , , , submatrices of which correspond to , respectively . it is observed that all of the submatrices perform better than their corresponding gaussian random matrices ( the 4 blue lines from left to right ) , but the gain becomes slightly smaller when more columns are deleted . in this paper , by drawing methods and results from ldpc codes , we study the performance evaluation and deterministic constructions of binary measurement matrices . the spark criterion is used because of its similarity to the stopping distance of an ldpc code and the fact that a matrix with large spark may perform well under the approximate algorithms of -optimization , e.g. , the well - known omp algorithm . lower bounds of spark were proposed for real matrices many years ago . when real matrices are replaced by binary matrices , better results may emerge . firstly , two lower bounds of spark are obtained for general binary matrices , which improve the one derived from in most cases . then , we propose two classes of deterministic binary measurement matrices based on finite geometry . one class is the incidence matrix of -flats over -flats in finite geometry or its transpose , which are called respectively the type i or type ii finite geometry measurement matrix . the other class is the submatrices of or , especially those obtained by deleting row parallel bundles or column parallel bundles from or in euclidean geometry . in this way , we can construct a large number of measurement matrices with various sizes . moreover , most of the proposed matrices have cyclic or quasi - cyclic structure , which makes the hardware realization convenient and easy . for the type i or ii finite geometry measurement matrices , two further improved lower bounds of spark are given to show their relatively large sparks . finally , extensive simulations are performed according to standard and comparable procedures . the simulation results show that in many cases the proposed matrices perform better than gaussian random matrices under the omp algorithm . future work may include giving more lower or upper bounds of sparks for general binary measurement matrices , determining the exact value of the spark for some classes of measurement matrices , and constructing more measurement matrices with large sparks . we only need to show the necessity . clearly , the measurement matrix does not have an all - 0 column , which implies that . assume the contrary that . select a such that . let , where is the floor function . then let be the -th non - zero position of and set . let . clearly , . in other words , both and are -sparse vectors , and may be sparser . however , since , we have , which implies that cannot be exactly recovered by the -optimization ( [ l0 ] ) . this finishes the proof . it is known that any -sparse signal can be exactly recovered by the -optimization ( [ l1 ] ) if and only if satisfies the so - called _ nullspace property _ , or for any and any with , where . assume the contrary that . by selecting a such that , it is easy to see that does not satisfy ( [ kspcon1 ] ) for some -subset , e.g. , letting be the set of positions with the largest s . this leads to a contradiction , which implies the conclusion . e. j. candès , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans . inf . theory _ , vol . 52 , no . 2 , pp . 489-509 , feb . 2006 . w. xu and b.
hassibi , `` compressed sensing over the grassmann manifold : a unified analytical framework , '' in _ proc . 46th allerton conf . commun . , control , comput . _ , monticello , il , sep . 2008 , pp . 562-567 . m. stojnic , w. xu , and b. hassibi , `` compressed sensing - probabilistic analysis of a null - space characterization , '' in _ proc . ieee int . conf . acoust . , speech signal process . _ , las vegas , nv , mar . 31 - apr . 4 , 2008 , pp . 3377-3380 . l. applebaum , s. d. howard , s. searle , and r. calderbank , `` chirp sensing codes : deterministic compressed sensing measurements for fast recovery , '' _ appl . comput . harmon . anal . _ , vol . 26 , no . 2 , pp . 283-290 , mar . 2009 . m. a. iwen , `` simple deterministically constructible rip matrices with sublinear fourier sampling requirements , '' in _ proc . information sciences and systems _ , baltimore , md , usa , 2009 , pp . 870-875 . s. d. howard , a. r. calderbank , and s. j. searle , `` a fast reconstruction algorithm for deterministic compressive sensing using second order reed - muller codes , '' in _ proc . information sciences and systems _ , princeton , nj , usa , 2008 , pp . 11-15 . n. kashyap and a. vardy , `` stopping sets in codes from designs , '' in _ proc . ieee int . symp . inf . theory _ , yokohama , japan , june 29 - july 4 , 2003 , p. 122 . the full version is available online via http://www.mast.queensu.ca/ nkashyap / papers / stopsets.pdf .
for a measurement matrix in compressed sensing , its spark ( or the smallest number of columns that are linearly dependent ) is an important performance parameter . the matrix with spark greater than guarantees the exact recovery of -sparse signals under an -optimization , and the one with large spark may perform well under approximate algorithms of the -optimization . recently , dimakis , smarandache and vontobel revealed the close relation between ldpc codes and compressed sensing and showed that good parity - check matrices for ldpc codes are also good measurement matrices for compressed sensing . by drawing methods and results from ldpc codes , we study the performance evaluation and constructions of binary measurement matrices in this paper . two lower bounds of spark are obtained for general binary matrices , which improve the previously known results for real matrices in the binary case . then , we propose two classes of deterministic binary measurement matrices based on finite geometry . two further improved lower bounds of spark for the proposed matrices are given to show their relatively large sparks . simulation results show that in many cases the proposed matrices perform better than gaussian random matrices under the omp algorithm . compressed sensing ( cs ) , measurement matrix , -optimization , spark , binary matrix , finite geometry , ldpc codes , deterministic construction .
generalized langevin equations ( gle ) and non - markovian master equations , which arise in the treatment of systems interacting with environmental degrees of freedom , often have an integro - differential form . unlike ordinary differential equations , which can be readily solved using runge - kutta , predictor - corrector and other well - known numerical schemes , there are no general methods for solving equations of integro - differential type . here we show that these integro - differential equations can be converted to ordinary differential equations at the expense of introducing a new time variable which is treated as if it were of spatial type . [ similar schemes are employed to numerically solve the schrödinger equation for time - dependent hamiltonians and as analytical tools . there is also some resemblance to schemes for solving integro - differential equations of viscoelasticity . ] we then develop a numerical method based on this exact transformation and show that it can be used to accurately solve a variety of physically motivated examples . neglecting inhomogeneous terms resulting from noise , for simplicity , the generalized langevin equations for the position and momentum of a damped oscillator in one dimension can be expressed in the form where and are the mass and frequency of the oscillator and is the memory function . defining a space - like time variable and a function , it can be verified by direct substitution that and satisfy the following ordinary differential equations . here we have introduced a differentiable damping function ( with ) which plays a useful role in the numerical scheme we will introduce to solve the ordinary differential equations ( [ gle1 ] ) , ( [ gle3 ] ) and ( [ gle4 ] ) . [ note that . ] neglecting inhomogeneous terms , non - markovian master equations can be written in the form \( d\rho(t)/dt = -i[h_{\mathrm{eff}},\rho(t)] - \int_{-\infty}^{t} k(t,t')\,\rho(t')\,dt' \label{mastera} \) where \( \rho(t) \) is the time - evolving reduced density matrix of the subsystem , \( h_{\mathrm{eff}} \) is an effective hamiltonian , and \( k(t,t') \) is a memory operator . [ we employ units such that . ] defining an operator \( \chi(t,u) = f(u)\int_{-\infty}^{t} k(t+u,t')\,\rho(t')\,dt' \) , it can be verified by direct substitution that \( \rho \) and \( \chi \) satisfy the ordinary differential equations \( d\rho(t)/dt = -i[h_{\mathrm{eff}},\rho(t)] - \chi(t,0) \label{master1} \) and \( d\chi(t,u)/dt = f(u)\,k(t+u,t)\,\rho(t) + \partial\chi(t,u)/\partial u - \bigl(f'(u)/f(u)\bigr)\,\chi(t,u) \label{master2} \) . here is again a differentiable damping function such that . thus , the integro - differential langevin equations ( [ gle1])-([gle2 ] ) can be expressed in the ordinary differential forms ( [ gle1 ] ) and ( [ gle3])-([gle4 ] ) and the integro - differential master equation ( [ mastera ] ) can be expressed as the ordinary differential equations ( [ master1])-([master2 ] ) . to exploit these transformed equations as a practical numerical scheme we must discretize the variable on a grid of points so that the number of ordinary differential equations is finite . once this is achieved , the ordinary differential equations can be solved using standard techniques . we use an eighth - order runge - kutta routine in our calculations . to minimize the number of grid points we choose a damping function which decreases rapidly with . in the calculations reported here we used . in practice fewer grid points are needed for positive than for negative , and we found that the points for worked well when we chose . here is the largest positive value .
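as described in what follows, the partial derivative with respect to the space-like variable on this grid is represented by the sinc-dvr matrix. a minimal sketch of that representation (python with numpy; the grid size, spacing, and gaussian test function are arbitrary choices made here for illustration, not the values used in the calculations below):

    import numpy as np

    def sinc_dvr_first_derivative(npts, dx):
        # first-derivative matrix in the sinc-DVR basis on an equidistant grid:
        # D[j, k] = (-1)**(j-k) / (dx*(j-k)) for j != k, and 0 on the diagonal
        j = np.arange(npts)
        diff = j[:, None] - j[None, :]
        D = np.zeros((npts, npts))
        off = diff != 0
        D[off] = (-1.0) ** diff[off] / (dx * diff[off])
        return D

    # check against the analytic derivative of a gaussian, f'(u) = -2*u*f(u)
    dx, n = 0.1, 201
    u = dx * (np.arange(n) - n // 2)
    f = np.exp(-u ** 2)
    D = sinc_dvr_first_derivative(n, dx)
    print(np.max(np.abs(D @ f + 2 * u * f)))  # near machine precision here

the representation is spectrally accurate for functions that decay well inside the grid, which is why a modest number of points can suffice in a scheme of this kind.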
while accurate solutions can be obtained for almost any non - zero value of we found the most rapid convergence when values were optimized for the type of equation .hence , is specified differently below for each type of equation . to complete the numerical methodwe need a representation of the partial derivative with respect to on the grid .this could be performed via fast fourier transform techniques .we chose instead to employ a matrix representation which is known as the sinc - dvr ( discrete variable representation) . a discrete variable representation ( dvr )is a complete set of basis functions , associated with a specific grid of points , in which functions of the variable are diagonal and derivatives have simple matrix representations .dvrs are often used in multi - dimensional quantum mechanical scattering theory calculations . in the sinc - dvr , which is associated with an equidistantlyspaced grid on , partial derivatives can thus be evaluated with a sum for any function or operator . in our calculations we chose to equal the time interval between output from the runge - kutta routine . for the generalized langevin equation we chose an initial value problem ( i.e. for and for ) where has one of the following forms which are displayed graphically in figure 1 .the solid curve is ( [ mem1 ] ) , the dashed is ( [ mem2 ] ) , the short - dashed is ( [ mem3 ] ) and the dotted is ( [ mem4 ] ) .these memory functions were chosen to roughly represent the various functional forms which can occur physically and for ease in obtaining exact solutions .the constants appearing in equations ( [ gle1 ] ) , ( [ gle3 ] ) and ( [ gle4 ] ) are chosen as and .figure 2 shows the functional form of the exact solutions ( solid curve ) and ( dashed ) , which evolve from initial conditions and , for memory function ( [ mem1 ] ) over a timescale of 20 units with .solutions for the other memory functions ( and the same initial conditions ) are similar in appearance .these exact solutions were obtained by expoiting the fact that the above memory functions are sums of exponentials ( i.e. ) from which it follows that one may write for , and solve these ordinary differential equations using standard methods .this approach only works for memory functions of this type .the negative logarithm of the absolute error in , is shown in figure 3 plotted against time for the values of indicated in the inset .[ the error in is similar . ] as increases increases ( on average ) and hence the error decreases .the oscillations in are caused by periodic intersections of the two solutions . in practiceit is impossible to visually distinguish the two solutions when .note that after a short transient the error ( on average ) does not increase .this is probably a consequence of the linearity of these equations .some decline in accuracy with time should be expected when the langevin equations are non - linear ( e.g. 
a particle in a double - well ) .figure 4 compares the exact solutions for ( solid curve ) and ( short - dashed ) with those obtained using our method for ( dashed and dotted , respectively ) over a time of 40 units .no disagreement is visible .convergence for memory function ( [ mem2 ] ) is similar .memory functions ( [ mem3 ] ) and ( [ mem4 ] ) which take negative values and have long time tails require many grid points for convergence .figure 5 shows the negative logarithm ( base ten ) of the absolute error in for this case .while many grid points are required , high accuracy solutions can clearly be obtained using our method . for the master equation we chose an initial value problem consisting of a dissipative two - level system representing a spin interacting with environmental degrees of freedom .if the spin hamiltonian is and the coupling to the environment is proportional to then the equation for the density matrix is of the form \nonumber \\ & & -c\int_0^t w(t - t')\{\sigma_x^2\rho(t')+\rho(t')\sigma_x^2 - 2\sigma_x\rho(t')\sigma_x\}~dt'\nonumber\\ & & ~~~~\end{aligned}\ ] ] where the sigmas denote pauli matrices .parameters were set as and .we chose to define which differs somewhat from the general definition employed in ( [ smo ] ) .the transformed equations are then - 2c\{\chi(t,0)\nonumber \\ &-&\sigma_x\chi(t,0)\sigma_x\ } \label{stu1}\\ \frac{d\chi(t , u)}{dt}&=&e^{-gu^2}w(u)\rho(t)+\frac{\partial \chi(t , u)}{\partial u}\nonumber \\ & + & 2 g u~\chi(t , u)\label{stu2}.\end{aligned}\ ] ] theory predicts that the memory function for this problem is approximately gaussian in form .however , we were unable to obtain an exact solution of the master equation for this case. instead we approximate the gaussian via the similar function .exact solutions for and initial conditions and were obtained in the same way as for the generalized langevin equations and are plotted vs time in figure 6 . for the approximate method we used ^ 2 $ ] and for negative we set . from figure 7 where we plot against timewe see that convergence of the numerical method is very rapid for these equations .[ similar accuracies are achieved for and . ]thus , we have shown that accurate solutions of integro - differential equations can be obtained via transformation to a larger set of ordinary differential equations . because this transformation is exactwe expect that the method will also work for equations not considered in this manuscript .it should be possible to obtain accurate solutions for such equations via the following steps .first find an approximation of the memory function or operator which will allow exact solutions to be obtained .optimize the numerical method by finding the best for the model equations .finally , apply the numerical method to the original equations and look for convergence of the solutions with increasing .
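the benchmark strategy in the recipe above is easy to reproduce for an exponential kernel, where the memory integral itself collapses to a single auxiliary ordinary differential equation. a minimal sketch (python with scipy; the mass is set to one and all parameter values are hypothetical, not those used in the figures):

    import numpy as np
    from scipy.integrate import solve_ivp

    # GLE without noise:  x' = p,  p' = -w0**2 * x - I(t),
    # with I(t) = int_0^t K(t-s) p(s) ds and K(t) = c*exp(-lam*t).
    # Setting z(t) = int_0^t exp(-lam*(t-s)) p(s) ds gives z' = p - lam*z,
    # so the memory integral is replaced by one extra ordinary equation.
    w0, c, lam = 1.0, 0.5, 0.3  # hypothetical frequency and kernel parameters

    def rhs(t, y):
        x, p, z = y
        return [p, -w0 ** 2 * x - c * z, p - lam * z]

    sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
    print(sol.y[0, -1], sol.y[1, -1])  # damped oscillation of x(t) and p(t)

kernels that are sums of exponentials just add one auxiliary variable per term; the u-grid method described above is the general-purpose alternative when no such closed embedding exists.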
we show that integro - differential generalized langevin and non - markovian master equations can be transformed into larger sets of ordinary differential equations . on the basis of this transformation we develop a numerical method for solving such integro - differential equations . physically motivated example calculations are performed to demonstrate the accuracy and convergence of the method .
understanding the intrinsic mechanism of collective behaviors of coupled units has become a focus for a variety of fields , such as biological neurons , circadian rhythm , chemically reacting cells , and even social systems .some properties of collective behaviors depend on the complexity of the system , while the other properties , such as phase transition , may be described by low - dimensional dynamics with macroscopic variables . discovering the method to simplifythe system is just as important and as fascinating as the discovery of the complexity of it .like most cases in physics , simplification and low - dimensional reduction are associated with some symmetry of the system .as the identity of gas particles is the foundation of statistical mechanics and collective variables as temperature and pressure , the identity of the coupled units in a complex system is also related to some order parameters . in previous works , in the limit of large number of oscillators with special coupling function this work has been done by ott - antonsen(oa ) ansatz for oscillators with nonidentical parameters . as for the oscillators with identical parameters the same resultwas also got from group theory analysis in called watanabe - strogatz s approach . in this paper, we will show a different way in which the low - dimensional reduction is a natural consequence of the symmetry of the system by taking the order parameters as collective variables , namely the order parameter analysis .our approach is simple and concise for oscillators with both identical and nonidentical parameters .we can also get the scope of the oa ansatz. two more cases beyond the scope are further discussed with appropriate approximations . with our approachwe show that the oa ansatz can be used beyond its scope with the approximations works .the model discussed in this paper is the all - connected coupled phase - oscillators . in the first section , the dynamical equations for order parameters are derived .the oa ansatz is got naturally from the symmetry of the order parameter equations , with its scope as the limit of infinitely many oscillators and the condition that only three fourier coefficients of the coupling function are nonzero . in the second and third sections, we will consider two approximate use of this ansatz beyond its scope , the case of a finite - size system and the case of coupled oscillator systems with more complicated coupling functions , with approximations respectively , i.e. , the ensemble approach and the dominating - term assumption .the famous kuramoto model for the process of synchronization attracts much attentions upon it is proposed and has been developed for decades .this model consists of a population of n coupled phase oscillators with natural frequencies , and the dynamics are governed by with the mean - field coupling and the definition of order parameter the equations eq .( [ e1 ] ) can be rewritten as where is the imaginary unit , is the complex conjugate of .apart form parameters and , the dynamics of each phase variable depends only on itself and .it is the important character of this mean - field model , and the order parameter is always used to describe the state of the system , as the system is in synchronous states if and only if . a more general form of the mean - field model with phase oscillators can be written as where is any smooth , real , -periodic function for . 
is the order parameter with the order parameter defined as is the identical parameter such as the coupling strength which is identical for all the oscillators . is the nonidentical parameter such as the natural - frequency which is nonidentical for the oscillators and usually has a distribution among the system .almost all the mean - field models based on the kuramoto model belong to this general category . in the following, we will build our approach for this general model to explore the conditions in which we can get the low - dimensional description of the system eq .( [ ew1 ] ) . to begin with ,let us consider the simple case of identical oscillators , e.g. , for in the kuramoto model , which reads in the limit of infinitely many oscillators , let denote the fraction of oscillators that lie between and . because each oscillator in eq.([e2 ] ) moves with the angular velocity , the single oscillator density obeys the continuity equation as if the phase density is known , all of macroscopic properties of the system can be got through some statistical average , such as order parameters as if we are only concerned with the collective or macroscopic state of the system , as in this paper , the macroscopic description of the phase density is equivalent to the microscopic description of phases of all the oscillators .moreover , for the coupling function is always -periodic for , we have the fourier expansion of , with the the definition eq .( [ e3 ] ) , the dynamics of order parameters can be got as where . substituting the expansion of into these equations, we have the closed equations for the dynamics of order parameters as on the other hand , from eq .( [ e3 ] ) the order parameters are exactly fourier components of the phase density .if all the order parameters are known , we have hence if we know all the initial values of order parameters , with the dynamical equations eq .( [ e4 ] ) and the fourier transformation eq .( [ e5 ] ) , the system is identified , which is the same as the dynamical equations eq .( [ e2 ] ) for phase variables and the continuity equation eq .( [ ea1 ] ) for the phase density .therefore , we can choose either the order parameters , the phase variables or the phase density to perform the analysis of the collective behaviors of the coupled phase oscillators .these three descriptions of the system , i.e. phase variables , density of phase and order parameters , correspond to the dynamical , statistical and macroscopic descriptions of the system , respectively , and they are equivalent to each other in the limit of infinitely many oscillators . as a matter of fact , the ott - antonsen ansatz is based on the representation of the density of phase , and the watanabe - strogatz s approach is based on the scenario of the phase variables . 
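the definitions above are straightforward to realize numerically in a finite-N setting. a minimal sketch of the mean-field model eq. ([e1]) together with its order parameter (python with numpy; the system size, coupling strength, and lorentzian frequency width are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, dt, steps = 2000, 3.0, 0.01, 20000
    omega = rng.standard_cauchy(N)        # lorentzian natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)  # random initial phases

    for _ in range(steps):                # forward-Euler step of eq. ([e1])
        z = np.mean(np.exp(1j * theta))   # order parameter r*exp(i*psi)
        theta += dt * (omega + K * np.imag(z * np.exp(-1j * theta)))

    print(abs(z))  # settles near sqrt(1 - 2/K) ~ 0.58 for this distribution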
in the following , we will take our approach on the base of order parameters .the complexity of dynamical equations for order parameters eq .( [ e4 ] ) depends on the coupling function , or explicitly , on the fourier expansion of .first , let us consider the simplest case , with only the first three nonzero terms of the fourier expansion , because is a real function , we have where means the complex conjugate of .( [ e6 ] ) , the dynamics of order parameters eq .( [ e4 ] ) becomes with and .the recursion form of these equations shows that there is some structure for order parameters with which we can simplify the system and get some low - dimensional equations for the high - dimensional coupled oscillator dynamics of the system .in fact , by choosing an invariant manifold as , all the equations for is reduced to for .the infinitely many dynamical equations of order parameters is thus reduced to a single equation on this manifold and the corresponding phase density defined by eq .( [ e5 ] ) is the so - called poisson kernel distribution as where .if we choose the initial state on this manifold , the state will never evolve out of it , which is called the invariant manifold of the dynamical system eq .( [ e7 ] ) .this is exactly the low - dimensional behavior of the system we are looking for . with our approach , the manifold is derived naturally and concisely .we will call this manifold the poisson manifold in this paper as in where the dynamics of , as eq .( [ ew6 ] ) , is got from the group theory analysis for josephson junction arrays .furthermore , the low - dimensional behavior is not confined to the system of identical oscillators .the order parameter analysis we used above can also be used in a more general case of nonidentical oscillators with nonidentical parameters .firstly , let us consider a system of nonidentical oscillators with a discrete distribution of the parameter .this distribution naturally separates the oscillators into groups , and each group has the same parameter . in this case , the phase variable for oscillators is denoted by which means the oscillator with the same parameter . for each group we can define a local order parameter as where is the number of oscillators in the group with .the order parameter of the system denoted by is defined by assume that the coupling function for each oscillator depends only on the order parameters whether for the whole system or the groups , then equations for phase oscillators are where . and are the local and global order parameters , respectively . asthe coupling function is -periodic for , by performing fourier expansion of , the equations for order parameters of each group read where , and .the equations eq . ( [ e8 ] ) are closed for local order parameters , and the local order parameters here can be regarded as the coordinates of the system which are equivalent to phase variables .each group is almost separated from others and the only relation for the groups is the dependence of the coupling function on order parameters of the system . in the simplest casewhen only the first three terms of fourier expansion are nonzero , we have with and . 
for each group, is obviously a solution for the equations which reduce all the equations for to the same one as where .the poisson manifold is an invariant manifold for each group , and the order parameters of the system read which is determined by .therefore , the fouriercoefficients of the coupling function depends on only .the system described by the local order parameters is governed by a group of poisson manifolds with the low - dimensional dynamics eq .( [ e9 ] ) . in the limit with and , the distribution of parameter as for has the continuous form denoted by the function and the local order parameters become the function of , i.e. , . replacing the summation by the integral over , we get the continuous form of the dynamical equations as where are functionals of as even though for each specific the approach has already reduced the dynamics of the oscillators to the dynamic of as eq .( [ e10 ] ) , it is still hard to get any analytical results as eq .( [ e10 ] ) depends on the integral eq .( [ e11 ] ) which can not be expressed by functions of . on the other hand , for some specific choice of , the integral eq .( [ e11 ] ) can be obtained analytically , in which case we can get the simpler form of eq .( [ e10 ] ) .what we are looking for is a two - step reduction .the first step is finished by introducing the poisson manifold with which we have reduced the dynamic of phase oscillators to the dynamic of a group of poisson manifolds , i.e. , eq .( [ e10 ] ) .the second step is rooted in the relation between the poisson manifolds in the group which will reduce the dynamics of the group further to the dynamic of a specific poisson manifold in this group .for instance , in the case that the function is analytical which can be extended to the complex plane , and the distribution of is the lorentzain distribution as the integral eq .( [ e11 ] ) can be obtained by residue theorem as .setting , eq .( [ e10 ] ) becomes which is closed for order parameter .then , the behavior of , together with the collective behaviors of the system , is determined by eq .( [ e12 ] ) .another solvable example is the case when the distribution of natural frequencies of oscillators is the dirac s delta distribution function .the integral eq .( [ e11 ] ) can be worked out as .setting in eq .( [ e10 ] ) , the model is reduced to the network of identical oscillators which we discussed above .the approach shown above indicates there are two steps of the reduction scheme .the first step is the reduction from the equation eq .( [ e8 ] ) to eq .( [ e9 ] ) or the continuous form eq .( [ e10 ] ) , which means the choice of the poisson manifold of the system .this is related to the the symmetry of the system or the recursion form of order parameters equations .the second step is the further operation from eq .( [ e10 ] ) to eq .( [ e12 ] ) , which depends on the special choice of distribution of nonidentical parameters .this reduction with the lorentzain distribution was first found in with an ansatz for low - dimensional manifold , namely the oa ansatz .for the case of the dirac s delta distribution function , this reduction was discussed comprehensively in with group theory analysis of the system . in our approach ,the reduction comes from the same basis , i.e. , the symmetry of order parameter equations . 
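the poisson manifold also has a direct empirical signature: phases drawn from the poisson kernel (a wrapped cauchy density) obey z_n = (z_1)^n for every n up to sampling error. a short check with an arbitrary choice of the manifold coordinate (python with numpy):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, rho, N = 0.7, 0.6, 200_000   # manifold coordinate alpha = rho*exp(i*mu)
    gamma = -np.log(rho)
    # wrapping a cauchy(mu, gamma) variate onto the circle samples the
    # wrapped-cauchy / poisson-kernel density whose first moment is alpha
    theta = (mu + gamma * np.tan(np.pi * (rng.random(N) - 0.5))) % (2 * np.pi)

    alpha = rho * np.exp(1j * mu)
    for n in range(1, 5):
        z_n = np.mean(np.exp(1j * n * theta))
        print(n, abs(z_n - alpha ** n))  # residuals of order 1/sqrt(N) only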
along the approach , we can also consider more nonidentical parameters , such as the location of oscillators for example in , where the coupling function depends on the locations as with the distributions and .the order parameter of the system is defined as when the fourier expansion of the coupling function for contains only the first three terms , for each specific and , the relation is exactly the solution for dynamical equations , which reads if is analytic for and is the lorentzain distribution , the integral eq .( [ e13 ] ) can be obtained for as where .setting and in the dynamical equation eq .( [ e14 ] ) , we have where is the order parameter of the system and the functional of as the solution of this integral differential equation eq .( [ e15 ] ) describes the structure of the local order parameter along . up to now, we have discussed the system of all - connected phase oscillators in the limit of infinitely many oscillators and the case that only three fourier coefficients of the coupling function are nonzero .the low - dimensional invariant manifold , namely the poisson manifold , is got for both cases .we will see in the next two sections that these two conditions are exactly the scope of the oa ansatz .two cases beyond this cope will be discussed respectively .we have considered the general model eq .( [ e2 ] ) in the limit of infinitely many oscillators , , in which we could get the phase density of oscillators and the corresponding continuity equation eq .( [ ea1 ] ) .order parameters could be considered as fourier coefficients of , from which we get the equivalent expression of the dynamics of the system as the order parameter equations eq .( [ e4 ] ) and build the approach above . in the limit , the macroscopic variable , namely the order parameter , has the limit for steady states as the synchronous state and incoherent states , which gives us the basis to discuss the system analytically . in the case of a finite but large number of oscillators ,i.e. , , the order parameter defined as will fluctuate around the value .this fluctuation depends on the number of oscillators as .when the fluctuation is small enough , as for , the collective behaviors of the system with finite number of oscillators could be described by the approximation of infinitely many oscillators , for which analytical methods can be applied .some analytical methods , e.g. , the self - consistent method and the oa ansatz , are applicable for the case of infinitely many oscillators or equivalently the phase density description. on the other hand , in the case of only a few number of oscillators , e.g. , , the difference would be so large , as , whose magnitude comparable with the value of . obviously , it is not reliable to treat the system with the approximation in this case .it is necessary to propose new approaches in analytically dealing with the collective behaviors of finite - oscillator systems . in our approach , order parameters could be considered as not only the fourier coefficients of but also the collective variables as which does not depend on the approximation of infinitely many oscillators . 
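before turning to the cases beyond this scope, the lorentzian reduction can be verified directly. for the kuramoto coupling with a lorentzian frequency distribution of half-width Delta centred at w0, the closed equation eq. ([e12]) takes the familiar form dz/dt = (i*w0 - Delta)*z + (K/2)*z*(1 - |z|^2), and its prediction tracks a full finite-N simulation. a sketch with arbitrary parameters (python with numpy and scipy):

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(2)
    N, K, Delta, w0, dt, T = 2000, 3.0, 1.0, 0.0, 0.01, 15.0
    omega = w0 + Delta * rng.standard_cauchy(N)   # lorentzian frequencies
    theta = rng.uniform(0, 2 * np.pi, N)

    for _ in range(int(T / dt)):                  # full N-oscillator run
        z = np.mean(np.exp(1j * theta))
        theta += dt * (omega + K * np.imag(z * np.exp(-1j * theta)))

    def oa(t, y):                                 # reduced dynamics of z
        zc = y[0] + 1j * y[1]
        dz = (1j * w0 - Delta) * zc + 0.5 * K * zc * (1 - abs(zc) ** 2)
        return [dz.real, dz.imag]

    # random initial phases give |z(0)| of order N**-0.5, so seed comparably
    sol = solve_ivp(oa, (0, T), [0.02, 0.0], rtol=1e-8)
    print(abs(z), np.hypot(sol.y[0, -1], sol.y[1, -1]))  # both near 0.577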
for the system as with the definition of order parameters( [ ew7 ] ) , the dynamical equations for order parameters are for the coupling function is -periodic for , we have the fourier expansion as substituting the expansion into eq .( [ e16 ] ) , together with eq .( [ ew7 ] ) , we have the closed form of equations for order parameters as which is the same as we get in the limit of infinitely many oscillators . in the case of a few number of oscillators , the definition of order parameterscan also be considered as the coordinate transformation that transforms the microscopic variables to macroscopic variables .therefore it is inspiring that the dynamics of phase oscillators with corresponding initial conditions is equivalent to the dynamics of order parameters with corresponding initial conditions .the number of oscillators , no matter finite or infinite , has no influence on the dynamical equations of order parameters .this forms the important basis of our approach . following this approach ,when only the first three fourier coefficients of the coupling function are nonzero , by setting and substituting the relation to the dynamical equations , the equations eq . ( [ e17 ] ) for all the will be reduced to a single one as it appears that we could get the poisson manifold again even the size of the system is finite .however , the finite - size effect should be taken into account with care . as a matter of fact , in the case of a finite number of oscillators ,the poisson manifold with is not attainable for the system . to see this ,take the system of as an example .for the first two order parameters and , by definition if the system described by is in the poisson manifold , the relation could be rewritten in terms of the definition eq .( [ ew7 ] ) as however , the equality eq .( [ e18 ] ) is not naturally valid . by using a simple calculation of the difference between the left - hand and the right - hand terms in eq .( [ e18 ] ) one obtains this indicates that the system can evolve on the poisson manifold only when . in general , the relation gives exactly infinitely many independent constraints like eq .( [ e18 ] ) with , and any solutions for finite oscillators will be determined as the trivial one as for all the index , which means that except the synchronous state , all the states of the system of finite oscillators lie out of this poisson manifold .the system can only evolve on the poisson manifold in the limit of infinitely many oscillators .whereas , as a matter of fact , the equations eqs .( [ e17 ] ) hold for all , whether the size of system is finite or infinite , and the synchronous state always satisfies the relation of the manifold .the poisson manifold can be used at least approximately in the vicinity of synchronous state .however , we need some new methods and approximations .let us consider an ensemble with identical systems of oscillators which have the same dynamical equations and same parameters .the initial phases of oscillators are chosen from the same distribution , which makes the mean values of order parameters over the ensemble have the limit when .this leads to the definition of the ensemble order parameter as where is the number of sampling systems in the ensemble and is the order parameter for the system in the ensemble .taking the ensemble average for both sides of the dynamical equations eq .( [ e17 ] ) , we get where , and means ensemble average . 
in general eq .( [ e20 ] ) is not solvable because the terms in the right side as can not be simply described by in general .however , if is independent of all the systems in the ensemble as , then we can get exactly the dynamical equations of the ensemble order parameters , as where and the terms may depend on .similar to eq .( [ e17 ] ) , there is a solution for the ensemble order parameter equations eqs .( [ e21 ] ) as , which is exactly the poisson manifold . moreover , for more general cases , but when the terms can be approximated by , namely the statistical independence , we can also get the dynamical equations of the ensemble order parameters with similar approach .the validity of this approach can be measured by the error terms with and . in the following ,let us take the star sakaguchi - kuramoto model as an example , which is a typical topology and model in grasping the essential properties of heterogeneous networks and synchronization process , as where , and are the natural frequency and the phase of hub and leaf nodes respectively , is the coupling strength and is the phase shift . by introducing the phase difference , the dynamical equation can be transformed into where is the natural - frequency difference between hub and leaf nodes . by introducing the order parameter , we could rewrite the dynamics as where , .hence the system is in the framework which we discussed above , with the first three nonzero fourier coefficients of the coupling function . with our approach ,the dynamical equations for order parameters can be obtained as in this specific model , we can not simply make the approximation as because diverges with .there are two ways in further simplifying the system .first , we could set a hypothetical system as where , is the hypothetical phase oscillator and is considered as a parameter in this system . defining the order parameter for this system as , eq .( [ e24 ] ) becomes where , .following our approach , the corresponding order parameter equations read which are exactly the same as eq .( [ e23 ] ) .but for this hypothetical system , we could take the limit of infinitely many oscillators as , which could be discussed analytically with the traditional oa anstaz . obviously , these two systems are quite different for phase variables , one with finite oscillators the other with infinitely many oscillators , but they have the same order parameter equations . in the following we will see that the hypothetical system eq .( [ e24 ] ) is exactly a representation of the ensemble of the original model eq .( [ ea2 ] ) .on the other hand , following our ensemble approach , for the system eq .( [ ea2 ] ) , let us choose an ensemble consisting of systems with the same parameters and , and different initial conditions chosen from the same distribution , as the poisson kernel .we have the ensemble average of the dynamical equation as where . setting for .if and , or typically and , they can be seen as the perturbation terms for the dynamics of ensemble order parameters in eq .( [ e26 ] ) . ignoring the perturbations, we could get and this is the same as eq .( [ e25 ] ) for hypothetical system . in this case , the ensemble order parameters are the same as the order parameters for the hypothetical system . for the case of a finite but large number of oscillators ,the hypothetical system is exactly the continuous model which shares the same order parameter equations but with infinitely many oscillators . 
as in the case of a few oscillators ,the hypothetical system is a presentation of the ensemble for the systems , where the difference terms and by this method will not only introduce some fluctuations but also some systematic errors , which can be described by with for this specific model . in terms of numerical computation, we can check the above approximation by examining and for the ensemble of systems .the approximation and can be divided into two parts as and where and are the fluctuation parts proportional to which depend on the size of the ensemble and and are the systematic parts which are introduced by the statistical independence assumption .take as an example .when the fluctuation parts are ignorable . we find that except for is around , all of and for are much smaller than in this case as plotted in fig .[ fig:2](a ) and ( b ) . comparing the typical magnitude of values of order parameters as ,the approximation of the ensemble approach is obviously reasonable . for the initial conditions chosen from the poisson kernel , with the dynamical equations eq .( [ e27 ] ) , the ensemble order parameters will evolve on an invariant manifold , i.e. , the poisson manifold with .denote by , we have the difference between and the ensemble average of numerical simulations is checked in fig .[ fig:2](d ) , which shows that it is reasonable to describe the behavior of the ensemble by .moreover , for every single system in the ensemble , with some fluctuation , it can also be described by the ensemble order parameter and hence by , as shown in fig . [ fig:2](c ) . and versus time for the system with .( c ) and ( d ) order parameters getting from numerical simulation and approximated oa ansatz .results from approximated oa ansatz is the red line , and every single simulation results is the blue lines in ( c ) with ensemble average the light blue line in ( d).,height=264 ] with this ensemble assumption works , the two - dimensional equation eq .( [ ea5 ] ) in the bounded space as describes the dynamics of the coupled oscillator system .every stationary state for phase oscillators system has its counterpart in this space for .for example , the in - phase state(ips ) defined as corresponds to a limit cycle as .the synchronous state ( ss ) defined as corresponds to a fixed point with and .the splay state(sps ) defined as with the period of corresponds to a fixed point with and . in the two - dimensional order - parameter space, it is easy to analyze the existence and stability conditions of these states , and the basion of attraction for each state in the coexistence region can also be conveniently determined .this is the reason why we introduce the invariant manifold governed by low - dimensional dynamics .as we see , in the case with only finite oscillators , the ensemble order parameter plays the dominant role in revealing the key collective structure of the system .the price of the ensemble order parameter description is the error induced by the statistical independence assumption , whose validity can be checked by numerical results . 
in the next sectionwe will see that when more than three fourier coefficients of the coupling function are nonzero , where the traditional order parameter scheme fails , one can still discuss the collective dynamics in terms of the ensemble order parameter approach .in the above sections , for the coupling functions with only first three terms as we have the dynamical equations for order parameters as on the invariant manifold , all these equations are reduced to a single one for general situations , the coupling function can not be simply truncated to only the first terms .it is thus important to consider the order parameter approach for more complicated cases .firstly , let us consider a coupling function with higher order fourier coefficients like where is an integer and are the order parameters for the system .note that eq .( [ e29 ] ) can be regarded as the transformation of eq .( [ ea3 ] ) with and .the order parameters for is related to the order parameters for as and the phase density of with the transformation reads this indicates that all the terms with are zero . moreover , we can transform the poisson manifold for to the invariant manifold for , which is governed by corresponding to eq .( [ ea4 ] ) for .this is exactly the low - dimensional manifold for the coupling function eq .( [ e29 ] ) with higher order fourier coefficients .this manifold is based on the order parameter and the statement that all the terms with are zero .this is a bit queer in this respect .let us take the case as an example .in this case only and among the fourier components of the coupling function are nonzero , i.e. , according to the above analysis , the system has the invariant manifold as , or equivalently $ ] , with dynamics the phase density of oscillators in this case is by using our order - parameter - analysis approach , for the case of , it is not hard to get the the equations of order parameters as with .the manifold with is obviously the solution of these equations .this invariant manifold separates the relation into two parts , where the part conserves the relation , and the odd part can be regarded as a special choice of the relation . even if in the state defined as , we still have , which represents the state of cluster synchrony as discussed in . and the manifold with can be regarded as a sub - manifold of the poisson manifold .an interesting issue here is the reason why we set all the other than . to see this, we just substitute the relation into the odd part of eq .( [ e31 ] ) , then the equations will be reduced to the following equations , where the first equation is got from in eq .( [ e31 ] ) , and the second equation is got from all the others in eq .( [ e31 ] ) . comparing the two equations in eq .( [ ew4 ] ) , the consistent condition requires , and this implies that .hence , there is nt a general relation of , apart from the vicinity of synchronous state .thus we should consider another special solution for the odd part of eq .( [ e31 ] ) as . on the other hand , for the even part of eq .( [ e31 ] ) , by setting , the equations will be reduced to a single one as the conjugate part is separated from naturally in this equation , and we can get the low - dimensional manifold governed by the order parameter equation eq .( [ ew5 ] ) , as a sub - manifold of the poisson manifold . as mentioned above ,the poisson manifold no longer exists when the order parameter relation can not be used to reduce the order parameter equations to the single one . 
for the case of only higher termsthis problem can be fixed by separating the order parameters into groups and setting some of them zero with which a special sub - manifold of the poisson manifold can be used to get low - dimensional behaviors . however , when more than three terms in fourier expansion are considered , even the sub - manifold on longer exists .consider a coupling function with the first five nonzero fourier coefficients , as the corresponding equations of order parameters read now with , . in this case , if we suppose , then two different dynamical equations will be obtained hence we will have as the requirement of coincidence of these two equations .the relation of is broken by the conjugate terms and there is no way in finding out a separated solution as we did for eq .( [ e31 ] ) .the only solutions with the relation for this system seems to be either or .none of them are expected for our analysis , unless in the vicinity of synchronous state with or in the vicinity of the incoherent state with . on the other hand , for all the dynamical equations for the order parameter in eq .( [ e32 ] ) are reduced to the same equation by , and the symmetry of eq .( [ e32 ] ) can be represented by at least for all the order parameter equations for . moreover , due to the bound , the higher the order components become smaller with increasing . if we choose the initial conditions as , then along the evolution of eq .( [ e32 ] ) , the differences of higher terms with will keep small enough . by ignoring the differences and making the approximation that , we can obtain the dynamical equations for as which can be regarded as the main term of the dynamics eq .( [ e32 ] ) . from now on we call this approach _ dominating - term assumption_. we will further show that this approximation is pretty efficient in dealing with systems with complicated coupling functions . let us take the system described by eq .( [ ea2 ] ) as an example . by considering higher terms, we have the dynamical equations where and , , with . by introducing the dominating - term assumption ,the dynamical equation for reads if the dominating - term assumption works , eq .( [ e33 ] ) can be used to study the collective behaviors of the coupled oscillators , which can be regarded as a sort of approximate low - dimensional behaviors .the validity of the dominating - term assumption can be checked via numerical simulations . for the initial state chosen from the poisson kernel distribution , by using numerical simulation, we have the order parameter from the continuity equation of the phase density and the approximate manifold . in fig . [ fig:5](a ) the order parameter is plotted .it can be found that the result in terms of the dominating - term approximation coincides very well with numerical results . in fig .[ fig:5](b ) , the density of oscillators at a given time is also plotted , and the result shows that though the distribution is indeed no longer the poisson kernel distribution , it is instead close to it .this gives us the mechanism of how this assumption works . ) ( blue line ) .( b ) distribution of phase variables getting form numerical simulation ( red ) and reconstituting poisson kernel with order parameter ( blue ) .the system considered as ,height=132 ] according to numerical results , we can see that the dominating - term assumption works pretty well , and the system shows approximated low - dimensional behaviors . 
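the dominating-term closure is easy to test numerically. for identical oscillators with phase velocity theta' = sum_k h_k exp(i*k*theta), the definition of z_n gives the exact hierarchy dz_n/dt = i*n*sum_k h_k z_{n+k} (with z_0 = 1 and z_{-m} the complex conjugate of z_m), and the closure replaces z_m by (z_1)^m in the n = 1 equation. the sketch below uses hypothetical five-coefficient values, not the paper's (python with numpy and scipy):

    import numpy as np
    from scipy.integrate import solve_ivp

    h = {0: 0.1, 1: 0.4 + 0.2j, 2: 0.15j}  # hypothetical; h_{-k} = conj(h_k)
    rng = np.random.default_rng(3)

    # initial phases on the poisson manifold with z_1(0) = rho*exp(i*mu)
    mu, rho, N = 0.0, 0.5, 50_000
    theta = (mu - np.log(rho) * np.tan(np.pi * (rng.random(N) - 0.5))) % (2 * np.pi)

    def vel(th):  # real phase velocity sum_k h_k exp(i*k*th)
        s = h[0] + sum(h[k] * np.exp(1j * k * th)
                       + np.conj(h[k]) * np.exp(-1j * k * th) for k in (1, 2))
        return s.real

    dt, T = 0.01, 10.0
    for _ in range(int(T / dt)):          # direct many-oscillator simulation
        theta += dt * vel(theta)
    z_full = np.mean(np.exp(1j * theta))

    def closed(t, y):  # dominating-term closure of dz_1/dt
        z = y[0] + 1j * y[1]
        dz = 1j * (np.conj(h[2]) * np.conj(z) + np.conj(h[1])
                   + h[0] * z + h[1] * z ** 2 + h[2] * z ** 3)
        return [dz.real, dz.imag]

    sol = solve_ivp(closed, (0, T), [rho * np.cos(mu), rho * np.sin(mu)], rtol=1e-9)
    print(z_full, sol.y[0, -1] + 1j * sol.y[1, -1])  # compare the two results

with h_2 set to zero the two results coincide up to sampling error, since the closure is then exact; the residual difference with h_2 nonzero is the price of the assumption.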
with the dynamical equations for the first order parameter eq .( [ e33 ] ) , we can do the study further to find the property of collective behaviors of the system , which will be discussed in detail in another paper .in this paper , we focused on the theoretical description of low - dimensional order - parameter dynamics of the collective behaviors in coupled phase oscillators .we have derived the closed form of dynamical equations for order parameters , from which we find the poisson kernel as an invariant manifold and the well - known oa ansatz .different from the traditional ott - antonsen ansatz and watanabe - strogatz s approach , our approach is suitable for systems of both finite and infinite , identical and nonidentical oscillators . in our approach , the scope of the oa ansatz is determined as two parts , i.e. , the limit of infinitely many oscillators and the condition that only the first three fourier coefficients of the coupling strength are nonzero . by using our order parameter analysis , we also discussed two cases that go beyond the scope of the oa ansatz , i.e. , the case of a finite - size system and the case of coupled oscillator systems with more complicated coupling functions .we have discussed the reasons why the oa ansatz can not be used directly to these two cases and we further developed the approximation methods to deal with these difficulties with the oa ansatz .we developed two methods , namely the ensemble method and dominating - term assumption .it is shown that these schemes work pretty well , and their validity has been checked by the numerical simulations .for the case of a finite number of coupled oscillators , it is shown that the system is out of the domain of low - dimensional manifold of the coupled order parameter equations .hence , from the view of the geometry theory of manifolds , the finite size of system brings some fluctuations along the manifold which is introduced from initial states .if the dynamics of order parameter equations is not too complicated and the manifold satisfying can be used to describe the dynamics approximately , all the trajectories from different initial conditions around the poisson manifold will be described by the mechanism of this manifold approximately .this is the meaning of the ensemble method and the reason for why it works in some models . in other case , with a more complicated coupling function ,the poisson manifold is not a solution of the dynamical system .however , for the infinitely many coupled equations for , except for few of them , the relation indeed reflects the symmetry of these order parameter equations .when we choose the initial states in this manifold , the evolution of the dynamics will certainly be influenced by the symmetry of it , which is described by approximately .the method with dominating - term assumption is indeed designed to show this effect , and consequently the system is simplified by it for some models . in this paper , we have developed the order parameter analysis , with which we get the low - dimensional behaviors of coupled phase oscillators , such as the oa ansatz , the ensemble order parameter approach and the dominating - term approximation . 
however , to fully understand the low - dimensional collective behaviors we need a more comprehensive understanding of the poisson manifold and its relation with the dynamics of the system , which is still an open question .we believe that the order parameter analysis should be a powerful tool in helping to reveal the mechanism of low - dimensional collective behaviors .this work is partially supported by the national natural science foundation of china ( grant no . 11075016 and 11475022 ) .kuramoto , y. _ chemical oscillations , waves and turbulence . _. 7576 ( springer , berlin , 1984 ) .acebron , j. a. , bonilla , l. l. , vicente , c. j. p. , ritort , f. & spigler , r. the kuramoto model : a simple paradigm for synchronization phenomena .phys . _ * 77 * , 137185 ( 2005 ) .strogatz , s. h. from kuramoto to crawford : exploring the onset of synchronization in populations of coupled oscillators ._ physica d _ * 143 * , 120 ( 2000 ) .pikovsky , a. , rosenblum , m. & kurths , j. _ synchronization : a universal concept in nonlinear sciences ._ pp . 279296 ( cambridge university press , cambridge , england , 2001 ) .dorogovtsev , s. n. , goltsev , a. v. & mendes , j. f. f. critical phenomena in complex networks .phys . _ * 80 * , 1275 ( 2008 ) .arenas , a. , diaz - guilera , a. , kurths , j. , moreno , y. & zhou . c. synchronization in complex networks .rep . _ * 469 * , 93153 ( 2008 ) .zheng , z. , hu , g. & hu , b. phase slips and phase synchronization of coupled oscillators . _ phys .lett . _ * 81 * , 53185321 ( 1998 ) .watanabe , s. & strogatz , s. h. constants of motion for superconducting josephson arrays ._ physica d _ * 74 * , 197253 ( 1994 ) .ott , e. & antonsen , t. m. low dimensional behavior of large systems of globally coupled oscillators ._ chaos _ * 18 * , 037113 ( 2008 ) .marvel , s. a. , mirollo , r. e. & strogatz , s. h. identical phase oscillators with global sinusoidal coupling evolve by mbius group action ._ chaos _ * 19 * , 043104 ( 2009 ) .marvel , s. a. & strogatz , s. h. invariant submanifold for series arrays of josephson junctions ._ chaos _ * 19 * , 013132 ( 2009 ) .omelchenko , o. e. & wolfrum , m. nonuniversal transitions to synchrony in the sakaguchi - kuramoto model .lett . _ * 109 * , 164101 ( 2012 ) .watanabe , s. & strogatz , s. h. integrability of a globally coupled oscillator array .* 70 * , 2391 ( 1993 ) .laing , c. r. the dynamics of chimera states in heterogeneous kuramoto networks ._ physica d _ * 238 * , 15691588 ( 2009 ) .daido , h. scaling behaviour at the onset of mutual entrainment in a population of interacting oscillators ._ j. phys . a : math .gen . _ * 20 * l629l636 ( 1986 ) .daido , h. intrinsic fluctuation and its critical scaling in a class of populations of oscillators with distributed frequencies .* 81 * 727731 ( 1989 ) .daido , h. intrinsic fluctuations and a phase transition in a class of large populations of interacting oscillators ._ journal of statistical physics _* 60 * 753800 ( 1990 ) .skardal , p. s. , ott , e. , restrepo , j. g. cluster synchrony in systems of coupled phase oscillators with higher - order coupling .* 84 * , 036208 ( 2011 ) .xu , c. , gao , j. , sun , y. , huang , x. , & zheng , z. , explosive or continuous : incoherent state determines the route to synchronization ._ scientific report _ * 5 * , 12039 ( 2015 ) .
coupled phase - oscillators are important models related to synchronization . recently , the ott - antonsen ( oa ) ansatz was developed and used to obtain low - dimensional collective behaviors in coupled oscillator systems . in this paper , we develop a simple and concise approach based on the equations of order parameters , namely , order parameter analysis , with which we point out that the oa ansatz is rooted in the dynamical symmetry of the order parameters . with our approach the scope of the oa ansatz is identified as two conditions , i.e. , infinite size of the system and only three nonzero fourier coefficients of the coupling function . for each of these conditions , a representative system outside the scope is taken into account and discussed with the order parameter analysis . two approximation methods are introduced respectively , namely the ensemble approach and the dominating - term assumption .
the principal role of chaperones is to assist in the resolution of the multitude of alternative misfolded structures that rna readily adopts , so that sufficient yield of the native material is realized in biologically viable time , less than . because the spontaneous yield of the native state of large ribozymes even at high mg concentrations is small , it is likely that _ in vivo _ rna chaperones are required to boost the probability of reaching the folded state within . unlike the well - studied bacterial groel - groes , a well - identified " one - fit - all " chaperonin system for processing cytosolic proteins , the protein cofactors that act as rna chaperones vary from one rna to the other . based on a number of experiments ( see for reviews ) we classify the client rna molecules into two classes depending on the need for the rna chaperones to utilize the free energy of atp hydrolysis in facilitating folding . 1. folding of class i rna molecules is greatly aided by interactions with protein cofactors , although their assistance may not be strictly required . these rna molecules are not stringent rna substrates . for example , the splicing reaction of the mitochondrial bi5 group i intron is activated at 50 mm or greater mg concentration at room temperature , but interactions with cytochrome b pre - mrna processing protein 2 ( cbp2 ) or _ neurospora crassa _ mitochondrial tyrosyl trna synthetase ( cyt-18 ) enable splicing at physiological levels ( mm ) of mg by enhancing folding of the bi5 core . 2. the _ tetrahymena _ ribozyme and other group i introns belong to the stringent class ii substrates . spontaneous folding , even at high counterion concentration , occurs too slowly and with too low a yield of the native state to be biologically viable . at high temperatures , folding of the misfolded _ tetrahymena _ ribozyme is aided by formation of a ribonucleoprotein ( rnp ) assembly with the promiscuously interacting cyt-18 , which in essence follows the mechanism of passive assistance . however , an atp - dependent helicase activity associated with cyt-19 produces functionally competent states that can splice efficiently at normal growth temperature . although not firmly established , it is suspected that rna chaperones bind to single - stranded regions of the misfolded structures , which , upon release , places the rna in a different region of the folding landscape , giving it a new opportunity to fold , just as anticipated by the iterative annealing model ( iam ) . the fundamental difference between class i and class ii rna substrates is in the apparent time scale of catalysis ( ) by the ribonucleoprotein ( rnp ) complex formed between rna and the rna chaperone . if this time scale is smaller than ( ) , the formation of rnp alone is sufficient to produce functionally competent rna molecules . in the opposite case ( ) , the conversion of misfolded rna into folding - competent form needs assistance from a specially designed action of rna chaperones that can transduce the free energy of atp hydrolysis . below we will describe a mathematical model for the two scenarios . the tertiary structure capture model accounts for the passive action of rna chaperones in the folding of the mitochondrial bi5 group i intron without atp . explicit mechanisms of recognition of the collapsed rnas by passive rna chaperones may differ for different systems , and might also depend on whether the rna collapse is specific or non - specific .
in a majority of casesribozymes undergo an extended to a collapsed transition even at modest ion concentration producing a heterogeneous population of compact structures whose affinity for the protein cofactors could vary greatly .for example , cbp2 could bind to these compact structures with partially folded cores ( p5-p4-p6 and p3-p7-p8 ) of group i intron ( fig.[fig : passive_rnp]a ) with differing specificity , and promote the subsequent assembly of 5 domain of bi5 core .in contrast , cyt-18 binds to rna and forms a stable cyt-18-bi5 complex at an early stage of rna folding and promotes the splicing competent states .if the association between the cofactor and compact rna is too weak then large conformational fluctuations can produce long - lived entropically stabilized metastable kinetic traps for rna . in this case , the protein cofactor would have little effect on rna folding . in the opposite limit , when the cofactor interacts strongly with collapsed rna , transient unfolding in the rna conformations , which are needed for resolving misfolded structures to the native state , would be prohibited .thus , for the chaperone - assisted folding of class i rna substrates , an optimal stability of the rna - cofactor intermediate is needed to efficiently produce an assembly - competent rnp complex . the physical picture of passive assistance of rna chaperones described above , encapsulated in the weeks - cech tertiary capture mechanism ,can be translated into the kinetic scheme shown in fig.[fig : passive_rnp ] . after rna collapses rapidly to an ensemble of collapsed intermediate structures \{c } ( ) consisting of a mixture of specifically and non - specifically collapsed structures , promiscuous binding of chaperone ( blue spheres in [ fig : passive_rnp ] ) to the conformations in \{c } produces a fluctuating ensemble of tightly and loosely bound intermediate rnp complex .this process is conceptually similar to the encounter complex in protein - protein interaction .only a fraction ( ) of states among the tightly bound ensemble of rnp , , is viable for producing functionally competent rnp state .thus , is partitioned roughly into , where denotes the intermediate ensemble that can fold into the competent rnp while can not . since transitions among the states in non - permissible on viable time scales , the only way for a molecule trapped in to reach the competent rnp state is to visit a transiently unbound ( or loosely bound ) intermediate ensemble and explore the states belonging to .once the rna is in \{ } ensemble , the rate of rnp formation is given by , \end{aligned}\ ] ] which can be quantified by assuming steady state production of ] .defining a constant for rapid pre - equilibration ( , ) between the two collapsed intermediate ensembles {ss}/[i_b]_{ss}=k_{ub}/k_b ] , we obtain the rate of rnp formation at steady state : a change in the strength of binding between rna and protein cofactor would affect the values of by modulating or the stability of ensemble , while keeping other rate constants ( and ) unchanged ( the inset of fig.[fig : passive_rnp ] ) . it can be argued that there be an optimal stability for in order to maximize the rate of rnp formation . 
if is too stable compared with then becomes a dead end with negligible probability of reshuffling its population into non - productive ensemble of into through conformational fluctuations .in contrast , if is more stable than the production of competent rnp would be inefficient .it is clear from eq.[eqn : rss ] that the limiting condition of , either or , leads to a vanishing value of ; hence it follows that there is an optimum value that maximizes the rate of rnp production .the maximum rate is obtained using : where .the presence of that maximizes is indicative of an optimal unbinding rate ( ) , for the formation of competent rnp , that satisfies . as long as remains less than the biologically viable time scale ( ), rna chaperone promotes rna molecule to reach the functionally competent form by merely providing a suitable molecular interface on which rna could interact and anneal its conformation .physically , this situation is not that dissimilar to the role mini chaperone ( apical domain of groel ) plays in annealing certain non - stringent substrates .if sufficient yield of the folded rna is not realized on the time scale , i.e. , ( the partition factor , , in the kpm is small ) , then a more active role including atp consumption is required to resolve the misfolded states . for kinetically trapped misfolded rna molecules , transient unfolding of misfolded elements by rna chaperonesis needed to increase the yield of rna since it provides another chance for refolding into a functional state . in experiments involving cyt-18 and the dead - box protein cyt-19 on _ neurospora crassa _group i intron , it was shown that the two protein cofactors ( cyt-18 and cyt-19 ) work in a coordinated fashion by utilizing atp hydrolysis .atp - dependent activity of cyt-19 was required for efficient splicing at the normal growth temperature ( 25 ) while cyt-18 alone could rescue the misfolded rna at high temperatures . in this sense , the active participation shares features of many biological processes including motility of molecular motors , and steps in signal transduction pathways . in the absence of rna chaperone, the kpm predicts that the initial pool of unfolded rna ribozymes are partitioned into folded and misfolded conformations , described by the following set of rate equations . for a given ribozyme concentration +[m]+[n] ] and +[cs] ] , =1 ] .note that all the states in misfolded ensemble are converted to the native state through the reactions , followed by {k_f}n ] and +[cs] ] and \neq 0 ] , ] , ] ( a ) when chaperone does not recognizes the native state , and ( b ) when chaperone recognizes the native state , with , , =0 ] , , , , . [fig : chaperone_action],width=384 ] ) and 5 misfolded states corresponding to the cbas in ( a ) ( ) .the free energy value of each state is assigned as , , , , , and , which leads to .( c ) the plot of ( black solid lines ) , graphically showing the solutions of , i.e. , the pole structure of due to : , , , , , .( d ) the time evolution of the fraction of native state ( ) with different initial conditions ( i ) , ( black , solid line ) ( ii ) , ( red , dashed line ) ( iii ) , , , ( blue , dot - dashed line ) .note that the fraction of native state in the steady state is due to the flux out of native to non - native states .the steady state value is independent of the initial conditions , suggesting that the rna chaperones redistribute the population of folded and misfolded states till equilibrium is reached .[ fig : ot ] , width=499 ]
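a minimal kinetic sketch of the active model makes the steady-state behaviour described in the caption above concrete: unfolded rna partitions between native and misfolded pools, and the chaperone machinery (with atp and chaperone concentrations absorbed into pseudo-first-order rates) returns both pools to the unfolded state. all rate constants below are hypothetical (python with scipy):

    import numpy as np
    from scipy.integrate import solve_ivp

    k, Phi = 1.0, 0.05   # folding flux out of U and kinetic partition factor
    kM, kN = 0.5, 0.02   # chaperone-mediated unfolding of M and of N

    def rhs(t, y):       # U -> N (rate Phi*k), U -> M (rate (1-Phi)*k),
        U, M, N = y      # M -> U (rate kM), N -> U (rate kN: no discrimination)
        return [-k * U + kM * M + kN * N,
                (1 - Phi) * k * U - kM * M,
                Phi * k * U - kN * N]

    for y0 in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):  # different initial pools
        sol = solve_ivp(rhs, (0, 500), y0, rtol=1e-9)
        print(sol.y[2, -1])   # same steady-state native fraction every time

    # closed-form steady state; below one because flux also leaves the native state
    print((Phi * k / kN) / (1 + (1 - Phi) * k / kM + Phi * k / kN))

the run reproduces the two qualitative statements above: the steady-state native fraction is below one, and it is independent of where the population starts.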
as a consequence of the rugged folding landscape of rna molecules, their folding is described by the kinetic partitioning mechanism, according to which only a small fraction ( ) reaches the folded state, while the remaining fraction of molecules is kinetically trapped in misfolded intermediates. the time for the transition from the misfolded states to the native state can far exceed biologically relevant times. thus, rna folding _in vivo_ is often aided by protein cofactors, called rna chaperones, that can rescue rnas from a multitude of misfolded structures. we consider two models, based on chemical kinetics and the chemical master equation, for describing assisted folding. in the passive model, applicable to class i substrates, transient interactions of misfolded structures with rna chaperones alone are sufficient to destabilize the misfolded structures, thus entropically lowering the barrier to folding. for this mechanism to be efficient, the intermediate ribonucleoprotein (rnp) complex between collapsed rna and protein cofactor should have optimal stability. we also introduce an active model (suitable for stringent substrates with small ), which accounts for the recent experimental findings on the action of cyt-19 on the group i intron ribozyme, showing that rna chaperones do not discriminate between the misfolded and the native states. in the active model, the rna chaperone system utilizes the chemical energy of atp hydrolysis to repeatedly bind and release misfolded and folded rnas, resulting in a substantial increase in the yield of the native state. the theory outlined here shows, in accord with experiments, that in the steady state the native state does not form with unit probability. since the ground-breaking discovery of the self-splicing catalytic activity of group i intron ribozymes, a large and growing list of cellular functions has been shown to be controlled by rna molecules. these discoveries have made it important to determine how rna molecules fold, and sometimes switch conformations in response to environmental signals, to execute a wide range of activities, from the regulation of transcription and translation to catalysis. at first glance, it may appear that rna folding is simple because of the potential restriction that the four different nucleotides are paired as demanded by the watson-crick (wc) rule. however, there are several factors that make rna folding considerably more difficult than the more thoroughly investigated protein folding problem. the presence of a negative charge on the phosphate group of each nucleotide, the participation of a large fraction of nucleotides in non-wc base pairing, the nearly homopolymeric nature of purine and pyrimidine bases, and the paucity of structural data are some of the reasons that render the prediction of rna structures and their folding challenging. despite these difficulties, considerable progress has been made in understanding how large ribozymes fold _in vitro_. these studies have shown that the folding landscape of rna is rugged, consisting of many easily accessible competing basins of attraction (cbas) in addition to the native basin of attraction (nba), which implies that the stability gap separating the cbas and the nba is modest relative to proteins.
as a consequence of the rugged folding landscape, only a small fraction of initially unfolded molecules reaches the nba rapidly, while the remaining fraction is kinetically trapped in a number of favorable alternative low-energy misfolded cbas, as predicted by the kinetic partitioning mechanism (kpm). the free energy barriers separating the cbas and the nba are often high. consequently, the transition times to the nba from the cbas could exceed the biologically relevant time scale ( ). the upper bound for should be no greater than tens of minutes, given the typical cell cycle time. because of the modest stability gap, even simple rna molecules could misfold at the secondary as well as tertiary structure levels. in structural terms, secondary structure rearrangements, which are observed in the folding of p5abc and riboswitches induced by metal ions and metabolites, respectively, are one cause of the high free energy barriers separating cbas and nba in rna. the free energy barrier associated with melting of base pairs is , where , the free energy stabilizing a base pair, is kcal/mol/bp. the average length of a duplex in rna structure is estimated to be bp from the ratio of nucleotides participating in duplex formation, where is the average length of a single-stranded chain in native rnas. using these estimates, we surmise that a typical free energy barrier associated with secondary structure rearrangement is kcal/mol ( ). by assuming that the prefactor for barrier crossing is , the time scale for spontaneous melting of a hairpin stack could be as large as sec ( 1 day)! a worked version of this order-of-magnitude estimate, under stated assumptions, is sketched below. indeed, several _in vitro_ experiments have shown that the _tetrahymena_ ribozyme does not reach the folded state with unit probability even after hundreds of minutes. the sluggish rna folding kinetics _in vitro_ is reminiscent of that observed in glasses, due to the presence of multiple metastable states (cbas). because of trapping in long-lived cbas, it is practically impossible for a large ribozyme to spontaneously make a transition to the native state with substantial probability within . these considerations suggest that _in vivo_ folding would require rna chaperones. the goal of this paper is to produce a quantitative framework for understanding the function of rna chaperones, which are protein cofactors that interact with the conformations in the cbas and facilitate their folding. we classify rna chaperones as passive and active. passive chaperones transiently interact with rna molecules and reduce the entropy barrier to folding without requiring an energy source. on the other hand, active chaperones function most efficiently by lavish consumption of atp in the presence of dead-box proteins. the need for passive or active chaperones depends on the client molecules and the extent of misfolding (see below). we formulate a general kinetic model to describe both the passive (no atp required) and active (requires atp hydrolysis) roles rna chaperones play in rescuing misfolded states. the resulting theory accounts for experimental observations, and should be useful in quantitatively analyzing future experiments.
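the order-of-magnitude estimate referenced above can be made explicit. every number below is an assumption consistent with the surrounding discussion (a per-base-pair stability of a few kcal/mol, a duplex of about six base pairs, and a microsecond-scale prefactor); the values stripped from the source may differ.

```python
import math

# Arrhenius-style estimate of the spontaneous melting time of a hairpin stack.
dg_bp = 2.5   # kcal/mol stabilizing free energy per base pair (assumed)
n_bp = 6      # base pairs that must melt (assumed)
kT = 0.6      # kcal/mol at ~300 K
tau0 = 1e-6   # s, assumed prefactor for barrier crossing
tau = tau0 * math.exp(dg_bp * n_bp / kT)
print(f"barrier ~ {dg_bp * n_bp:.0f} kcal/mol -> tau ~ {tau:.1e} s (~{tau/86400.0:.1f} days)")
```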
the price of a european option is typically expressed in terms of the black&scholes _implied volatility_ (where denotes the log-strike and the maturity), cf. . since exact formulas for a given model are typically out of reach, an active line of research is devoted to finding asymptotic expansions for , which can be useful in many respects, e.g. for a fast calibration of some parameters of the model. explicit asymptotic formulas for also allow one to understand how the parameters affect key features of the volatility surface, such as its slope, and what are the possible shapes that can actually be obtained for a given model. let us mention lee's celebrated moment formula and more recent results . a key problem is to link the implied volatility _explicitly_ to the distribution of the risk-neutral log-return, because the latter can be computed or estimated for many models. the results of benaim and friz are particularly appealing, because they connect the asymptotic behavior of directly to the _tail probabilities_ . their results, which are limited to the special regime of extreme strike with fixed maturity, are based on the key notion of _regular variation_, which is appropriate when one considers a single random variable (since is fixed). this leaves out many interesting regimes, notably the much studied case of small maturity with fixed strike. in this paper we provide a substantial extension of : we formulate a suitable generalization of the regular variation assumption on , which, coupled with suitable moment conditions, yields the asymptotic behavior of in essentially _any regime of small maturity and/or extreme strike_ (with bounded maturity). we thus provide a unified approach, which includes as special cases both the regime of extreme strike with fixed maturity, and that of small maturity with fixed strike. mixed regimes, where and vary simultaneously, are also allowed. this flexibility yields asymptotic formulas for the volatility surface in open regions of the plane. in section [ch2:sec:examples] we illustrate our results through applications to popular models, such as the carr-wu finite moment logstable model and merton's jump diffusion model. we also discuss heston's model, cf. [ch2:sec:heston]. in a separate paper we consider a stochastic volatility model which exhibits multiscaling of moments, introduced in . the key point in our analysis is to connect explicitly the asymptotic behavior of the tail probabilities to call and put option prices (cf. theorems [ch2:th:main2b], [ch2:th:main2bl] and [ch2:th:main2a]). in fact, once the asymptotics of are known, the behavior of the implied volatility can be deduced in a model-independent way, as recently shown by gao and lee. we summarize their results in [ch2:sec:main1] (see theorem [ch2:th:main1]), where we also give an extension to a special regime that was left out of their analysis (cf. also ). the paper is structured as follows. * in section [ch2:sec:main] we set some notation and state our main results. * in section [ch2:sec:examples] we apply our results to some popular models. * in section [ch2:sec:pricetovol] we prove theorem [ch2:th:main1], linking option price and implied volatility.
* in section [ch2:sec:probtoprice] we prove our main results (theorems [ch2:th:main2b], [ch2:th:main2bl] and [ch2:th:main2a]). * finally, a few technical points have been deferred to appendix [ch2:sec:app]. we consider a generic stochastic process representing the log-price of an asset, normalized by . we work under the risk-neutral measure, that is (assuming zero interest rate) the price process is a martingale. european call and put options, with maturity and log-strike , are priced respectively $c(\kappa, t) = {\mathrm e}[(e^{x_t} - e^\kappa)^+]\,, \qquad p(\kappa, t) = {\mathrm e}[(e^\kappa - e^{x_t})^+]\,,$ and are linked by the _call-put parity_ relation . as in , in our results _we take limits along an arbitrary family (or ``path'') of values of_ . this includes both sequences and curves, hence we omit subscripts. without loss of generality, we assume that all the s have the same sign (just consider separately the subfamilies with positive and negative s). to simplify notation, we only consider positive families and give results for both and . our main interest is for families of values of such that ; whenever this holds, one has (see [ch2:sec:cpexplained]) . we stress that gathers many interesting regimes, namely: [ch2:it:a] and (in particular, the case of fixed ); [ch2:it:b] and ; [ch2:it:c] and (in particular, the case of fixed ); [ch2:it:d] and . remarkably, while regime needs to be handled separately, regimes -- will be analyzed at once, as special instances of the case `` is bounded away from zero''. we stress the requirement of _bounded_ maturity in . some of our arguments can be adapted to deal with cases when , but additional work is needed (for instance, we assume the boundedness of some exponential moments ), because this is true along a suitable subsequence. we first focus on families of such that , a regime that we call _atypical deviations_. this is the most interesting case, much studied in the literature, since it includes regimes , and described on page , and also regime provided sufficiently slowly. when with fixed , benaim and friz require the _regular variation_ of the tail probabilities, i.e. there exist and a slowly varying function ( is slowly varying if for all ) such that . it is not obvious how to generalize when is allowed to vary, i.e. which conditions to impose on . however, one can reformulate the first relation in simply by requiring the existence of for any fixed (by theorem 1.4.1 of ), and analogously for the second relation in . this reformulation (in which is not even mentioned!
) turns out to be the right condition to impose in the general context that we consider, when is allowed to vary. we are thus led to the following: [ch2:ass:rv] the family of values of with , satisfies , and for every the following limit exists in ] . * given , the second moment condition is $< \infty$ along the given family of values of . note that for this simplifies to $\le 1 + c\kappa^2$ . we are ready to state our main results, which express the asymptotic behavior of option prices and implied volatility explicitly in terms of the tail probabilities. due to the different assumptions, we first consider right-tail asymptotics. [ch2:th:main2b] consider a family of values of with , such that hypothesis [ch2:ass:rv] is satisfied by the right tail probability . let the moment condition hold for _every_ , or alternatively let it hold only for _some_ but in addition assume that ; then . special case: if , assumption can be relaxed to , and relations - simplify to . let the moment condition hold for _every_ , or alternatively let it hold only for _some_ but in addition assume ; then . next we turn to left-tail asymptotics. the assumptions in this case turn out to be considerably weaker than those for the right tail. for instance, the left-tail condition $< \infty$ . fix with , resp. . for any family of with , the asymptotic behavior of option prices is given by $c(\kappa, t) \sim \gamma_t \, {\mathrm e}[(y - a)^+]\,, \qquad \text{resp.} \qquad p(-\kappa, t) \sim \gamma_t \, {\mathrm e}[(y + a)^-]\,,$ and correspondingly the implied volatility is given by an explicit two-case formula, involving the ratio ${\mathrm e}[\,\cdot\,]/a$ if $a > 0$, and equal to $\sqrt{2\pi} \, {\mathrm e}[y^\pm]$ if $a = 0$. [rem:equivalent] hypothesis [ch2:ass:smalltime] can be easily checked when the characteristic function of is known, because, by the lévy continuity theorem, the convergence in distribution is equivalent to the pointwise convergence $\to {\mathrm e}[e^{iuy}]$ ( $c(\kappa, t)/\kappa \to a \in (0, \infty)$ , $c(\kappa, t)/\kappa \to \infty$ , $\kappa = 0$ ), and we fix to work in the risk-neutral measure, cf. (proposition 1 of ). the moment generating function of $x_t$ is ${\mathrm e}[e^{\lambda x_t}] = \begin{cases} e^{[\lambda \mu - \frac{(\lambda\sigma)^{\alpha}}{\cos(\frac{\pi \alpha}{2})}]\, t} & \text{if } \lambda \ge 0\,, \\ +\infty & \text{if } \lambda < 0\,. \end{cases}$ note that as one recovers the black&scholes model with volatility , cf. [ch2:sec:bs] below. applying theorems [ch2:th:main2b], [ch2:th:main2bl] and [ch2:th:main2a], we give a _complete characterization_ of the volatility smile asymptotics with bounded maturity. this includes, in particular, the regimes of extreme strike (with fixed ) and of small maturity (with fixed ). [th:cw] the following asymptotics hold. * atypical deviations. consider any family of with , such that (this includes the regimes , , on page , and part of regime ). then one has the right-tail asymptotics ; the corresponding left-tail asymptotics are given by , which can be made more explicit by distinguishing different regimes. * typical deviations. for any family of with , one has an explicit two-case formula, involving the ratio ${\mathrm e}[\,\cdot\,]/a$ if $a > 0$, and equal to $\sqrt{2\pi} \, \sigma \, {\mathrm e}[y^\pm]$ if $a = 0$.
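the finite moment logstable moment generating function recorded just above is easy to sanity-check numerically. in the sketch below the drift is fixed by the risk-neutral (martingale) normalization $\mathrm e[e^{x_t}] = 1$; the parameter values are arbitrary illustrations, not values from the text.

```python
import numpy as np

# Carr-Wu finite moment logstable mgf (lambda >= 0); negative exponential
# moments are infinite. alpha in (1, 2), so cos(pi*alpha/2) < 0.
alpha, sigma = 1.5, 0.2
mu = sigma**alpha / np.cos(np.pi * alpha / 2)  # martingale condition at lambda = 1

def mgf(lam, t):
    if lam < 0:
        return np.inf
    return np.exp((lam * mu - (lam * sigma)**alpha / np.cos(np.pi * alpha / 2)) * t)

assert abs(mgf(1.0, 0.75) - 1.0) < 1e-12  # E[exp(X_t)] = 1 under the risk-neutral measure
print(mgf(2.0, 0.75))                     # positive exponential moments stay finite
```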
[ch2:rem:joint] the fact that relations and hold for _any_ family of satisfying yields interesting consequences. we claim that, for any and , there exists such that the following inequalities hold _for all in the region_ : . similar inequalities can be deduced from - and . relation gives a _uniform approximation_ of the volatility surface in open regions of the plane. the proof of is simple: assume by contradiction that there exist such that for every relation _fails_ for some . extracting a subsequence, the family satisfies but not , contradicting theorem [th:cw]. let denote a random variable with characteristic function ${\mathrm e}[e^{iuy}] = e^{-|u|^\alpha\left(1 + i\,{\mathrm{sign}}(u)\tan(\frac{\pi\alpha}{2})\right)}\,,$ i.e. has a strictly stable law with index and skewness parameter , and $= 0$ , because has finite moments of all orders strictly less than (cf. property 1 of ). since for one has $\le {\mathrm e}[e^{q(1+\eta)x_t}] < \infty$ , because $\to e^{-\frac{u^2}{2\sigma^2}}$ for some constant , since $\to 1$ . the heston model is a stochastic volatility model defined by the following sdes , where and are standard brownian motions with . note that displays explosion of moments, i.e. $= \infty$ for , while $= \infty$ also for . the behavior of the moment explosion is described in the following lemma, proved below. [ch2:th:lemmaheston] if , then for every . + if , then for every . moreover, as , where . the asymptotic behavior of the implied volatility is known in the regimes of large strike (with fixed maturity) and small maturity (with fixed strike). * in , benaim and friz show that for fixed , when , based on the estimate (cf. also ). * in , forde and jacquier have proved that for any fixed , as , where is the legendre transform of the function given by , where is the constant in . their analysis is based on the estimate , obtained by showing that the log-price in the heston model satisfies a large deviations principle as , with rate and good rate function . we first note that the asymptotics and follow easily from our theorem [ch2:th:main2b], plugging the estimates and into relations and , respectively. we also observe that the estimates and match, in the following sense: if we take the limit of the right hand side of (i.e. we first let and then in ), we obtain ; if, on the other hand, we take the limit of the right hand side of (i.e. we first let and then in ), since , as , hence the slope of converges to as , we obtain , which coincides with . analogously, the estimates and also match. it is then natural to conjecture that, for _any_ family of values of such that and jointly, one should have , where is the constant in . if this holds, applying theorem [ch2:th:main2b], relation yields , providing a smooth interpolation between and . [ch2:rem:jointh] if holds for any family of values of with and , it follows that for every there exists such that the following inequalities hold _for all in the region_ , as follows easily by contradiction (cf. remark [ch2:rem:joint] for a similar argument). given any number , define the explosion time as the supremum of times for which $< \infty$ . note that if then . by (see also ), , where . observe that if , then and , which implies for every , or equivalently for every . on the other hand, since , we observe that if , then as , which implies in particular . this leads to the conclusion that, if , then , where was defined in .
it remains to study the case , in which for every . we have two possibilities: if , then when , and so by ; on the other hand, if , then when , and so ; finally, if , , and so . in all cases we obtain , in agreement with . in this section we prove theorem [ch2:th:main1]. we start with some background on the black&scholes model and on related quantities. we let $z$ be a standard gaussian random variable and denote by $\varphi$ and $\phi$ its density and distribution functions. the mills ratio is defined by $r(x) := \frac{1 - \phi(x)}{\varphi(x)}\,.$ the next lemma summarizes the main properties of that will be used in the sequel. [ch2:th:mills] the function is smooth, strictly decreasing, strictly convex and satisfies . since and is an analytic function, is also analytic. since , one obtains ; recalling that , these relations already show that and for all . for , the following bounds hold (eq. (19) of ; prop. 1.5 of ): . applying yields and for all , hence . we recall that the smooth function was introduced in . since , is a strictly decreasing bijection (note that and ). its inverse is then smooth and strictly decreasing as well. writing , it follows by that as , hence it follows easily that satisfies . let be a standard brownian motion. the black&scholes model is defined by a risk-neutral log-price , where the parameter represents the volatility. the black&scholes formula for the price of a normalized european call is , where is the log-strike, is the maturity, and we define the normalized call price by $\begin{cases} (1 - e^\kappa)^+ & \text{if } v = 0\,, \\ \phi(d_1) - e^\kappa \phi(d_2) & \text{if } v > 0\,, \end{cases}$ where is defined in , and we set . note that is a continuous function of . since , for all one easily computes ; hence is strictly increasing in and strictly decreasing in (see figure [ch2:fig:bs]). it is also directly checked that for all and one has . in the following key proposition, proved in appendix [ch2:sec:app:bs], we show that when , the black&scholes call price vanishes precisely when or (or, more generally, in a combination of these two regimes, when ). we also provide sharp estimates on for each regime (weaker estimates on could be deduced from theorems [ch2:th:main2b] and [ch2:th:main2bl]). [ch2:th:bs] for any family of values of with , , one has ; that is, if and only if from any subsequence of one can extract a sub-subsequence along which either or . moreover: * if , then ; * if , then , where and are defined in and . since the function is a bijection from to , it admits an inverse function , defined by . by construction, is a strictly increasing bijection from to . we will mainly focus on the case , for which . consider an arbitrary model, with a risk-neutral log-price , and let be the corresponding price of a normalized european call option, cf.
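a compact numerical companion to these definitions. it uses the standard mills ratio and the normalized black&scholes call written above; the convention $d_1 = -\kappa/v + v/2$, $d_2 = d_1 - v$ with total volatility $v = \sigma\sqrt{t}$ is an assumption consistent with the display (the stripped symbols prevent a verbatim match).

```python
import numpy as np
from scipy.stats import norm

def mills(x):
    # mills ratio r(x) = (1 - Phi(x)) / phi(x): smooth, decreasing, convex
    return norm.sf(x) / norm.pdf(x)

def c_bs(kappa, v):
    # normalized black&scholes call; v = sigma * sqrt(t) is the total volatility
    if v == 0.0:
        return max(1.0 - np.exp(kappa), 0.0)
    d1 = -kappa / v + v / 2.0
    d2 = d1 - v
    return norm.cdf(d1) - np.exp(kappa) * norm.cdf(d2)

kappa, v = 0.1, 0.25
p_bs = c_bs(kappa, v) - (1.0 - np.exp(kappa))  # put price via call-put parity
assert p_bs > 0.0
assert mills(1.0) > mills(2.0)                 # r is strictly decreasing
```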
since $x \mapsto x^+$ is a convex function, one has $c(\kappa, t) \ge ({\mathrm e}[e^{x_t}] - e^\kappa)^+ = (1 - e^\kappa)^+$ . consequently, by , we have the following relation between the _implied volatility_ (defined in [ch2:sec:setting]) and : . relation allows us to reformulate theorem [ch2:th:main1] more transparently in terms of the function . inspired by , we define by . consider an arbitrary family of values of , such that either , and , or alternatively , and (with as in ). then, in light of , we can write the following: * if is bounded away from zero, relation is equivalent to ; * if is bounded away from infinity, relations and are equivalent to , where is the inverse of the function defined in , and satisfies . the proof of theorem [ch2:th:main1] is now reduced to proving relations and . we first show that we can assume , by a symmetry argument. recalling and , for all and we have , where is defined in . as a consequence, in the case , replacing by and by in the first line of , one obtains the second line of . performing the same replacements in the first line of yields , which is slightly different from the third line of . however, the discrepancy is only apparent, because we claim that . this is checked as follows: if , then ; if , since by assumption, the first relation in yields , as required. (see the lines following below for more details.) we fix a family of values of with and bounded away from zero, say for some fixed . our goal is to prove that relation holds. if we set , by definition we have . let us first show that . by proposition [ch2:th:bs], implies , which means that every subsequence of values of admits a further sub-subsequence along which either or . the key point is that implies , because (recall that ). thus along every sub-subsequence, which means that along the whole family of values of . since , we can apply relation . taking of both sides of that relation, recalling the definition of and the fact that , we can write . we now show that the last term in the right hand side is and can therefore be neglected. note that eventually, because , hence . since is decreasing for , in case one has ; on the other hand, recalling that , in case one has , which can be rewritten as , and together with yields . in conclusion, yields ; that is, there exists such that , and since we can write . this is a second-degree equation in , whose solutions (both positive) are . since , eventually one has ; since , it follows that , which selects the `` '' solution in . taking square roots of both sides of and recalling that yields the equality , as one checks by squaring both sides of . finally, since , it is quite intuitive that relation yields . to prove this fact, we observe that by we can write , where for fixed we define the function by . by direct computation, when (resp. ) one has (resp. ) for all . since , it follows that for every one has if , while if ; consequently, , for any , which yields _uniformly over_ . by , relation is proved. we now fix a family of values of with and bounded away from infinity, say for some fixed , and we prove relation . we set so that , cf. . (note that , because by assumption.) applying proposition [ch2:th:bs] we have , i.e. either or along sub-subsequences. however, this time implies , because (recall that ), which means that along the whole given family of values of .
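since the map from total volatility to the normalized call price is strictly increasing, the inverse used throughout this section can be computed by bracketing and root-finding. a self-contained sketch (same normalization and d1/d2 convention as assumed above):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def c_bs(kappa, v):
    # normalized black&scholes call price for v > 0
    d1 = -kappa / v + v / 2.0
    return norm.cdf(d1) - np.exp(kappa) * norm.cdf(d1 - v)

def implied_total_vol(price, kappa, v_max=10.0):
    # invert the strictly increasing map v -> c_bs(kappa, v);
    # the implied volatility is then implied_total_vol / sqrt(t)
    return brentq(lambda v: c_bs(kappa, v) - price, 1e-12, v_max)

v = implied_total_vol(c_bs(0.1, 0.25), 0.1)
assert abs(v - 0.25) < 1e-8
```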
since , relation yields let us focus on : recalling that and , we first show that by a subsequence argument , we may assume that ] and ] , hence holds .having proved , we can plug it into , obtaining precisely the first line of .this completes the proof of theorem [ ch2:th : main1 ] .in this section we prove theorems [ ch2:th : main2b ] , [ ch2:th : main2bl ] and [ ch2:th : main2a ] .we stress that it is enough to prove the asymptotic relations for the option prices and , because the corresponding relations for the implied volatility follow immediately applying theorem [ ch2:th : main1 ] .we prove theorem [ ch2:th : main2b ] and [ ch2:th : main2bl ] at the same time .we recall that the tail probabilities , are defined in . throughout the proof, we fix a family of values of with and , for some fixed , such that hypothesis [ ch2:ass : rv ] is satisfied . extracting subsequences, we may distinguish three regimes for : * if our goal is to prove , resp . ; * if our goal is to prove , resp ., because in this case , plainly , one has , resp . , by ; * if , our goal is to prove , resp . .of course , each regime has different assumptions , as in theorem [ ch2:th : main2b ] and [ ch2:th : main2bl ] ._ step 0 . preparation ._ it follows by conditions and that therefore for every one has eventually where the inequality is `` '' instead of `` '' , because both sides are negative quantities .we stress that , resp . , by , hence moreover , we claim that in any of the regimes , and one has this follows readily by if or .if we argue as follows : by markov s inequality , for e^{-(1+\eta)\kappa } \,,\ ] ] hence \,.\ ] ] since in the regime we assume that the moment condition holds for some or every , the term ] . for all ] , it follows that } e^{\kappa y } \ , { \overline f}_t(\kappa y ) \bigg ) & \le \max_{n=1 , \ldots , \bar n } \big ( a_n \kappa + ( 1-\delta ) a_{n-1 } \log { \overline f}_t(\kappa ) \big ) \\ & = \max_{n=1 , \ldots , \bar n } \big((1-\delta ) a_{n-1 } \big ( \log { \overline f}_t(\kappa ) + \kappa \big ) + \delta(1+m ) \kappa \big ) \ , .\end{split}\ ] ] plainly , the is attained for , for which .recalling , we get } e^{\kappa y } \ , { \overline f}_t(\kappa y ) \bigg ) \le ( 1-\delta(1+a+am ) ) \big ( \log { \overline f}_t(\kappa ) + \kappa \big ) \,.\ ] ] choosing , the claim is proved .we are ready to give sharp upper bounds on , refining . 
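step 0 of the proof rests on a markov-type bound for the right tail. a quick monte carlo sanity check under black&scholes dynamics (a stand-in model; all parameter values are assumed):

```python
import numpy as np

# check: P(X_t > kappa) <= E[exp((1 + eta) * X_t)] * exp(-(1 + eta) * kappa)
rng = np.random.default_rng(0)
sigma, t, kappa, eta = 0.3, 0.5, 0.4, 0.5
x = -0.5 * sigma**2 * t + sigma * np.sqrt(t) * rng.standard_normal(10**6)
lhs = np.mean(x > kappa)
rhs = np.mean(np.exp((1.0 + eta) * x)) * np.exp(-(1.0 + eta) * kappa)
assert lhs <= rhs
print(lhs, rhs)
```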
for fixed , we write $c(\kappa, t) = {\mathrm e}[(e^{x_t} - e^\kappa) {{\sf 1}}_{\{\kappa < x_t \le \kappa m\}}] + {\mathrm e}[(e^{x_t} - e^\kappa) {{\sf 1}}_{\{x_t > \kappa m\}}]\,,$ and we estimate the first term as follows: by the fubini–tonelli theorem and , ${\mathrm e}\big[\big(\int_\kappa^{\infty} e^x \, {{\sf 1}}_{\{x < x_t\}} \, {\mathrm d}x\big) {{\sf 1}}_{\{\kappa < x_t \le \kappa m\}}\big] = \int_\kappa^{\kappa m} e^x \, {\mathrm p}(x < x_t \le \kappa m)\, {\mathrm d}x \le \int_\kappa^{\kappa m} e^x \, {\overline f}_t(x) \, {\mathrm d}x = \kappa \int_1^m e^{\kappa y} \, {\overline f}_t(\kappa y) \, {\mathrm d}y \le \kappa \, (m - 1) \, e^{(1 - {\varepsilon})(\log {\overline f}_t(\kappa) + \kappa)}\,.$ to estimate the second term in , we start with the cases and , where we assume that holds for some , as well as , hence we can fix such that . bounding , hölder's inequality yields $\le {\mathrm e}[e^{(1+\eta)x_t}]^{\frac{1}{1+\eta}} \, {\overline f}_t(\kappa m)^{\frac{\eta}{1+\eta}} = c \, {\overline f}_t(\kappa m)^{\frac{\eta}{1+\eta}}\,,$ where is an absolute constant, by . applying relation together with , we obtain eventually $\le (1 - {\varepsilon}) \log {\overline f}_t(\kappa) \le (1 - {\varepsilon})\big(\log {\overline f}_t(\kappa) + \kappa\big)\,.$ recalling and , eventually , hence by $\le (1 - 2{\varepsilon})\big(\log {\overline f}_t(\kappa) + \kappa\big)\,.$ looking back at , since by , and again , one has eventually . since is arbitrary, this shows that , which together with completes the proof of , if . since if , by , we can rewrite in this case as , which together with completes the proof of . it remains to consider the case when , where we assume that relation holds for some , together with . as before, we fix such that . since $\le {\mathrm e}\big[\big|\frac{e^{x_t} - 1}{\kappa}\big|^{1+\eta}\big] \le c\,,$ for some absolute constant , by , the second term in is bounded by $\le \kappa \, {\mathrm e}\big[\big|\frac{e^{x_t} - e^\kappa}{\kappa}\big|^{1+\eta}\big]^{\frac{1}{1+\eta}} \, {\overline f}_t(\kappa m)^{\frac{\eta}{1+\eta}} \le \kappa \, c \, {\overline f}_t(\kappa m)^{\frac{\eta}{1+\eta}}\,.$ in complete analogy with - , we obtain that eventually $\cdots/\kappa \le (1 - {\varepsilon}) \log {\overline f}_t(\kappa)\,.$ by , eventually , hence by $\cdots/\kappa \le (1 - 2{\varepsilon})(\log {\overline f}_t(\kappa) + \kappa)\,.$ recalling and , we can finally write , because and . since is arbitrary, we have proved that , which together with completes the proof of . _upper bounds on ._ we are going to prove sharp upper bounds on , which will complete the proof of relations , and . by we can write $\le e^{-\kappa} \, f_t(-\kappa)\,,$ therefore , which together with completes the proof of , if . on the other hand, if , since , relation implies (recall that ); in view of , the proof of is completed.
it remains to consider the case . if relation holds _for every_ , we argue in complete analogy with -- , getting , which together with completes the proof of . if, on the other hand, relation holds only _for some_ , we also assume that condition holds, hence we can fix such that . let us write $p(-\kappa, t) = {\mathrm e}[(e^{-\kappa} - e^{x_t}) {{\sf 1}}_{\{-\kappa m < x_t \le -\kappa\}}] + {\mathrm e}[(e^{-\kappa} - e^{x_t}) {{\sf 1}}_{\{x_t \le -\kappa m\}}]\,.$ in analogy with , for every fixed , the first term in the right hand side can be estimated as follows (note that is decreasing): $\le \int_{-\kappa m}^{-\kappa} e^x \, f_t(x) \, {\mathrm d}x = \kappa \int_{1}^{m} e^{-\kappa y} \, f_t(-\kappa y) \, {\mathrm d}y \le \kappa (m - 1) \, f_t(-\kappa) \le \kappa \, e^{(1 - {\varepsilon}) \log f_t(-\kappa)}\,.$ the second term in is estimated in complete analogy with -- , yielding $\cdots/\kappa \le (1 - {\varepsilon}) \log f_t(-\kappa)\,.$ recalling , we obtain from and , and since is arbitrary we have proved that relation still holds, which together with completes the proof of , and of the whole theorem [ch2:th:main2b]. by skorokhod's representation theorem, we can build a coupling of the random variables and such that relation holds a.s. since the function is continuous, recalling that , for we have $\xrightarrow{\,a.s.\,} (y - a)^+\,,$ and analogously for : $\xrightarrow{\,a.s.\,} (-a - y)^+ = (y + a)^-\,.$ taking the expectation of both sides of these relations, one would obtain precisely . to justify the interchange of limit and expectation, we observe that the left hand sides of and are uniformly integrable, being bounded in . in fact , and the second term in the right hand side is uniformly bounded (recall that by assumption), while the first term is bounded in , by . recall from [ch2:sec:setting] that denotes the risk-neutral log-price, and assume that in distribution as (which is automatically satisfied if has right-continuous paths). for an arbitrary family of values of , with and , we show that condition implies . assume first that (with no assumption on ). since , one has in distribution, hence by and fatou's lemma. with analogous arguments, one has , hence is satisfied. next we assume that and is bounded, say (and optimize over ), hence , where denotes a gaussian random variable with mean and variance . we recall the standard estimate as . then we can write: . in particular, we get from that, for fixed , . for a lower bound, restricting the sum in to a single value , we get . we now prove relation . we fix a family of with and for some . to get an upper bound, we drop the term in (since ) and plug , getting . let us denote by the value of for which the minimum in the definition of is attained. choosing large enough, so that , the middle term in is and is the dominating one, provided and , so that the third term is . for an analogous lower bound, we apply with : since (recall that ), we get . we have thus proved relation . it remains to prove relation . we fix a family of such that either and , or and . since , by direct computation, the infimum is attained at , which yields . we now choose in , so that , where we have absorbed inside the term in the exponential, because by (recall that ). the dominant contribution to is given by the middle term (note that , always by ). for a corresponding lower bound, we apply with : since and (because ), we get . we have thus shown that , completing the proof of relation and of lemma [th:lemma]. let us first prove and . since , cf .
and, recalling we can rewrite the black&scholes formula as follows : if , applying we get and is proved . next we assume that . by convexity of ( cf. lemma [ ch2:th : mills ] ) , hence to prove it suffices to show that . to this purpose , by a subsequence argument , we may assume that .since for , when necessarily ] then both and converge to , by continuity of , hence , i.e. as requested .let us now prove .assume that , and note that for every subsequence we can extract a sub - subsequence along which either or .we can then apply and to show that : * if , the right hand side of is bounded from above by ; * if and , then and consequently is uniformly bounded from above , hence the right hand side of vanishes ( since ) . finally , we assume that and show that . extracting a subsequence, we have for some fixed , i.e. both and , and we may assume that ] .consider first the case , i.e. : by one has , hence ( because is bounded ) , and recalling relation yields next consider the case : since , we have and again by we obtain . in both cases , .we thank fabio bellini , stefan gerhold and carlo sgarra for fruitful discussions .
we provide explicit conditions on the distribution of risk-neutral log-returns which yield sharp asymptotic estimates on the implied volatility smile. we allow for a variety of asymptotic regimes, including both small maturity (with arbitrary strike) and extreme strike (with arbitrary bounded maturity), extending previous work of benaim and friz. we present applications to popular models, including the carr-wu finite moment logstable model, merton's jump diffusion model and heston's model.
electrical capacitance tomography (ect) is an attractive method for imaging multiphase flows, as it is noninvasive, fast, safe and low cost. a typical ect system consists of three main parts: a multi-electrode sensor, acquisition hardware, and a computer for hardware control and image processing. specifically, the multi-electrode sensor in ect typically has electrodes surrounding the wall of the process vessel. the number of independent capacitance measurements in such a configuration is , due to the number of independent sensor pairs with electrodes. the final objective is to recover cross-sectional or even 3d images of the permittivity distribution by using these measurements to solve an inverse problem. however, the inverse problem is underdetermined, since the number of measurements is far fewer than the number of pixels in the reconstructed image. furthermore, the governing equations to be considered are non-linear. various reconstruction algorithms have been developed to cope with these difficulties. direct, or single-step, algorithms include the classic linear back projection (lbp), approaches based on singular value decomposition (svd), and tikhonov regularization. indirect, or iterative, algorithms include landweber iterations (li) and iterative tikhonov methods. these algorithms all inherently assume a smooth permittivity distribution within the sample, but for many systems this assumption is poor. in recent years, concepts from compressive sensing (cs) theory have been shown to permit the reconstruction of sharp changes in permittivity. cs cannot be applied strictly to ect image reconstruction due to the non-linear nature of the problem and because the sensitivity matrix does not satisfy the restricted isometry property (rip). however, several researchers have extended the ideas of cs to non-linear systems. in this paper we adapt ideas from cs to propose a comprehensive reweighted total variation iterative shrinkage thresholding (tv-ist) algorithm for non-linear ect image reconstruction. after explaining the ect physical model, we modify the conventional tv-ist to develop the tv-ist for ect and its fast version with auxiliaries. we introduce adaptive weights to approximate the -norm solution closely. finally, we combine this reweighting approach with a method to minimize the non-linearity of the reconstruction by updating the sensitivity matrices within tv-ist. the algorithm has been examined using simulated measurements of phantoms to show its superiority compared with other existing algorithms. in ect, the permittivity distribution inside a pipe or vessel of interest, corresponding to the material distribution, is calculated from measured capacitances between all pairs of sensors located around the pipe's periphery. the total electric flux over all the electrode surfaces is equal to zero, hence the potential and permittivity are obtained from a form of poisson's equation: $\nabla \cdot [\varepsilon(x, y) \nabla \phi(x, y)] = 0\,,$ where $\varepsilon(x, y)$ is the spatial permittivity distribution and $\phi(x, y)$ the electric potential distribution. the boundary conditions fix the excitation potential on the excited electrode and zero potential on the other electrodes. for the two-dimensional case, the relationship between the capacitance and the permittivity distribution can be expressed by the following equation: $c = \frac{q}{\Delta v} = -\frac{1}{\Delta v} \oint_{\gamma} \varepsilon(x, y) \nabla \phi(x, y) \cdot {\mathrm d}\gamma\,,$ where $q$ is the total charge, $\gamma$ denotes the closed line of the electrical field, $\varepsilon(x, y)$ is the permittivity distribution in the sensing field, and $\Delta v$ is the potential difference between the two electrodes forming the capacitance.
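a finite-difference sketch of this forward problem: solve div(eps*grad(phi)) = 0 with one electrode excited. the square geometry, grid size, and boundary placement are simplified assumptions (a real ect sensor is circular and, as in the simulations later, is modeled with fem).

```python
import numpy as np

# Toy forward solve of div(eps * grad(phi)) = 0 on a square grid.
n = 64
eps = np.ones((n, n))   # permittivity map (uniform here, by assumption)
phi = np.zeros((n, n))

def apply_bc(phi):
    phi[0, :] = 1.0      # "excited electrode": top edge held at unit potential
    phi[-1, :] = 0.0     # remaining boundary grounded
    phi[:, 0] = 0.0
    phi[:, -1] = 0.0

apply_bc(phi)
for _ in range(3000):    # Jacobi-style relaxation with eps-weighted averaging
    num = (eps[:-2, 1:-1] * phi[:-2, 1:-1] + eps[2:, 1:-1] * phi[2:, 1:-1] +
           eps[1:-1, :-2] * phi[1:-1, :-2] + eps[1:-1, 2:] * phi[1:-1, 2:])
    den = (eps[:-2, 1:-1] + eps[2:, 1:-1] + eps[1:-1, :-2] + eps[1:-1, 2:])
    phi[1:-1, 1:-1] = num / den
    apply_bc(phi)
```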
the electric potential in ([eq:c]) is itself a function of the permittivity. therefore the capacitance between electrode combinations can be considered as a function of the permittivity distribution: , where is a non-linear function, and the elements of are the non-redundant capacitance values obtained from the electrode pairs. if we discretise the permittivity and express it as a vector , we can estimate the changes in the capacitance values from a taylor series expansion: , where is the sensitivity of the capacitance with respect to changes in the permittivity distribution, and represents the higher order terms of . because is usually small, the higher order terms are often neglected. ([eq:trianglec]) can then be linearized in matrix form: , where is a jacobian/sensitivity matrix denoting the sensitivity distribution for each electrode pair, and . as a result, the non-linear forward problem has been reformulated as a linear approximation. generally in ect, ([eq:linearc]) is written in a normalized form , where is the normalized capacitance vector, is the jacobian matrix of the normalized capacitance with respect to the normalized permittivities, which gives a sensitivity map for each electrode pair, and is the normalized permittivity vector, which can be visualized as the colour density of the image pixels. the conventional optimization problem of ect becomes . because there are electrode pairs, should be . the objective of the reconstruction algorithm is to recover from the measured capacitance vector ; in the discrete linear model, it is to estimate given , where is treated as a constant matrix determined in advance for simplicity. there are several difficulties with the reconstruction problem. firstly, ([eq:lambda]) is under-determined, so the solution is not unique, and it is very sensitive to disturbances of . secondly, owing to the non-linearity in eq. ([eq:c]), is not constant but varies for different permittivity distributions. in this paper, we propose a non-linear reweighted total variation image reconstruction algorithm to overcome these difficulties. to recover the permittivity distribution image, many reconstruction algorithms for ect have been developed. generally, the reconstruction algorithms can be categorized into two groups: direct algorithms and iterative algorithms. among them, the landweber iteration and steepest descent method (lwsdm) is considered one of the best algorithms, with good efficiency. it minimizes the cost function , e.g. ; to minimize , the gradient of is , and we iteratively update the image in the direction in which decreases most quickly. therefore the new image will be , where is a positive value determining the step size. in fact, lwsdm can be derived from the iterative shrinkage thresholding algorithm (ista) as a special case with ect constraints. here we introduce and explain the general model of ista, on which our algorithm is also based. the ista solves a class of optimization problems with convex differentiable cost functions and convex regularization. [ther:ista] consider the general formulation: , where the following assumptions are satisfied: * : a smooth convex function which is also continuously differentiable with lipschitz continuous gradient: , where is the lipschitz constant of . * : a continuous convex function mapping . * problem is solvable.
then the basic ista converges to its true solution by running the iteration: . for example, lwsdm is actually a special instance of problem , obtained by substituting and , as a smooth quadratic minimization problem with the lipschitz constant of the gradient being . then according to ([eq:p_l]) we have , which is equivalent to the lwsdm, where . theorem [ther:ista] provides the theoretical convergence guarantee for such algorithms. the total variation (tv) norm of the image has been widely used to penalize the cost function. it has also been verified that the tv norm can be utilized to address under-determined image reconstruction and reproduce ect or other tomography images with sharp transitions in intensity. therefore, unlike conventional techniques for iterative reconstruction, we assume that there are sharp changes in intensity that can be sparsely represented by their spatial gradients. in this case the cost function is to minimize the least squares error together with the sparsity of the intensity changes: , where is the discrete isotropic tv of the two-dimensional image, defined by , with the boundary conditions and . ([eq:tv1]) belongs to the class of linear inverse problems with nonquadratic regularizers. nonquadratic regularizers include wavelet representations, sparse regression, total variation, etc. these problems can be solved by a signal processing technique called compressive sensing (cs) in the literature. ista is very convenient for solving cs problems with -norm regularization. the non-linear shrinkage operation, or so-called soft thresholding, is $\mathrm{shrink}_\tau(x) = \mathrm{sign}(x) \max(|x| - \tau, 0)\,.$ for instance, ista and its derivative versions, along with the shrinkage operation, have been verified to solve wavelet-based reconstruction for magnetic resonance imaging (mri) efficiently. in the next section we explain how to implement ista for ect image reconstruction using tv regularization and demonstrate its effectiveness. in this section we present an iterative reconstruction technique for ect. as in ist, iterative soft thresholding is applied to penalize the total variation of the ect image. some of the contents have been introduced in our conference paper; here we provide the full theoretical analysis of this algorithm and its convergence rate. in the ect model, the permittivity distribution inside the pipe can usually be formulated as a 2d image/matrix . the set is expressed as a column vector ; denotes the pixel at position in the imaging region, its magnitude is proportional to the permittivity difference, and outside of the imaging region. we use to represent the gradients of the image, which correspond to the horizontal and vertical finite differences, respectively. in detail, the gradient transforms are used to calculate the gradients: , where are transform matrices. each element of corresponds to the same element in . likewise, given , an inverse transform can be carried out by solving a least squares (ls) problem of ; using linear algebra we can obtain the standard ls solution: . here approximates the laplacian operator for the image; it is an approximation of the fourier transform version, but only considers the pixels within the imaging region. to consider an isotropic form of tv, a single vector is used to represent the gradient magnitude, where the elements of are given by . then ([eq:tv1]) can be reformulated to ([eq:tvist1]), which is different from the conventional minimization problem; however, we can still use the iterative update idea to pursue the solution.
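a generic ista iteration matching the formulation above, with the soft-thresholding operator written out. the toy problem (random stand-in sensitivity matrix, sparse target) is an assumption for illustration; with tau = 0 the update reduces to the landweber/steepest-descent special case discussed earlier.

```python
import numpy as np

def soft(x, tau):
    # soft thresholding: sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(S, lam, tau, iters=300):
    L = np.linalg.norm(S, 2) ** 2   # lipschitz constant of the gradient of the data term
    g = np.zeros(S.shape[1])
    for _ in range(iters):
        g = soft(g + (S.T @ (lam - S @ g)) / L, tau / L)
    return g

rng = np.random.default_rng(0)
m, p = 66, 256                      # e.g. 66 independent pairs from a 12-electrode sensor
S = rng.standard_normal((m, p)) / np.sqrt(m)   # stand-in for a fem-computed jacobian
g_true = np.zeros(p); g_true[rng.choice(p, 8, replace=False)] = 1.0
g_rec = ista(S, S @ g_true, tau=0.02)
```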
instead of updating in each iteration, here we calculate and by updating them along their steepest descent directions. following the second step in ([eq:x_k+1_f1]), the residuals are calculated and projected onto the and directions, respectively: , where due to the symmetry of the matrix . as a result, according to the ist algorithm, the gradients can be updated: , where due to the requirement of a lipschitz continuous gradient, and are the iterative gradient solutions that would be obtained from if we considered only the least squares error. the next step is to optimize with , where is equal to multiplied by some constant. similar to ([eq:shrinkage]), a shrinkage operator can be used. the difference from the conventional shrinkage is that, rather than applying a soft threshold to directly, we decrease the magnitudes of in proportion to the magnitudes of after a soft threshold on , so as to reduce the total variation. the magnitude vector can be calculated element-wise by . so for the ect tv-ist, the soft thresholding process is carried out as , where is the shrinkage operator defined in ([eq:shrinkage]), and this equation is calculated element-wise. by using this new soft thresholding process we are able to eliminate small variations and meanwhile reduce the large variations in the and directions. finally, the new reconstructed image can be updated by ([eq:ls]), and the new gradients of the image are updated by multiplying by the transform matrices , which returns to the beginning of the section and completes one iteration of the algorithm. algorithm 1 sums up the total variation iterative soft thresholding algorithm. similar to weighted minimization, it is natural to incorporate the reweighting technique into the total variation constraints. the tv optimization problem for ect is then transformed to . in the following steps, ([eq:g12]) and ([eq:hat_g12]) remain the same for reweighted tv-ist in each iteration; the difference occurs in the soft thresholding step. because we want to pursue the minimal norm of the weighted tv, the thresholding process needs to be changed adaptively: , where this operation should be done element-wise. it is a weighted version of the 2d soft threshold update. by using the weighted threshold, the norm behaves more like the norm: all the non-zero entries of above the threshold are counted more equally in the weighted norm, similar to the definition of the norm, in which all non-zero entries contribute equally. finally, the weights vary depending on . specifically, the weights can be updated element by element as the second step of algorithm 3: , where is a parameter that is set slightly smaller than the expected nonzero magnitudes of . the value for can be determined from experience, but in general should be small. moreover, in practice, since the reconstructed result and its gradient evolve gradually after each iteration, we insert the weight-updating step into the tv-ist algorithm every iterations. hence the weights are updated every iterations, and the parameter provides a tradeoff between calculation speed and weight updating. meanwhile, the auxiliary vectors can also be adopted to accelerate the convergence, and the reweighted tv-ist becomes reweighted tv-fist. the faster implementation is always used here, and henceforth the reweighted tv-fist will be referred to as reweighted tv-ist for simplicity. a compact sketch of the isotropic and reweighted thresholding rules is given below.
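the two thresholding rules just described — the isotropic (magnitude-wise) shrinkage of the gradient pair and its reweighted variant — can be written compactly. the small parameter eps plays the role of the stabilizing constant discussed above; array shapes and values are illustrative.

```python
import numpy as np

def grad(img):
    # forward finite differences in the horizontal and vertical directions
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def tv_shrink(gx, gy, tau, w=None):
    # isotropic soft threshold: shrink the gradient magnitude, keep its direction.
    # with weights w (reweighted tv-ist), the threshold becomes tau * w element-wise.
    mag = np.sqrt(gx**2 + gy**2)
    thr = tau if w is None else tau * w
    scale = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale

def update_weights(gx, gy, eps=1e-3):
    # w = 1 / (|grad| + eps): large gradients get small weights, so the weighted
    # l1 penalty behaves more like an l0 count of gradient jumps
    return 1.0 / (np.sqrt(gx**2 + gy**2) + eps)

img = np.zeros((32, 32)); img[8:20, 10:22] = 1.0
gx, gy = grad(img)
w = update_weights(gx, gy)
gx, gy = tv_shrink(gx, gy, 0.05, w)
```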
before demonstrating the reweighted tv-ist algorithm, we explain two techniques that can be used to compensate for the non-linearity in our algorithm. in the ect model, the non-linear effects can be addressed in two respects: the first is the approximation of the linear model in ([eq:linearc]); the second lies in the accuracy of the sensitivity matrix. two techniques are introduced here to address these non-linear effects, respectively; however, only the first of these is implemented in the simulations, as it is computationally simpler. from ([eq:trianglec]) to ([eq:linearc]), the quadratic and higher order terms have been neglected to reduce the ect model to a linear model. this approximation causes errors due to the neglected higher order terms. to offset this bias, a fitting curve has been proposed: , where denotes the measurement at electrode when electrode is under the voltage while the other electrodes are grounded, . this setting makes sure that when tends to infinity, and that the slope is at . this approach may reduce the non-linear error by around . the non-linear sensitivity matrix can be defined as . combining ([eq:m]) and ([eq:s]) we have , which adjusts to a non-linear sensitivity matrix, where the permittivity of the area of interest is assumed to vary from to , whose values can be determined before the experiments. an adaptive sensitivity matrix model has also been proposed for use with landweber iterations; we introduce this feedback iteration into our reweighted tv-ist algorithm. the sensitivity map for an electrode pair can be calculated from the potential distribution: , where are the gradient values of the potential with electrode excited, in the row and column directions respectively; the potential value of each pixel can be computed after iterations using the finite difference method (fdm), depending on the potential values of the four surrounding pixels: , where are the location indexes. in summary, the reweighted tv-ist algorithm for non-linear ect uses the tv penalties in the cost function to pursue the optimal solution iteratively, and meanwhile makes use of the superiority of the updated reweighted norms and the auxiliary method's fast convergence. it is distinct from conventional fista for total variation minimization and is designed specifically for ect reconstruction. compared to the conventional linear tv-ist, our non-linear reweighted tv-ist has two differences. firstly, the reweighted term has been adopted in the cost function to pursue a sparser total variation in the optimization process; this should produce clearer edges between areas with different permittivities. secondly, two methods are introduced to reduce the non-linear effects; the methods add an extra step to the algorithm (step 2 in algorithm 4) to update the sensitivity matrix during the calculation. the two methods of non-linearity correction introduced in this section have similar effects: one is derived from the second order terms of the taylor series expansion, while the other represents the non-linearity from a potential distribution perspective. a heavily hedged code sketch of this update step follows.
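a heavily hedged sketch of the sensitivity-update step (step 2 of algorithm 4). the exact correction of eq. ([eq:ss]) is not recoverable from the source, so the rescaling below is only a stand-in with the right qualitative behavior: columns of the jacobian are attenuated where the current permittivity estimate is high, and the permittivity range (eps_lo, eps_hi) is assumed known beforehand, as stated above.

```python
import numpy as np

def update_sensitivity(S0, g, eps_lo=1.0, eps_hi=3.0):
    # S0: baseline jacobian for the empty (low-permittivity) sensor
    # g:  current normalized permittivity estimate in [0, 1], one entry per pixel
    rel = 1.0 + np.clip(g, 0.0, 1.0) * (eps_hi - eps_lo) / eps_lo
    return S0 / rel[None, :]   # attenuate columns where the permittivity is high

# inside the reconstruction loop (pseudo-usage):
#   S = update_sensitivity(S0, g)   # step 2 of algorithm 4 (stand-in form)
#   ... then one reweighted tv-ist update with the refreshed S ...
```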
in algorithm 4, either ([eq:ss]) or ([eq:s_ij]) can be used to compensate for the non-linearity in ([eq:c]); herein we only consider the correction obtained using ([eq:ss]), as this implementation is faster. to test the proposed algorithm, numerical simulations were performed on an ect model. the results of the reweighted tv-fist algorithm are compared with the performance of several algorithms widely used in practice, which include lbp, art (relaxed kaczmarz iteration) and sirt (relaxed cimmino iteration). all reconstructions are carried out on a standard desktop pc with an amd phenom(tm) ghz processor and gb of ram. the simulations are run in matlab 2009b, and iterations were run throughout for each of these algorithms. the ect system (normally with or electrodes here) was modeled using the comsol multiphysics software package, and the sensitivity matrix was generated from comsol for all reconstructions. as in , the normalised capacitance was used to help minimise the effect of non-linearity introduced by the wall of the sensor. firstly, we apply the various algorithms to a phantom consisting of an arc-shaped part and a circular object in a pixel image, as shown in fig. [fig:1]. in this ect system we use electrodes, which provide independent capacitance measurements. the smaller round object has an image intensity (a.u.) of and the larger object has an intensity of ; black and white in fig. [fig:1] correspond to intensities of 0 and 1, respectively. from fig. [fig:1](b) one can see that the two objects are recovered approximately by the lbp reconstruction; however, the shape is significantly smoothed and broadened compared with the true image in (a). the landweber, art and sirt methods show similar recovered results in (c-e); errors in the permittivity distribution make precise identification of the boundary of the objects challenging. fig. [fig:1](f) shows the reconstruction using reweighted tv-ist. the boundaries of both objects are clearly resolved with the correct intensity. the only significant error occurs at the wall of the system; it is likely caused by non-linearity at the wall, or by the use of the isotropic form of tv, which can introduce smoothing at sharp points in the image. in the second simulation, the ect system consists of electrodes, which provide independent capacitance measurements. the tested permittivity distribution was the `two bubbles' image, as shown in fig. [fig:wwobubble](a); it is a phantom image consisting of a circular pipe containing two circular objects in a pixel image. in the simulation, the relative permittivity of the cylindrical wall and the background was set to ; the relative permittivities of the two circular objects were and for the large and small objects, respectively. fig. [fig:wwobubble](a) is different from the normal ect permittivity distributions considered, since the background has a high permittivity while the two bubbles have low permittivity. the lbp reconstruction of this image, shown in fig. [fig:wwobubble](b), fails, as both bubbles blur into a single object. the poor reconstruction arises from the close proximity of the two bubbles and the use of a high-permittivity background. landweber iteration, fig. [fig:wwobubble](c), gives a better result, with the two bubbles resolved, but the bubbles still appear heavily smoothed. similar results were obtained for the art and sirt reconstructions.
the linear tv-ist algorithm, fig. [fig:wwobubble](d), recovers the sharp boundaries around the two bubbles. however, a high-permittivity `bridge' is seen connecting the two bubbles, and the permittivity of the smaller bubble is over-estimated. the proposed reweighted tv-ist result is shown in fig. [fig:wwobubble](g). the outline of the two bubbles is recovered fairly accurately, with only a slight tendency of the two bubbles to merge together, and the size of the two bubbles is overestimated by %. the `bridge' seen using the standard tv-ist algorithm has been eliminated. the permittivity is also recovered fairly well, with the permittivity of the large bubble found to be and the permittivity of the small bubble , which compare with the input permittivities of and , respectively. the non-linear reweighted reconstruction is shown in fig. [fig:wwobubble](h). the recovered bubble shapes are slightly more `square' than the input bubble shapes, but otherwise the outline of both bubbles is recovered well. the size of each bubble is accurate to within % of the true bubble size. the permittivities in the large and small bubbles were and , respectively, in good agreement with the true values. the reconstruction quality is sensitive to the choice of the parameters , , and , as well as the number of iterations performed. however, overall these results demonstrate that the introduction of the reweighted tv-ist algorithm, including non-linearity correction, significantly improves the quality of the reconstructed images for piecewise smooth input permittivity distributions. the reweighting approach enables the solution to approach the true -norm solution closely, while the updates to the sensitivity matrix during image reconstruction help mitigate the non-linearity effects. in this paper, a non-linear reweighted total variation algorithm for the reconstruction of images obtained from ect measurements has been proposed and analyzed. the proposed algorithm penalises the -norm of the spatial finite differences of the image (total variation) using an iterative thresholding approach. a varying weight, calculated in each iteration, is used to make sure that the result converges towards the desired -norm. in addition, the non-linearity of the governing equations was considered, and a straightforward approach to update the sensitivity matrix was introduced accordingly. the proposed algorithm was verified on two simulated permittivity distributions. it is shown that the reweighting significantly increases the quality of the reconstructed images, recovering sharper boundaries with fewer artefacts than existing algorithms including lbp, art, sirt and our previous implementation of tv-ist. the incorporation of the updated sensitivity matrix, to approximate the non-linearity of the ect sensor, further increased the accuracy of the reconstructed images, most notably in recovering quantitative permittivity values in each domain. the new algorithm promises to increase the quality of ect imaging; we anticipate even greater benefits if the algorithm can be combined with recently proposed enhanced sensing strategies. this work was partially supported by the epsrc grant reference: ep/k008218/1. the authors would like to thank t. c. chandrasekera and yi li for assisting with the comparison to existing image reconstruction algorithms. q. marashdeh, w. warsito, l.-s. fan, and f. l.
q. marashdeh, w. warsito, l.-s. fan, and f. l. teixeira, ``nonlinear forward problem solution for electrical capacitance tomography using feed-forward neural network,'' _ieee sensors journal_, vol. 6, pp. 441-449, 2006. m. b. haddadi and r. maddahian, ``a new algorithm for image reconstruction of electrical capacitance tomography based on inverse heat conduction problems,'' _ieee sensors journal_, vol. 16, pp. 1786-1794, 2016. w. yang, d. spink, t. york, and h. mccann, ``an image-reconstruction algorithm based on landweber's iteration method for electrical-capacitance tomography,'' _measurement science and technology_, vol. 10, p. 1065, 1999. l. h. peng, h. merkus, and b. scarlett, ``using regularization methods for image reconstruction of electrical capacitance tomography,'' _particle and particle systems characterisation_, pp. 96-104, 2000. t. c. chandrasekera, y. li, j. s. dennis, and d. j. holland, ``total variation image reconstruction for electrical capacitance tomography,'' in _ieee international conference on imaging systems and techniques (ist)_, jul. 2012, pp. 584-589. j. ye, h. wang, and w. yang, ``image reconstruction for electrical capacitance tomography based on sparse representation,'' _ieee transactions on instrumentation and measurement_, vol. 64, pp. 89-102, 2015. w. xu, m. wang, j. f. cai, and a. tang, ``sparse error correction from nonlinear measurements with applications in bad data detection for power networks,'' _ieee transactions on signal processing_, vol. 61, pp. 6175-6187, 2013. m. soleimani and w. r. b. lionheart, ``nonlinear image reconstruction for electrical capacitance tomography using experimental data,'' _measurement science and technology_, vol. 16, pp. 1987-1996, 2005. y. liu, j. ma, y. fan, and z. liang, ``adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction,'' _physics in medicine and biology_, vol. 57, pp. 7923-7956, 2012. m. guerquin-kern, m. haberlin, k. pruessmann, and m. unser, ``a fast wavelet-based reconstruction method for magnetic resonance imaging,'' _ieee transactions on medical imaging_, vol. 30, pp. 1649-1660, 2011. d. l. donoho and m. elad, ``optimally sparse representation in general (nonorthogonal) dictionaries via minimization,'' _proceedings of the national academy of sciences_, pp. 2197-2202, 2003. a. beck and m. teboulle, ``fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,'' _ieee transactions on image processing_, vol. 18, pp. 2419-2434, 2009. g. villares, l. begon-lours, c. margo, y. oussar, j. lucas, and s. hole, ``a non-linear model of sensitivity matrix for electrical capacitance tomography,'' in _proceedings of the electrostatics joint conference_, jun. 2010. t. c. chandrasekera, y. li, d. moody, m. schnellmann, j. s. dennis, and d. j. holland, ``measurement of bubble sizes in fluidised beds using electrical capacitance tomography,'' _chemical engineering science_, pp. 679-687, 2015.
a new iterative image reconstruction algorithm for electrical capacitance tomography (ect) is proposed, based on iterative soft thresholding of a total variation penalty and adaptive reweighted compressive sensing. this algorithm encourages sharp changes in the ect image and overcomes the disadvantage of the minimization by equipping the total variation with an adaptive weighting that depends on the reconstructed image. moreover, the non-linear effect is also partially reduced thanks to the adoption of an updated sensitivity matrix. simulation results show that the proposed algorithm recovers ect images more precisely than existing state-of-the-art algorithms and is therefore suitable for the imaging of multiphase systems in industrial or medical applications. electrical capacitance tomography (ect), iterative reconstruction, reweighted total variation, non-linear effect.
the number states are the most natural states of the quantum harmonic oscillator, and the easiest to understand and to manipulate. number states are eigenstates of the hamiltonian and, of course, are also eigenstates of the number operator , where and are the well-known creation and annihilation operators, respectively. however, for any , no matter how big, the mean field is zero; i.e., , and we know that a classical field changes sinusoidally in time at each point of space; thus, these states cannot be associated with classical fields [1, 2]. in the early sixties of the past century, glauber [3] and sudarshan [4] introduced the coherent states, and it has been shown that these states are the most classical ones. coherent states are denoted as , and one way to define them is as eigenstates of the annihilation operator; that is, . an equivalent definition is obtained by applying the glauber displacement operator to the vacuum: ; we can then see coherent states as displaced vacuum states. coherent states also have the very important property that they minimize the uncertainty relation for the two orthogonal field quadratures, with equal uncertainties in each quadrature [1, 2]. since then, other states have been introduced. in particular, squeezed states [5] have attracted a great deal of attention over the years because their properties allow one to reduce the uncertainty in either the position or the momentum, while still keeping the uncertainty principle at its minimum. because of this, they belong to a special class of states named minimum uncertainty states. once produced, for instance as electromagnetic fields in cavities, they may be monitored via two-level atoms in order to check, or measure, that such states have indeed been generated [6, 7]. based on the above properties, we can think of the eigenstates of the position as limit cases of the squeezed states. as the squeezed states are minimum uncertainty states, we can reduce to zero the uncertainty in the position, while the uncertainty in the momentum goes to infinity, so that we keep the uncertainty principle at its minimum. of course, there is also the option to reduce to zero the uncertainty in the momentum, while the position becomes completely undefined, obtaining in that way the possibility to define momentum eigenstates. in sections 1 and 2, we analyze the possibility of defining the position eigenstates as the limit of extreme squeezing of the squeezed states. in what follows, we will use a unit system such that . there are two equivalent forms to define the squeezed states. in the first one, introduced by yuen [8], squeezed states are obtained from the vacuum as , where is the so-called squeeze operator. in this view, squeezed states are created by displacing the vacuum and then squeezing it. note that when the squeeze parameter is zero, the squeezed states reduce to the coherent states. in this work, we will consider only real squeeze parameters, as that is enough for our intentions.
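before moving on to the second definition, the yuen construction is easy to probe numerically in a truncated fock basis. the python sketch below is our own illustration (the photon-number cutoff, the quadrature convention x = (a + a†)/√2, and the sign convention s(r) = exp[r(a² − a†²)/2] are assumptions of the sketch); it builds s(r)d(α)|0⟩ and checks the squeezing of the position quadrature that is derived analytically below:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                        # fock-space cutoff (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
ad = a.conj().T                               # creation operator

def displacement(alpha):
    return expm(alpha * ad - np.conj(alpha) * a)

def squeeze(r):
    # real squeeze parameter; sign chosen so that r > 0 squeezes the x quadrature
    return expm(0.5 * r * (a @ a - ad @ ad))

vac = np.zeros(N); vac[0] = 1.0
r, alpha = 0.5, 1.0
psi = squeeze(r) @ displacement(alpha) @ vac  # yuen ordering: displace, then squeeze

X = (a + ad) / np.sqrt(2.0)                   # position quadrature (our convention)
mean = np.real(psi.conj() @ X @ psi)
var = np.real(psi.conj() @ (X @ X) @ psi) - mean ** 2
print(var, np.exp(-2.0 * r) / 2.0)            # both ~0.184: vacuum variance 1/2 times e^(-2r)
```

for r = 0.5 both printed numbers agree at about 0.184, i.e. the vacuum variance 1/2 reduced by the factor e^(-2r).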
in the approach followed by caves [9], the vacuum is squeezed and the resulting state is then displaced; this means that both definitions of the squeezed states agree when the squeeze factor is the same, , and when the modified amplitude of the caves approach is given by , with and . to analyze the uncertainties in the position and in the momentum of the squeezed states, we introduce, following loudon and knight [5], the quadrature operators and , where is the position operator and the momentum operator. note that the quadrature operators are essentially the position and momentum operators; this definition just provides us with two operators that have the same dimensions. in order to show that the squeezed states really are minimum uncertainty states, we need to calculate the expectation values, in the squeezed state (1), of the quadrature operators (7) and (8) and of their squares. using (7) and (1), we get . the action of the squeeze operator on the creation and annihilation operators is obtained using hadamard's lemma [10, 11], such that . therefore, as and , it is easy to see that and that . so, we obtain for the uncertainty in the quadrature operator . proceeding in exactly the same way for the quadrature operator , we obtain . as we already said, we can then think of the position eigenstates and of the momentum eigenstates as limit cases of squeezed states. indeed, when the squeeze parameter goes to infinity, the uncertainty in the position goes to zero, and the momentum is completely undetermined. of course, when the squeeze parameter goes to minus infinity, we have the inverse situation, and we can think of defining the momentum eigenstates in that way. in the two following sections, we use the yuen and the caves definitions of the squeezed states to test this hypothesis. from equation (14) above, we can see that in the limit the uncertainty in the position vanishes, and so a position eigenstate should be obtained (from now on, we consider real ); we have written a subscript in the position eigenstate in order to emphasize that fact. following the yuen definition, we now write the squeeze operator as [12] , where, as we already said, and . we then develop the first operator (from right to left) in a power series and use the definition of the coherent states, , to obtain . as , , which means that the only term that survives from the sum is , and then , which would give an approximation for how to obtain a position eigenstate from the vacuum. however, note that the above expression does not depend on and therefore cannot be correct. we now squeeze the vacuum and afterwards displace it.
thus, in this case, we use again expression for the squeeze operator [12], where and are defined in (5) and (6), and we write the displacement operator as [12] , to obtain . as and , we cast the previous formula as . inserting the identity operator twice, written as , we get , and using hadamard's lemma [10], it is easy to prove that for any well-behaved function ; thus $$\exp[\,\cdots]\,\exp\!\left[-\frac{\nu}{2\mu}\left(\hat{a}^{\dagger}-\frac{x}{\sqrt{2}}\right)^{2}\right]|0\rangle,$$ and, after some algebra, $$\exp[\,\cdots]\,\exp\!\left[-\frac{\nu}{2\mu}\hat{a}^{\dagger 2}+\frac{x}{\sqrt{2}}\left(1+\frac{\nu}{\mu}\right)\hat{a}^{\dagger}\right]|0\rangle.$$ we now take the limit when , or , and we get an expression that gives us the position eigenstates as an operator applied to the vacuum. unlike the yuen case, expression (21), we now have an dependence, and it looks like a better candidate for the position eigenstate. in fact, in the next section, we will show that this really is an eigenstate of the position. we now try an alternative approach to the eigenstates of the position. we can write a position eigenstate simply by multiplying it by a properly chosen unit operator ; therefore the position eigenstate may be written as [13] , with ; such that may be rewritten as . the sum may be performed using the generating function of the hermite polynomials [14] to give . the above expression allows us to write the position eigenstate as an operator applied to the vacuum. note that this expression is the same as the one obtained using the caves definition of the squeezed states, formula (28). we now prove that (32) is indeed an eigenvector of the position operator; for that, we write the position operator as , thus . inserting the identity operator in the above expression as , we get . as , , and , we obtain , as we wanted to show. we can write (32) in terms of coherent states; we have , thus . with the expressions obtained, it is easy to show that the squeezed states have the form of a gaussian wave packet.
to confirm this, we use the above expression to state that . we write as , where we have just inserted the identity operator, and we use that, for any well-behaved function , , to obtain $$\exp[\,\cdots]\,\pi^{-1/4}\,\langle\alpha|\,e^{-\hat{a}^{\dagger 2}/2}\,e^{\frac{r}{2}\hat{a}^{2}-r\hat{a}^{\dagger}\hat{a}}\,|\sqrt{2}\,x\rangle.$$ as the coherent states are eigenfunctions of the annihilation operator, it is very easy to show that $\langle\alpha|e^{-\hat{a}^{\dagger 2}/2}=e^{-\alpha^{*2}/2}\langle\alpha|$, so $$\exp[\,\cdots]\,\pi^{-1/4}\,\langle\alpha|\,e^{\frac{r}{2}\hat{a}^{2}-r\hat{a}^{\dagger}\hat{a}}\,|\sqrt{2}\,x\rangle.$$ in the appendix, we disentangle the operator as $e^{\frac{r}{2}\hat{a}^{2}-r\hat{a}^{\dagger}\hat{a}}=e^{-r\hat{a}^{\dagger}\hat{a}}\,e^{\frac{1-e^{-2r}}{4}\hat{a}^{2}}$, and we get $$\exp[\,\cdots]\,\pi^{-1/4}\,\langle\alpha|\,e^{-r\hat{a}^{\dagger}\hat{a}}\,e^{\frac{1-e^{-2r}}{4}\hat{a}^{2}}\,|\sqrt{2}\,x\rangle.$$ it is very easy to see that $e^{\frac{1-e^{-2r}}{4}\hat{a}^{2}}|\sqrt{2}\,x\rangle=e^{\frac{1-e^{-2r}}{2}x^{2}}|\sqrt{2}\,x\rangle$ and that $e^{-r\hat{a}^{\dagger}\hat{a}}|\sqrt{2}\,x\rangle=e^{-(1-e^{-2r})x^{2}}|\sqrt{2}\,e^{-r}x\rangle$; thus the amplitude reduces to a scalar exponential times $\langle\alpha|\sqrt{2}\,e^{-r}x\rangle$. finally, as $\langle\alpha|\beta\rangle=\exp\!\left(-|\alpha|^{2}/2-|\beta|^{2}/2+\alpha^{*}\beta\right)$, we obtain a gaussian function of , as we wanted to show. we can now find the wave function of a coherent state as a function of the position [15]. we use equation (32), which expresses the eigenstates of the position as an operator acting on the vacuum, and get that $$\langle\beta|x\rangle=\frac{e^{-x^{2}/2}}{\pi^{1/4}}\,e^{-\frac{\beta^{*2}}{2}+\sqrt{2}\,\beta^{*}x}\,\langle\beta|0\rangle=\frac{e^{-x^{2}/2}}{\pi^{1/4}}\,e^{-\frac{\beta^{*2}}{2}-\frac{|\beta|^{2}}{2}+\sqrt{2}\,\beta^{*}x},$$ as and . the husimi q-function [16] can be calculated from (45) simply as , which after some algebra can be rewritten as $$q(\beta)=\frac{1}{\pi^{3/2}}\exp\!\left[-2\left(\operatorname{re}\beta-\frac{x}{\sqrt{2}}\right)^{2}\right].$$ in figures 1 and 2, we plot the husimi q-function for different values of . (figures 1 and 2: the husimi q-function for two values of the position and of the squeeze parameter.) we have found an operator that, applied to the vacuum, gives us the eigenstates of the position. we did that in two ways; first, using the caves definition of the squeezed states, we took the limit of extreme squeezing on the position side to get the position eigenstate. second, we used the expansion of an arbitrary wave function in the basis of the harmonic oscillator; i.e., we wrote an arbitrary wave function in terms of hermite polynomials. the expressions obtained allow us to show certain properties of the squeezed states, and also allow us to write the husimi _q_-function of the position eigenstates in a very easy way. the same procedure can be followed to find the eigenstates of the momentum, but taking the limit when the squeeze parameter goes to . we can also conclude that, from the point of view of this work, the caves approach to the squeezed states is more adequate, since it gives the correct eigenstates of the position, while the yuen definition, formula (1), gives an expression that is incorrect. so, we must first squeeze the vacuum, and afterwards displace it.
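these conclusions lend themselves to a quick numerical cross-check. the python sketch below is our own illustration (the fock-space cutoff and the convention x = (a + a†)/√2 are assumptions): it builds the truncated version of the operator in (32) applied to the vacuum and verifies that the mean of the position operator reproduces the target eigenvalue, while the variance shrinks as the cutoff grows, exactly what one expects from an approximate position eigenstate:

```python
import numpy as np
from scipy.linalg import expm

def approx_position_eigenstate(x, N):
    """truncated-fock version of exp(-ad^2/2 + sqrt(2) x ad)|0>, normalized."""
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)
    ad = a.conj().T
    vac = np.zeros(N); vac[0] = 1.0
    psi = expm(-0.5 * ad @ ad + np.sqrt(2.0) * x * ad) @ vac
    return psi / np.linalg.norm(psi), a, ad

for N in (20, 80, 320):
    psi, a, ad = approx_position_eigenstate(1.0, N)
    X = (a + ad) / np.sqrt(2.0)
    m = np.real(psi.conj() @ X @ psi)
    v = np.real(psi.conj() @ (X @ X) @ psi) - m ** 2
    print(N, m, v)        # the mean stays at 1.0 while the variance keeps shrinking
```

in the exact (infinite-dimensional) limit the variance would vanish, reflecting the non-normalizable nature of a true position eigenstate; the truncation keeps the state normalizable at the cost of a small residual width.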
in this appendix, we show how to disentangle the operator . we define and we suppose that (48) can be rewritten as $\exp\!\left[f(r)\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[g(r)\hat{a}^{2}\right]$, where and are two unknown well-behaved functions; as , with being the identity operator, these functions must satisfy the conditions . at first sight, one could think that the proposal should also contain a term of the form $\exp[h(r)\hat{a}^{\dagger 2}]$; however, since $[\hat{a}^{\dagger}\hat{a},\hat{a}^{2}]=-2\hat{a}^{2}$, the operators $\hat{a}^{2}$ and $\hat{a}^{\dagger}\hat{a}$ already form a closed algebra and no such term is needed. we differentiate with respect to , to find $$\frac{df}{dr}\hat{a}^{\dagger}\hat{a}\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[g\hat{a}^{2}\right]+\frac{dg}{dr}\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\hat{a}^{2}\exp\!\left[g\hat{a}^{2}\right],$$ where, for simplicity of notation, we have dropped all the -dependency; we write the identity operator as $\exp\!\left[-f\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]$ in the second term, to obtain $$\frac{df}{dr}\hat{a}^{\dagger}\hat{a}\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[g\hat{a}^{2}\right]+\frac{dg}{dr}\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\hat{a}^{2}\exp\!\left[-f\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\exp\!\left[g\hat{a}^{2}\right].$$ using hadamard's lemma [10, 11], it is very easy to prove that $$\exp\!\left[f\hat{a}^{\dagger}\hat{a}\right]\hat{a}^{2}\exp\!\left[-f\hat{a}^{\dagger}\hat{a}\right]=e^{-2f}\hat{a}^{2},$$ so, equating this expression to the one obtained by differentiating the original formula, equation (48), we get a system of two first-order ordinary differential equations. the solution of the first equation that satisfies the initial condition is the function $f(r)=-r$. substituting this solution into the second equation and solving it with the initial condition , we obtain $g(r)=\frac{1-e^{-2r}}{4}$. thus, finally, we write $$e^{\frac{r}{2}\hat{a}^{2}-r\hat{a}^{\dagger}\hat{a}}=e^{-r\hat{a}^{\dagger}\hat{a}}\,e^{\frac{1-e^{-2r}}{4}\hat{a}^{2}}.$$ [1] c. gerry and p. knight, _introductory quantum optics_. cambridge university press (2005). [2] j. c. garrison and r. y. chiao, _quantum optics_. oxford university press (2008). [3] r. j. glauber, _coherent and incoherent states of the radiation field_, _phys. rev._, 2766 (1963). [4] e. c. g. sudarshan, _phys. rev. lett._, 277 (1963). [5] r. loudon and p. l. knight, special issue of _j. of mod. opt._, 709 (1987). [6] m. v. satyanarayana, p. rice, r. vyas, and h. j. carmichael, _j. opt. soc. am. b_, 228 (1989). [7] h. moya-cessa and a. vidiella-barranco, _j. of mod. opt._, 2481 (1992). [8] h. p. yuen, _phys. rev. a_, 2226 (1976). [9] c. m. caves, _phys. rev. d_, 1693 (1981). [10] héctor manuel moya-cessa and francisco soto-eguibar, _introduction to quantum optics_. rinton press (2011). [11] héctor manuel moya-cessa and francisco soto-eguibar, _differential equations. an operational approach_. rinton press (2011). [12] werner vogel and dirk-gunnar welsch, _quantum optics_, third, revised and extended edition. [13] w. schleich, _quantum optics in phase space_. wiley-vch (2001). [14] g. arfken, _mathematical methods for physicists_. academic press, inc., 3rd edition (1985). [15] u. leonhardt, _measuring the quantum state of light_. cambridge university press (1997). [16] k. husimi, _proc. phys. math. soc. jpn._, 264 (1940).
the squeezed states are minimum uncertainty states, but unlike the coherent states, in which the uncertainties in the position and in the momentum are equal, they allow one to reduce the uncertainty in either the position or the momentum, while keeping the uncertainty relation at its minimum. it would seem that this property of the squeezed states allows one to obtain the position eigenstates as a limit case of them, by making the uncertainty in the position vanish while that in the momentum diverges. however, there are two equivalent ways to define the squeezed states, which lead to different expressions for the limit states. in this work, we analyze these two definitions of the squeezed states and show the advantages and disadvantages of using each of them to find the position eigenstates. with this idea in mind, but leaving aside the definitions of the squeezed states, we find an operator that, applied to the vacuum, gives us the position eigenstates. we also analyze some properties of the squeezed states, based on the new expressions obtained for the eigenstates of the position.
waveform/code design, as one of the major problems in radar signal processing, active sensing, and wireless communications, has attracted significant interest over the past several decades. in radar signal processing and active sensing applications, waveform design plays an essential role because ``excellent'' waveforms can ensure higher localization accuracy, enhanced resolution capability, and improved delay-doppler ambiguity of the potential target. moreover, designing waveforms with robustness or adaptiveness is also required for scenarios with harsh environments that include heterogeneous clutter and/or active jammers. in addition, with the advance of multiple-input multiple-output (mimo) radar, the problem of joint multiple waveform design is gaining even more importance and tends to grow to large scale. in order to obtain waveforms with desired characteristics, existing approaches usually resort to manipulating correlation properties, such as the auto- and cross-correlations between different time lags of the waveforms, which serve as the determinant factors for evaluating the quality of the designed waveforms. perfect auto- and cross-correlation properties indicate that the emitted waveforms are mutually uncorrelated with any time-delayed replica of themselves, meaning that a target located at the range bin of interest can be easily extracted after matched filtering, and the sidelobes from other range bins are unable to attenuate it. for example, in applications such as spot and barrage noise jamming suppression and synthetic aperture radar imaging, waveforms with deep notches at the time lags (or, equivalently, frequency bands) where the jamming or clutter signals are located are highly desired. on the other hand, it is preferred from the hardware perspective that the waveforms maintain the constant-modulus property, which can reduce the cost of developing advanced amplifiers. there is a number of existing waveform design methods based on consideration of the correlation properties. the integrated sidelobe level (isl), which serves as an evaluation metric for the correlation levels of waveforms, or, equivalently, the accumulated sidelobes at all time lags, is typically used. if the receiver is fixed to be the matched filter, the waveform design methods focus on the waveform quality itself. the corresponding waveform designs use the fact that the matched filter can be implemented in terms of the correlation between the waveform and its delayed replica. for example, the method of proposes designing unimodular waveforms in the frequency domain using a cyclic procedure based on iterative calculations. a surrogate objective which is minimized by a cyclic algorithm has been introduced, and the methods associated with isl and weighted isl (wisl) minimization therein have been named can and wecan, respectively. these methods have later been extended to multiple waveform design in . if the receiver is not fixed and therefore has to be jointly optimized with the transmitted waveforms, the focus typically shifts to the so-called mismatched filter (also called instrumental variable filter) design at the receiver. such designs add flexibility, as they enable consideration of constraints which are difficult to address otherwise.
the receive filter is generally mismatched because it trades off the signal-to-noise ratio in order to improve the signal-to-interference-plus-noise ratio. the corresponding design techniques are typically based on alternating optimization, where a minimum variance distortionless response (mvdr) filter design is involved. given the waveforms, finding the optimal mvdr receive filter is typically a computationally simpler problem than the waveform design itself. therefore, our focus here is the development of computationally efficient algorithms for addressing the core problem of waveform design when the optimal receive filter is the matched filter. the computational complexity of algorithms is of crucial importance for the isl and wisl minimization-based unimodular waveform design problems. indeed, the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms. however, such problems are non-convex, while classical large-scale optimization approaches are developed for convex problems with relatively simple objective functions and constraints. the isl and wisl objective functions, as well as the unimodular constraint on the desired waveforms, are in fact complex to deal with, and the required accuracy of the waveform design is high. the aforementioned can and wecan use a cyclic procedure based on iterative calculations. although large code lengths up to several thousands are allowed by can and wecan, the cost in terms of time for these algorithms can reach several hours or even days when the code length and the required number of waveforms grow large. this is a significant limitation that restricts the design of waveforms in real time. in large-scale optimization, the targeted computational complexity per iteration of an algorithm is linear in the dimension of the problem, or at most quadratic. to reduce the computational complexity to a reasonable one, many relevant works resort to the majorization-minimization (mami) technique, which is the basic technique for addressing large-scale and/or non-convex optimization problems with complex objectives. for example, have dealt with multistatic radar waveform design, where an information-theoretic criterion has been utilized, while have been concerned with single- and multiple-waveform designs. in addition to the computational complexity, another important characteristic of large-scale optimization algorithms is the convergence speed/rate. although analytic bounds on the convergence rate may be hard or impossible to derive even for some existing large-scale convex optimization algorithms, the design of algorithms with provably faster convergence speed to tolerance than that of other algorithms is possible even for the non-convex problems considered here.
in this paper, we focus on the isl and wisl minimization-based unimodular waveform designs for the matched filter receiver, aiming at developing fast algorithms that reduce the computational complexity and have faster convergence speed than the existing algorithms. the paper is based on a more detailed study of the inherent algebraic structures of the objective functions and, concerning mami, also on designing better majorization functions. the principal goal is to enable real-time waveform design even when the code length and the number of waveforms are large. although our work also employs the mami approach, it differs from the previous works in many ways. different from , we formulate the isl minimization-based unimodular waveform design problem as a non-convex quartic problem by transforming the objective into the frequency domain and rewriting it as a norm-based objective. moreover, we find and use inherent algebraic structures in the wisl expression that enable us to derive the corresponding quartic form into an alternative quartic form, which in turn allows us to apply the quartic-quadratic transformation. this equivalent form is based on an eigenvalue decomposition, which we prove to be unnecessary to compute in our corresponding algorithm. then the isl and wisl minimization problems in the form of non-convex quartic optimization are simplified into quadratic forms. this allows us to utilize the mami technique, where the majorization functions also differ from those of and . our algorithms have lower or comparable computational burden, faster convergence speed, and demonstrate better correlation properties than the existing state-of-the-art algorithms. the paper is organized as follows. in section [sec:sigmodel], the signal model and the isl and wisl minimization-based unimodular waveform design problems are presented. in section [sec:optviamm], new algorithms for the isl and wisl minimization problems are detailed. simulation results are presented in section [sec:simulation], while the paper is concluded in section [sec:conclu]. _notations_: we use bold uppercase, bold lowercase, and italic letters to denote matrices, column vectors, and scalars, respectively. notations , , and are used for the euclidean norm of a vector, the frobenius norm of a matrix, and the absolute value, respectively. similarly, , , and stand for the conjugate, transpose, and conjugate transpose operations, respectively, while , , and respectively denote the column-wise vectorization of a matrix, the largest eigenvalue of a matrix, and the maximization operation. notations and stand respectively for the floor function and the modulo operation with the first argument being the dividend, while denotes the operation of constructing a hermitian toeplitz matrix from a vector that coincides with the first column of the matrix, and is the operator that picks up the diagonal elements of a matrix and writes them into a vector (for a matrix argument), or forms a diagonal matrix with main diagonal entries picked up from a vector (for a vector argument). in addition, stands for the matrix trace, stands for the real part of a complex value, and denotes the () element of a matrix. here the column corresponds to the launched waveform. let the element of be , where is an arbitrary phase value ranging between and .
when the number of waveforms reduces to one, the waveform matrix shrinks to a column vector. the isl for the set of waveforms can be expressed as , where is the cross-correlation between the and waveforms at the time lag . the first term on the right-hand side of is associated with the auto-correlations, while the second term represents the cross-correlations of the waveforms. likewise, the wisl for the waveforms can be expressed as , where are real-valued symmetric weights, i.e., , used for controlling the sidelobe levels corresponding to different time lags. if takes a zero value, it means that the sidelobe level associated with the time lag is not considered. if all the controlling weights take the value , then in coincides with in . the basic unimodular waveform design problem is then formulated as the synthesis of unimodular and mutually orthogonal waveforms which have as good as possible auto- and cross-correlation or weighted correlation properties. using , the wisl minimization-based unimodular waveform design problem can be formally expressed as , where the constraints ensure the unimodularity of the waveforms, while the orthogonality between waveforms is guaranteed by the objective. obviously, if all the controlling weights take the value , the problem becomes the isl minimization-based unimodular waveform design problem. in this section, we develop fast algorithms for the isl and wisl minimization-based unimodular waveform designs. the algorithms make use of the mami technique and exploit inherent algebraic structures in the objective function, which allows us to reduce the computational complexity. the isl in can be rewritten in matrix form as , where is the waveform correlation matrix and is the kronecker delta function. transforming into the frequency domain and performing some derivations, the isl can be expressed as , where , with being the row of the waveform matrix, i.e., $[\,\cdots]^{\mathrm t}$, and the matrix with defined as $[\,\cdots]^{\mathrm t}$, and the vector defined as . the magnitude operation, when applied to a matrix argument, is understood element-wise, that is, the magnitude is found for each element of the matrix. (the derivation of the majorization function and the resulting closed-form update, summarized in algorithm [canmm], follows at this point.) finally, according to the mami procedure and using the closed-form solution to the majorization problem, the isl minimization-based unimodular waveform design algorithm is summarized in algorithm [canmm]. there exist accelerated schemes for mami, such as the squared iterative method (squarem) of , which can be straightforwardly applied to speed up algorithm [canmm]. the squarem scheme is an extension of the scalar steffensen-type method to vector fixed-point iterations, empowered with the idea of ``squaring''. it is an ``off-the-shelf'' acceleration method that requires nothing extra beyond the parameter updating rules of the original algorithm, except possibly a computationally cheap projection onto the feasibility set, and it is guaranteed to converge. different stopping criteria can be employed in algorithm [canmm]. for example, one can use the absolute isl difference between the current and previous iterations normalized by the initial isl, or the norm of the difference between the waveform matrices obtained at the current and previous iterations.
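to give a feel for the frequency-domain objective that algorithm [canmm] minimizes, the python sketch below attacks the single-waveform isl with plain projected gradient descent instead of the closed-form mami update (our simplification, chosen for brevity; the fixed step size is a crude heuristic and convergence is much slower than with the majorized update). it relies on the exact identity that the two-sided isl equals (1/2n) Σ_p |f_p|⁴ − n², where f is the 2n-point fft of the zero-padded code:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                      # code length (placeholder)
x = np.exp(2j * np.pi * rng.random(N))      # random unimodular starting point

def isl(x):
    f = np.fft.fft(x, 2 * N)
    # parseval on the correlation sequence: sum_k |r_k|^2 = (1/2N) sum_p |f_p|^4;
    # subtracting |r_0|^2 = N^2 leaves the (two-sided) isl
    return np.sum(np.abs(f) ** 4) / (2 * N) - N ** 2

mu = 1.0 / (8.0 * N * N)                    # fixed step size, a crude heuristic
for _ in range(5000):
    f = np.fft.fft(x, 2 * N)
    g = 2.0 * np.fft.ifft((np.abs(f) ** 2 - N) * f)[:N]  # gradient w.r.t. conj(x)
    x = np.exp(1j * np.angle(x - mu * g))   # project back onto the unit-modulus set
print(isl(x))
```

each iteration costs two length-2n ffts, i.e. roughly the same order of complexity as one mami iteration, which is consistent with the complexity analysis that follows.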
in terms of the per-iteration computational complexity of algorithm [canmm], the straightforward calculation of according to requires operations, the calculation of costs operations, while the computational burden of the matrix-to-matrix product in is operations. therefore, the total computational complexity is operations. however, and can be computed by means of the fast fourier transform (fft) at the order of complexity and , respectively. similarly, using the toeplitz structure of , the product can also be calculated at a reduced complexity, which is the highest in algorithm [canmm]. thus, the order of complexity of algorithm [canmm] is , which is nearly linear in the dimension of the problem, as required in large-scale optimization. the wisl in can be written in matrix form as , where are defined in . in the frequency domain, can be expressed as , where is defined in and is the weighted spectral density matrix. let us also define the toeplitz matrix constructed from the weights . then the matrix in can be rewritten in vector-matrix form as . substituting into , we arrive at the following wisl expression: . expanding the squared norm in the sum of yields . using the facts that the desired waveforms are orthogonal and unimodular, i.e., , and also that , we find that the second and third terms of are constant and therefore immaterial for optimization. with this observation, the wisl minimization problem can be rewritten as [eq:wislfreopt]. the hadamard product of two matrices appears under the frobenius norm in , and the resulting matrix there is complex. as a result, we cannot arrive at a proper quartic form with respect to by directly expanding the squared norm of ; instead, we need to convert it into a proper one. towards this end, let us consider the eigenvalue decomposition of , which in general may be indefinite and can be expressed as , where and are the eigenvalue and eigenvector, respectively, equals when is negative and otherwise is the same as , and is the rank of . substituting into and expanding the frobenius norm, the objective function, hereafter called , can be rewritten as . applying the property (which also holds when is replaced by ) to , together with the mixed-product property of the kronecker product, the objective function can be rewritten as , where the hermitian matrices and are defined as and . substituting into , the wisl minimization problem becomes [eq:optwecaniconstr2nd], [eq:optwecani]. the objective function takes a proper quartic form with respect to , which enables us to design an algorithm based on the mami approach. by means of the trace and vectorization operations for matrices, and similarly to the previous subsection, we can transform , denoted for brevity as , into the following form: , where has been defined before and is the matrix defined as . replacing the objective function with , the optimization problem can be rewritten as [eq:optwecanii], where takes a quadratic form to which a majorant can be applied. yet before applying the majorization procedure, we present the following result that will be used later. [lemmaii] given a set of arbitrary complex vectors and an arbitrary hermitian matrix , the following generalized inequality holds: , where . let and be respectively the sets of eigenvalues (in descending order) and corresponding eigenvectors of the matrix , i.e., .
using this expression and elementary properties of the hadamard product, the inequality can be derived as . the proof is complete. applying lemma [lemmaii] by taking , , and , we obtain the following inequality: . note that for a given matrix in , the largest eigenvalue of in , i.e., , is fixed, and it can be found that . moreover, the diagonal elements of take values either zero or . therefore, we can replace the matrix in with an identity matrix magnified by without disobeying the inequality. using with that satisfies , the objective function can be majorized by the following function: . due to the property , the first and second terms in are constant and therefore immaterial for optimization. ignoring these terms, can be majorized by the problem [eq:optwecaniii]. to further simplify , we will need the following result that relates the hadamard and kronecker products. [lemmaiii] given two matrices and of the same size and the selection matrix , the following equality holds: . under the condition that is an integer, can be decomposed as , where the matrices and are constructed in the same way as but have the reduced size , and and are respectively the column and row indices of the element in the matrix with linear (column-wise) index . the proof of appears in lemma 1 of . the remaining results are elementary properties of the selection matrix. applying lemma [lemmaiii] by taking , , and , and substituting into , the objective function, denoted for brevity as , can be rewritten by expanding the kronecker product in the prior expression for the objective. using and , applying the properties and , and then the mixed-product property of the kronecker product together with the property , the objective finally takes the compact quadratic form $$\mathbf{y}^{\mathrm{h}}\left(\mathbf{b}^{(k)}-\tfrac{\lambda_{\boldsymbol{\tilde{\phi}}}}{2}\,\mathbf{y}^{(k)}\big(\mathbf{y}^{(k)}\big)^{\mathrm{h}}\right)\mathbf{y},$$ where is an hermitian matrix composed of block matrices, with the block being a toeplitz matrix whose first row and column coincide with the vectors and , respectively. here, the () elements of and are respectively given by , where is the set of non-negative indices associated with the non-zero isl controlling weights (always including the zero index for the sake of simplicity), and is the complementary set of , with the full set defined as .
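for reference, the isl and wisl figures of merit themselves are cheap to evaluate. the python sketch below is our own utility, not the paper's code (the correlation sign/conjugation convention and the use of ordered waveform pairs are assumptions); it computes all auto- and cross-correlations of an n x m waveform matrix with 2n-point ffts and accumulates the weighted sidelobe energy:

```python
import numpy as np

def correlations(X):
    """all auto-/cross-correlations r_{ml}(k), k = -(n-1) .. n-1, of the
    columns of the n x m waveform matrix X, via 2n-point ffts."""
    N, M = X.shape
    F = np.fft.fft(X, n=2 * N, axis=0)
    R = np.empty((2 * N - 1, M, M), dtype=complex)
    for m in range(M):
        for l in range(M):
            r = np.fft.ifft(F[:, m] * np.conj(F[:, l]))
            R[:, m, l] = np.concatenate((r[-(N - 1):], r[:N]))  # lags -(n-1) .. n-1
    return R

def wisl(X, gamma):
    """weighted isl; gamma[k] is the (symmetric) weight of lag k, k = 0 .. n-1.
    setting gamma to all ones recovers the plain isl."""
    N, M = X.shape
    R = correlations(X)
    w = np.concatenate((gamma[:0:-1], gamma))     # weights for lags -(n-1) .. n-1
    total = 0.0
    for m in range(M):
        for l in range(M):
            p = w * np.abs(R[:, m, l]) ** 2
            if m == l:
                p[N - 1] = 0.0                    # drop the zero-lag autocorrelation
            total += p.sum()
    return total

X = np.exp(2j * np.pi * np.random.default_rng(3).random((64, 2)))
gamma = np.zeros(64); gamma[:20] = 1.0            # control only lags 0 .. 19, say
print(wisl(X, gamma))
```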
moreover, the proposed wislnew algorithm needs a significantly shorter time than the wislsong algorithm, as discussed above. in this paper, we have developed two new fast algorithms (one based on isl and the other based on wisl minimization) for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties. since the corresponding optimization problems are non-convex and may be large-scale, the proposed algorithms are based on the mami framework and utilize a number of newly found inherent algebraic structures in the objective functions of the corresponding optimization problems. these properties have enabled us to reduce the computational complexity of the algorithms to a level which is suitable for large-scale optimization and at least similar to or lower than that of the existing algorithms. moreover, the proposed algorithms also show faster convergence speed to tolerance and provide waveforms of better quality than those of the existing state-of-the-art algorithms. y. yang and r. s. blum, ``mimo radar waveform design based on mutual information and minimum mean-square error estimation,'' _ieee trans. aerosp. electron. syst._, vol. 43, no. 1, pp. 330-343, jan. 2007. a. de maio, s. d. nicola, y. huang, z.-q. luo, and s. zhang, ``design of phase codes for radar performance optimization with a similarity constraint,'' _ieee trans. signal process._, vol. 57, no. 2, pp. 610-621, 2009. h. hao, p. stoica, and j. li, ``designing unimodular sequence sets with good correlations including an application to mimo radar,'' _ieee trans. signal process._, vol. 57, no. 11, pp. 4391-4405, nov. 2009. a. de maio, y. huang, m. piezzo, s. zhang, and a. farina, ``design of optimized radar codes with a peak to average power ratio constraint,'' _ieee trans. signal process._, vol. 59, no. 6, pp. 2683-2697, jun. 2011. a. aubry, a. de maio, a. farina, and m. wicks, ``knowledge-aided (potentially cognitive) transmit signal and receive filter design in signal-dependent clutter,'' _ieee trans. aerosp. electron. syst._, vol. 49, no. 1, pp. 93-117, jan. 2013. q. he, r. s. blum, and a. m. haimovich, ``noncoherent mimo radar for location and velocity estimation: more antennas means better performance,'' _ieee trans. signal process._, vol. 58, no. 7, pp. 3661-3680, jul. 2010. l. zhao, j. song, p. babu, and d. p. palomar, ``a unified framework for low autocorrelation sequence design via majorization-minimization,'' _ieee trans. signal process._, vol. 65, no. 2, pp. 438-453, jan. 2017. m. m. naghsh, m. modarres-hashemi, s. shahbazpanahi, m. soltanalian, and p. stoica, ``unified optimization framework for multi-static radar code design using information-theoretic criterion,'' _ieee trans. signal process._, vol. 61, no. 21, pp. 5401-5416, nov. 2013. a. khabbazibasmenj, f. roemer, s. a. vorobyov, and m. haardt, ``sum-rate maximization in two-way af mimo relaying: polynomial time solutions to a class of dc programming problems,'' _ieee trans. signal process._, vol. 60, no. 10, pp. 5478-5493, oct. 2012. m. soltanalian, a. gharanjik, b. shankar, and b. ottersten. (2017) grab-n-pull: a max-min fractional quadratic programming framework with applications in signal processing. submitted to _ieee trans. signal process._ [online]. available: http://msol.people.uic.edu/papers/gnp_pre.pdf y. li, s. a. vorobyov, and z. he, ``design of multiple unimodular waveforms with low auto- and cross-correlations for radar via majorization-minimization,'' in _proc. 24th european signal process. conf. (eusipco)_, budapest, hungary, aug.-sep. 2016, pp. 2235-2239. y. li and s. a. vorobyov, ``efficient single/multiple unimodular waveform design with low weighted correlations,'' in _proc. ieee int. conf. acoust., speech, signal process. (icassp)_, new orleans, usa, mar. 2017.
in this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. the waveform design is based on the minimization of the integrated sidelobe level (isl) and the weighted isl (wisl) of the waveforms. as the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue turns out to be the development of fast large-scale optimization techniques. the difficulty is also that the corresponding optimization problems are non-convex, while the required accuracy is high. therefore, we formulate the isl and wisl minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. while designing our fast algorithms, we find and use inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of wisl minimization, to derive additionally an alternative quartic form which allows us to apply the quartic-quadratic transformation. our algorithms are applicable to large-scale unimodular waveform design problems, as they are shown to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. in addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts. correlation, majorization-minimization, mimo radar, waveform design.
solar and stellar oscillations are a powerful tool to probe the interior of stars. in this paper we classify stellar oscillations as either solar-like or deterministic. solar-like oscillations are stochastically excited by turbulent convection and are present in the sun and other main-sequence, subgiant, and giant stars (see, _e.g._, and references therein). deterministic oscillations are seen in classical pulsators and have mode lifetimes much longer than any typical observational run; one of the best studied objects in this class is the pre-white dwarf pg1159, also known as gw vir. in practice, observations of solar-like or deterministic pulsations always have an additional stochastic component due to instrumental, atmospheric, stellar, or photon noise. an important aspect of helio- and asteroseismology is the determination of the parameters of the global modes of oscillation, especially the mode frequencies. in the case of the sun, it is known that the measurement precision is limited by the stochastic nature of the oscillations (realization noise). and have shown that realization noise is expected to scale like , where is the total duration of the observation. a common practice is to extract the solar mode parameters from the power spectrum using maximum likelihood estimation (mle, see _e.g._ ; ; ; ; ). in its current form, however, this method of analysis is only valid for uninterrupted time series. this is a significant limitation because gaps in the data are not uncommon (daily cycle, bad weather, technical problems). the gaps complicate the analysis in fourier space: the convolution of the data with the observation window leads to correlations between the different fourier components. the goal of this paper is to extend the fourier analysis of solar and stellar oscillations to time series with gaps, using appropriate maximum likelihood estimators based on the correct statistics of the data. section [section:problem] poses the problem of the analysis of gapped time series in fourier space. in section [section:pdf_correlation] we derive an expression for the joint probability density function (pdf) of the observations, taking into account the frequency correlations. our answer is consistent with an earlier (independent) derivation by . based on this pdf, we derive maximum likelihood estimators in section [section:likelihood_estimation]. in section [section:fitting_nocorr] we recall the ``old method'' of maximum likelihood estimation, based on the unjustified assumption that frequency bins are statistically independent. section [section:test_setup] explains the setup of the monte-carlo simulations used to test the fitting methods on artificial data sets. in section [section:fitting_results] we present the results of the monte-carlo simulations and compare the new and old fitting methods. for the sake of simplicity, we consider only one mode of oscillation at a time (solar-like or sinusoidal). we present several cases for which our new fitting method leads to a significant improvement in the determination of oscillation parameters, in particular the mode frequency. let us denote by the time series that we wish to analyse. it is sampled at times , where is an integer in the range and one minute is the sampling time. all quantities with a tilde are defined in the time domain. the total duration of the time series is .
by choice, all of the missing data points were assigned the value zero: this enables us to work on a regularly sampled time grid. formally, we write , where is the uninterrupted time series that we _would_ have observed if there had been no gaps, and is the window function, defined to be if an observation is recorded at time and otherwise. the uninterrupted series is drawn from a random process whose statistical properties will be discussed later. we define the discrete fourier transform of by , where is the frequency and . note that and , where the star denotes the complex conjugate. the fourier transform has periodicity , or twice the nyquist frequency. our intention is not to fit the complete fourier spectrum, but rather a small interval that contains one (or a few) modes of stellar oscillation. thus, we extract a section of the data of length starting from a particular frequency , as shown in figure [fig:convolution_scheme](c). this subset of the data is represented by the vector of length , defined by and . we can rewrite as , where is the diagonal matrix . we emphasize that, although the are uncorrelated random variables, the are correlated because of the multiplication of by the window matrix [equation ([equ:fourier_signal])].
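the correlations induced by the window are easy to exhibit numerically. in the python sketch below (an illustration of the statement above, not the authors' code; the toy window, sample size and dft normalization are our assumptions), the empirical covariance of the fourier coefficients of windowed white noise matches the dft of the window itself, so two bins are correlated whenever the window spectrum is non-zero at their frequency separation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 480                                    # number of samples (placeholder)
t = np.arange(N)
w = ((t % 48) < 32).astype(float)          # toy periodic window, 2/3 duty cycle

trials = 5000
Y = np.empty((trials, N), dtype=complex)
for k in range(trials):
    Y[k] = np.fft.fft(w * rng.standard_normal(N))

C = Y.conj().T @ Y / trials                # empirical covariance, C[i, j] ~ E[Y_i* Y_j]
wf = np.fft.fft(w)                         # for a 0/1 window w**2 = w, so theory says
i, j = 100, 103                            # E[Y_i* Y_j] = dft(w) evaluated at bin j - i
print(C[i, j], wf[(j - i) % N])            # the two numbers agree up to sampling noise
```

with an uninterrupted window (all ones), the window dft is a single spike at zero lag and the off-diagonal covariances vanish, recovering the usual independent-bin picture.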
before discussing the implementation of the method in section [ section : likelihood_estimation ] , we should like to draw attention to a parallel between fitting data with temporal gaps and fitting data with spatial gaps .in order to understand this analogy , we refer the reader to the work of ( 1998 , section 3.3.4 ) who discuss how to interpret the spatial leaks of non - radial oscillations that arise from the fact that only half of the solar disk can be observed from earth .their approach is similar to the one developed in this paper .let us assume that the stellar oscillation model that we are trying to fit to the data depends on a set of parameters .these parameters may be the amplitude , the phase , the frequency , the line asymmetry , the noise level , _ etc_. the basic idea of maximum likelihood estimation is to pick the estimate that maximizes the likelihood function . the likelihood function is another name for the joint pdf [ equation ( [ equ : pdf ] ) ] evaluated for the sample data . in practice ,one minimizes rather than maximizing the likelihood function itself . in the above expression , the quantities and all depend implicitly on the model parameters through the covariance matrix .the vector also depends on the model parameters in the case deterministic oscillations .the probability of observing the sample data is greatest if the unknown parameters are equal to their maximum likelihood estimates : the method of maximum likelihood has many good properties . in particular , in the limit of a large sample size ( large ) , the maximum likelihood estimator is unbiased and has minimum variance .what is particularly new about our work is the minimization of the likelihood function given by equation ( [ equ : likelihood ] ) .we use the direction set method , or powell s algorithm , to solve the minimization problem with a computer . in practice ,the result of the fit depends on the initial guess and the fractional tolerance of the minimisation procedure ( the relative decrease of in one iteration ) .the dependence of the fitted parameters on the initial guess is due to the fact that the function may have local minima in addition to the global minimum .we will address this issue in more detail in section [ section : fitting_results ] . in the case of solar - like oscillations, there is no deterministic component and the log - likelihood becomes if background white noise is the only stochastic component then the log - likelihood function becomes splitting the unknowns into the parameters describing the oscillations , and the noise level , the minimization problem reduces to finding the most likely estimates where .the noise level is explicitly given by \| .\ ] ]maximum likelihood estimation has been used in the past to infer solar and stellar oscillation parameters , even in the case of gapped time series .the joint pdf of the observations was assumed to be the product of the pdfs of the individual , as if the frequency bins were uncorrelated . for comparison purposes, we briefly review this ( unjustified ) approach . 
according to equation ( [ equ : fourier_signal_2 ] ), the pdf of each observed fourier coefficient is a normal distribution with the correct mean and variance. under the (wrong) assumption that the coefficients are independent random variables, the joint pdf of the observed vector becomes a simple product, where the superscript "nc" stands for "no correlation". this joint pdf uses the correct mean and variance of the data, but it ignores all the non-vanishing cross-terms of the covariance matrix. thus, in the case of purely solar-like oscillations, we recover the standard expression for the likelihood of the power spectrum. while that expression is perfectly valid for uninterrupted data, it is not justified when gaps are present. the parameters that minimize it are not optimal, as will be shown later using monte-carlo simulations. when background white noise is the only stochastic component, the "no-correlation" log-likelihood function simplifies, and the minimization problem again reduces to fitting the oscillation parameters, with the noise level given explicitly in terms of the residual power.

so far we have considered a general signal which includes a deterministic component and a stochastic component. the parametrisation of each component depends on prior knowledge about the physics of the stellar oscillations. solar-like pulsations are stochastic in nature and no deterministic component is needed in this case. on the other hand, long-lived stellar pulsations are treated as deterministic. some stars may support both deterministic and stochastic oscillations. in this section, we model the two cases separately. we want to test the fitting method [equations ( [ equ : likelihood ] ) and ( [ equ : likelihood-estimator ] )] by applying it to simulated time series with gaps. for comparison, we also want to apply the old fitting method (section [ section : fitting_nocorr ]) to the same time series. we need to generate many realizations of the same random process in order to test the estimators for bias and precision: this is called monte-carlo simulation. in section [ section : window_function ] we discuss the generation of the synthetic window functions. we then discuss the parametrisation of the solar-like oscillations (section [ section : model_solar_like ]) and the deterministic oscillations (section [ section : model_deterministic ]) used to simulate the unconvolved signal.

[figure [ figure : windowfunction ]: power spectra of the window functions used in this paper. from top to bottom, the duty cycle is (a) 100%, (b) 66%, (c) 30%, and (d) 15%. the main periodicity of the window is 24 hours for cases (b) and (c), and 48 hours for window (d). all windows are truncated at the cutoff frequency.]

we generate three different observation windows, corresponding to different duty cycles. the observation windows are first constructed in the time domain. by definition, the window is set to one if an observation is available and zero otherwise. the total length of all time series is fixed at the same number of days (which sets the frequency resolution). a window function is characterized by two main properties: the duty cycle (fraction of ones) and the average periodicity. a typical window function for a single ground-based site has a 24-hour periodicity. in order to deviate slightly from purely periodic window functions we introduce some randomness in the end time of each observation block. figure [ figure : windowfunction ](b)-(d) shows the power spectra of the three window functions. the first and second window functions have a main periodicity of 24 hours and duty cycles of 66% and 30% respectively. two side lobes occur at the frequencies set by the 24-hour periodicity.
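a sketch of such a randomized window generator follows; the jitter model and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# sketch: a ground-based observation window on a regular grid, with the
# end time of each daily observing block jittered so that the window is
# not purely periodic.

def make_window(n_samples, dt_hours, period_hours=24.0, duty=0.3,
                jitter_hours=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) * dt_hours
    cycle = (t // period_hours).astype(int)         # which observing cycle
    block_end = duty * period_hours + rng.normal(
        0.0, jitter_hours, cycle.max() + 1)         # jittered block ends
    return ((t % period_hours) < block_end[cycle]).astype(float)

w = make_window(n_samples=30 * 24 * 60, dt_hours=1.0 / 60.0)
print(w.mean())   # realized duty cycle, close to the nominal 0.3
```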
the non - vanishing power between the side lobes is due to the deviation from a purely periodic window .the third window function has a main periodicity of 48 hours and a duty cycle of only 15% .all of these window functions are not unrealistic .we apply a sharp low - pass filter at frequency ( ) to all window functions .the power at higher frequencies corresponds to about 5% of the total power in the windows .this truncation is needed to apply the fitting algorithm , which assumes that there exists a frequency beyond which the power in the window vanishes , _i.e. _ that the window function is band limited .we generate the realizations of the unconvolved solar - like oscillation signal directly in the fourier domain .we consider a purely stochastic signal ( ) and a single mode of oscillation .since we assumed stationarity in the time domain , the fourier spectrum of the unconvolved signal for one single mode can be written as ^{1/2 } \eta_i , \quad i=0,1,\dots , m+2p-1 , \ ] ] where describes the line profile of the mode in the power spectrum , is the mode s maximum power , is the variance of the background noise , and is a centered complex gaussian random variable with unit variance and independent real and imaginary parts .solar - like oscillations are stochastically excited and intrinsically damped by turbulent convection .the expectation value of the power spectrum is nearly lorentzian , except for some line asymmetry ( _ e.g. _ , ) . herewe use a simple asymmetric line profile : where is the resonant frequency , is the asymmetry parameter of the line profile ( ) , and is a measure of the width of the line profile .we refer to as the signal - to - noise ratio in the power spectrum .as tends to zero , becomes the full width at half maximum ( fwhm ) of the power spectrum and the mode lifetime .there are five model parameters , .once the unconvolved signal has been generated in the fourier domain , the observed signal is obtained by multiplication with the window matrix , as explained above . in the time domain , we consider a purely sinusoidal function on top of white background noise : the first term describes the deterministic component of the signal , where is the amplitude , the mode frequency , and the phase of the mode .the second term is stochastic noise with standard deviation .the are normally distributed independent real random variables with zero mean and unit variance . the observed signal is obtained by multiplying by the window in the time domain .the model parameters are .we have defined the signal and the noise in the time domain , but a definition of signal - to - noise ratio in the fourier domain is desirable . on the one hand ,the variance of the noise in the fourier domain is where is the total power in the window . on the other hand ,the maximum power of the signal in fourier space is , where is the power of the window at zero frequency .thus , by analogy with the solar - like case , it makes sense to define the signal to noise ratio in the fourier domain as in practice we fix and and deduce the corresponding noise level .several hundreds of realizations are needed in order to assess the quality of a fitting method .we do not intend to test all possible combinations of mode parameters but we want to show a few cases for which the new fitting method provides a significant improvement compared to the old fitting method .figure [ fig : realization_fit_stoch ] shows one realization of a simulated mode of solar - like oscillation with input parameters , , , , and . 
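such a fourier-domain realization can be generated in a few lines. the stripped line-profile formula is illustrated here with the asymmetric lorentzian of nigam and kosovichev, which matches the parameters listed in the text (resonant frequency, asymmetry, width); that specific functional form is our assumption.

```python
import numpy as np

# sketch: one realization of a single stochastic mode generated directly
# in the fourier domain, F_i = sqrt(P * L(nu_i) + sigma^2) * eta_i, with
# eta_i a centered unit-variance complex gaussian.

def line_profile(nu, nu0, gamma, b):
    x = 2.0 * (nu - nu0) / gamma
    return ((1.0 + b * x) ** 2 + b ** 2) / (1.0 + x ** 2)

def realization(nu, nu0, gamma, b, power, noise_var, seed=None):
    rng = np.random.default_rng(seed)
    eta = (rng.standard_normal(nu.size)
           + 1j * rng.standard_normal(nu.size)) / np.sqrt(2.0)
    return np.sqrt(power * line_profile(nu, nu0, gamma, b) + noise_var) * eta
```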
the signal to noise ratio is and the window function is 30% full ( see figure [ figure : windowfunction](c ) ) . the mode lifetime is hours . figure [ fig : realization_fit_stoch](a ) displays the real and imaginary parts of the fourier transform , together with the standard deviation of the data ( nc fit in blue , new fit in red , expectation value in green ) .figure [ fig : realization_fit_stoch](b ) shows the power spectrum and the fits .notice the sidelobes introduced by the convolution of the signal with the window functions .the `` no - correlation '' fit is done on the power spectrum [ equation ( [ eq.nc ] ) ] , while the new fit is performed in complex fourier space [ equation ( [ equ : likelihood ] ) ] .hz , linewidth , and .the window function is 30% full .panel ( a ) shows the real and imaginary parts of the fourier spectrum .panel ( b ) shows the power spectrum .the vertical dashed lines represent the width of the window function .also shown are the new fit ( red ) , the old fit ( blue ) , and the expectation value ( green ) . ] each fit shown in figure [ fig : realization_fit_stoch ] corresponds in fact to the best fit out of five fits with different initial guesses . for each realization, we use the frequency guesses for .the last two frequency guesses correspond to the frequencies of the two main sidelobes of the window function ( figure [ figure : windowfunction]c ) . for the other parameters , we choose random guesses within % of the input values .the reason for using several guesses is to ensure that the fit converges to the global maximum of the likelihood , not to a nearby local maximum , _i.e. _ that the estimates returned by the code are the mle estimates defined by equation ( [ equ : likelihood - estimator ] ) . in some cases , the global maximum coincides with a sidelobe at from the main peak .we note that the new fitting method requires a much longer computing time than the old nc method : typically , three hours on a single cpu core for a single realization ( five guesses , five fits ) . for the particular realization of figure [ fig : realization_fit_stoch ] ,the new fit is closer to the expectation value ( _ i.e. _ is closer to the answer ) than the old nc fit .no conclusions should be drawn , however , from looking at a single realization . in order to test the reliability of each fitting method, we computed a total of 750 realizations with the same input parameters as in figure [ fig : realization_fit_stoch ] and the same window function ( 30% full ) .the quality ( bias and precision ) of the estimators can be studied from the distributions of the inferred parameters . 
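the multi-start fitting loop just described can be sketched as a thin wrapper around scipy's powell implementation; the tolerances and the toy objective below are placeholders, not values from the text.

```python
import numpy as np
from scipy.optimize import minimize

# sketch: minimize the negative log-likelihood with powell's direction
# set method, restarting from several initial guesses and keeping the
# deepest minimum found.

def fit_mle(neg_log_like, guesses, xtol=1e-6, ftol=1e-8):
    best = None
    for x0 in guesses:
        res = minimize(neg_log_like, x0, method="Powell",
                       options={"xtol": xtol, "ftol": ftol})
        if best is None or res.fun < best.fun:
            best = res
    return best

# toy usage: the minimum of a simple convex function is recovered from
# either starting point.
demo = fit_mle(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
               guesses=[np.zeros(2), 5.0 * np.ones(2)])
print(demo.x)   # ~ [3, -1]
```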
as shown by the distributions of figure [ fig : distr_stoch_sn30_wa ], the new fitting method is superior to the old nc method. this is true for all the parameters, in particular the mode frequency. the distributions for the mode frequency (figure [ fig : distr_stoch_sn30_wa ](a)) are quite symmetric and gaussian-like, although the old fitting method leads to a significant excess of values beyond the two-sigma mark. we note that, in general, the old fitting method is more sensitive to the initial frequency guess. also the estimates of the linewidth and the mode power are significantly more biased with the old fitting method than with the new one (figures [ fig : distr_stoch_sn30_wa ]b, [ fig : distr_stoch_sn30_wa ]c). it is worth noting that the fits return a number of very small or very large estimates away from the main peaks of the distributions, less so for the new fits. these values correspond to instances when the signal barely comes out of the noise background. the new fit returns the noise level with a higher precision and a lower number of underestimated outliers than the old method (the outliers are represented by the vertical bars in figure [ fig : distr_stoch_sn30_wa ](d)). although the estimation of the asymmetry parameter is unbiased with the new fitting method (figure [ fig : distr_stoch_sn30_wa ](e)), the uncertainty on it is so large that it probably could have been ignored in the model.

[figure [ fig : distr_stoch_sn30_wa ]: distributions of the inferred (a) mode frequency, (b) linewidth, (c) mode power, (d) noise level, and (e) asymmetry parameter. the black lines show the results obtained with the new fitting method and the grey lines show the old "no-correlation" fits. the vertical dashed line in each plot indicates the input value. the horizontal lines in panel (a) are intervals containing 68% of the fits for the new (black line) and the old (grey line) fitting methods. the thick black and grey vertical lines in panel (d) give the numbers of outliers beyond the plotted range.]

quantitative estimates of the mean and the dispersion of the estimators are provided in table 1. because the distributions of the estimated parameters are not always gaussian and may contain several outliers, we compute the median (instead of the mean) and the lower and upper bounds corresponding to 34% of the points on each side of the median (instead of the one-sigma dispersion). this definition has the advantage of being robust with respect to the outliers. the notation in the first row of table [ table : freq_uncertainty ] gives the median mode frequency together with the interval containing 68% of the fits. the numbers from the last two columns in table 1 confirm the analysis of figure [ fig : distr_stoch_sn30_wa ]. the mode frequency can be measured with a precision that is exactly a factor of two better with the new fitting method than with the old one. this gain in precision is very significant and potentially important. since measurement uncertainty scales like $T^{-1/2}$, one may equate the gain in using the proper fitting procedure to an effective increase in the total length of the time series by a factor of four. as seen in table 1, the linewidth, the mode power, the background noise, and the line asymmetry parameter are all less biased and more precise with the new fitting method than with the old one. notice that the larger dispersions in the old-fit case are due in part to non-gaussian distributions with extended tails.
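the robust summary used in table 1 amounts to a few lines of code (a sketch; the 68% convention follows the text):

```python
import numpy as np

# median and the bounds leaving 34% of the fits on each side of it,
# robust against the outliers discussed above.

def robust_summary(estimates):
    med = np.median(estimates)
    lo, hi = np.percentile(estimates, [16.0, 84.0])
    return med, med - lo, hi - med
```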
[figure [ fig : distribution_window ]: distributions of the inferred mode frequencies (left) and linewidths (right) for the four duty cycles; panels (a) and (b) are for the old fitting method, panels (c) and (d) for the new one.]

table 2: properties of the observation windows and the corresponding one-sigma uncertainties on the mode frequency for the two fitting methods.

duty cycle | main period | average gap | new fitting | old fitting
-----------|-------------|-------------|-------------|------------
100%       |             |             |             |
66%        | 24 hours    | 7.4 hours   |             |
30%        | 24 hours    | 16.4 hours  |             |
15%        | 48 hours    | 40.7 hours  |             |

[figure [ fig : sigma_nu0 ]: one-sigma uncertainty on the mode frequency as a function of the window duty cycle. the window functions are as defined in section [ section : window_function ]. the red curve shows the one-sigma monte-carlo mle uncertainties for the new fitting method; the black curve shows those for the old no-correlation fitting method. the blue curves show the mean cramér-rao lower bounds (formal error bars). the square symbol with a cross in the left panel is a rough estimate (see text). in the left panel the input linewidth is the same as in table 1; in the right panel the input linewidth corresponds to a shorter mode lifetime, all other parameters being the same. in both panels the signal-to-noise ratio is the same as before. for reference, the dashed lines have a constant slope.]

here we study how bias and precision change as the window function changes, in particular as the duty cycle changes. we consider the four window functions defined in section [ section : window_function ] with duty cycles equal to 15%, 30%, 66%, and 100%. first we consider input parameters of solar-like oscillations that are exactly the same as in the previous section. figure [ fig : distribution_window ] shows the distributions of the inferred mode frequencies and linewidths, using the old (figures [ fig : distribution_window ]a, [ fig : distribution_window ]b) and the new (figures [ fig : distribution_window ](c), [ fig : distribution_window ](d)) fitting methods. each fit is the best fit from five different guesses (see section [ sec : results_solar_like ]). the distributions for the 100% window are identical for the two fitting methods; this is expected, since the old and new fitting methods are equivalent in the absence of gaps. the precision obtained with the old "no-correlation" mle drops fast as the duty cycle decreases (figure [ fig : distribution_window ](a)). this drop is much faster than in the case of the fits that take the frequency correlations into account (figure [ fig : distribution_window ](c)). when the duty cycle is 15%, the frequency estimate is five times better with the new method than with the old one. the difference is perhaps even more obvious for the linewidth. for the 15% window, it is almost impossible to retrieve the linewidth with the old fitting method (figure [ fig : distribution_window ]b), while the new method gives estimates that are almost as precise as in the no-gap case (figure [ fig : distribution_window ]d). the estimates of the linewidth are also significantly less biased with the new method. figure [ fig : distribution_window ] confirms the importance of using the correct expression for the likelihood function. table [ table : freq_uncertainty2 ] gives the medians and half-widths of the distributions. the one-sigma dispersions are plotted as a function of the duty cycle in the left panel of figure [ fig : sigma_nu0 ]. the improvement in the fits is quite spectacular. for example, for the 15% window the dispersion on the mode frequency is five times smaller with the new fitting method. with the old method, the uncertainty on the mode frequency increases very steeply as the duty cycle drops between the 30% and 15% windows. this steep dependence on the duty cycle is worse than "predicted" by libbrecht.
in his paper, libbrecht suggested to use the uncertainty
$$\sigma_{\nu}^{2} = f(\beta)\,\frac{\Gamma}{4\pi T}, \qquad f(\beta) = (1+\beta)^{1/2}\left[(1+\beta)^{1/2}+\beta^{1/2}\right]^{3},$$
where $\beta$ is an "effective" noise-to-signal ratio. he suggested that the main effect of the gaps is to increase the noise-to-signal ratio, presumably by a factor itself proportional to the inverse of the duty cycle. this leads, however, to a dependence of $\sigma_{\nu}$ on the duty cycle which, in our particular case, is weaker than the one we measure with the no-correlation fits. we suspect that the libbrecht formula underestimates the dispersion because it ignores the frequency correlations. the new fitting method returns a frequency uncertainty that is much less sensitive to the duty cycle (red curve, left panel of figure [ fig : sigma_nu0 ]). this is quite remarkable. that the frequency uncertainty could remain nearly constant at the higher duty cycles is not really surprising, since there the average gap (see numbers in table 2) is less than the mode lifetime of several hours. this regime was studied before using a gap-filling method: as long as the signal-to-noise ratio is large enough, the signal can be reconstructed. why the new fit is doing such a good job at low duty cycles is, however, puzzling (at first sight), since there the average gap is larger than the mode lifetime. this can be understood as follows. for small duty cycles, the time series is effectively a collection of nearly independent blocks of data, which, for the 30% window function, are eight hours long on average. since mle simulations tell us the uncertainty on the mode frequency for an uninterrupted series of eight hours, dividing it by the square root of the number of such blocks in the gapped time series (with its 24-hour periodicity) gives the uncertainty we would expect to reach. this value, represented by the box with a cross in figure 6, is found to be very close to the mle estimate from the new fits. hence, what matters at very low duty cycle is the number of independent blocks of continuous data. the new fitting method captures this very well, which is satisfying. by comparison, the old no-correlation fitting method does poorly (black line). in order to further investigate this last point, we ran another set of simulations using a mode linewidth corresponding to a mode lifetime significantly smaller than the average gap lengths of the 30% and the 15% windows. the other input parameters remained the same as above. we computed and fitted 1350 realizations. the results are shown in the right panel of figure 6. for the new fitting method, the dependence of the frequency uncertainty on the duty cycle is comparable to that of the previous simulations. we conclude that it is really worth solving the correct minimization problem, and that fitting for the phase information in complex fourier space is important to get a good match between the model and the data. of course, this can only be done properly when we have perfect knowledge of the model, which is the case with these numerical simulations but is rarely the case with real observations.

[figure 7: distributions of the cramér-rao formal errors on the mode frequency; the left panel corresponds to the larger input linewidth (see figure 5c) and the right panel to the shorter mode lifetime. the different curves correspond to different window functions, as indicated in the legend. the means of these distributions (cramér-rao lower bounds) give the blue curves plotted in figure [ fig : sigma_nu0 ].]

monte-carlo simulations are very useful in order to assess the variance and the bias of a particular estimator. when fitting real observations, however, the variance of the estimator cannot be computed directly by monte-carlo simulation, since the input parameters are, by definition, not known. fortunately, the fit can return a formal error from the shape of the likelihood function in the neighborhood of the global maximum. the cramér-rao lower bound achieves minimum variance among unbiased estimators. it is obtained by expanding the negative log-likelihood about its minimum: the formal error on the $i$-th parameter is given by the square root of the $i$-th diagonal element of the inverse of the hessian matrix of the negative log-likelihood. the cramér-rao formal errors have been used in helioseismology by several authors. we have computed the formal error on the mode frequency for many realizations and for all window functions. the resulting distributions are shown in figure 7. the mean formal error from each distribution is plotted in figure 6. overall the cramér-rao lower bound is remarkably close to the monte-carlo mle uncertainty using the new fitting method; they are even indistinguishable at the highest duty cycles. this is useful information, as it means that, on average, the hessian method provides reasonable error estimates. it should be clear, however, that the distributions shown in figure 7 display a significant amount of scatter: the formal error from the hessian may be misleading for particular realizations.

[figure [ fig : realization_fit_determin ]: fourier spectrum of a simulated sinusoidal oscillation. the observation window has a duty cycle of 30%. the simulated data is the thick grey line; the thin black line shows the fit to the data using the new fitting method. the fit with the old method is not shown since it is almost identical.]

figure [ fig : realization_fit_determin ] shows the fourier spectrum of a simulated time series containing a sinusoidal mode of oscillation on top of a white noise background, as described in section [ section : model_deterministic ]. in this particular case the observation window with a duty cycle of 30% is used (see figure [ figure : windowfunction ](c)). the input parameters of the sinusoidal function are the mode frequency, the amplitude, and the phase; the signal-to-noise ratio is fixed. the fit shown in figure [ fig : realization_fit_determin ] was obtained with the new fitting method. since we found no significant difference between the old and the new fitting methods in this case, the old fitting method is not shown. differences between the data and the fit are essentially due to the noise.

[figure [ fig : distribution_deterministic ]: distributions of the inferred parameters for the sinusoidal oscillation. the window function with a duty cycle of 30% is used. the black and the grey lines are for the new and old fitting methods respectively. the vertical dashed line in each plot indicates the input value. the parameters shown are (a) the mode frequency, (b) the logarithm of the mode amplitude, (c) the phase of the oscillation, and (d) the logarithm of the noise level (see section 6.3). notice that the estimate of the noise is biased when frequency correlations are ignored (old nc fit), although by a very small amount.]
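returning to the cramér-rao formal errors introduced just above, the hessian recipe can be sketched numerically as follows; the central-difference scheme and the user-supplied step sizes are our assumptions.

```python
import numpy as np

# sketch: formal errors from the curvature of L = -ln(likelihood) at the
# best fit. the hessian is built with central differences and inverted;
# the square roots of its inverse's diagonal are the formal errors.

def formal_errors(neg_log_like, p_best, steps):
    n = len(p_best)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            f = 0.0
            for si in (+1.0, -1.0):
                for sj in (+1.0, -1.0):
                    p = np.array(p_best, dtype=float)
                    p[i] += si * steps[i]
                    p[j] += sj * steps[j]
                    f += si * sj * neg_log_like(p)
            H[i, j] = f / (4.0 * steps[i] * steps[j])
    return np.sqrt(np.diag(np.linalg.inv(H)))
```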
we computed 500 realizations of sinusoidal oscillations with the same mode parameters (frequency, amplitude, phase) as above and the same observation window (30% full), but with a fixed signal-to-noise ratio. the resulting distributions of the inferred parameters obtained with the two fitting methods are shown in figure [ fig : distribution_deterministic ]. for this simulation, the known input values were used as the initial guess to speed up the minimization; we checked on several realizations that it is acceptable to do so when the signal-to-noise ratio is large. the distributions of the inferred parameters (figure [ fig : distribution_deterministic ]) show that, for sinusoidal oscillations, the new fitting method does not provide any significant improvement compared to the old fitting method. we emphasize that the fitting parameters can be determined with a very high precision when the noise level is small. in particular, we confirm that the uncertainty of the frequency estimator can be much smaller than the rayleigh resolution $1/T$ (see figure [ fig : distribution_deterministic ](a)). figure [ fig : fsn_det ] shows the median and the standard deviation of the mode frequency for different signal-to-noise ratios. each symbol and its error bar in figure [ fig : fsn_det ] is based on the computation of 500 realizations of sinusoidal oscillations with the same mode parameters as above and the same observation window (30% full), but various signal-to-noise ratios. since we did not find any significant difference between the two fitting methods, only the results obtained with the new fitting method are shown. figure [ fig : fsn_det ] illustrates that even for a relatively low signal-to-noise ratio, the standard deviation of the inferred mode frequency is smaller than the rayleigh resolution by a factor of four. for higher signal-to-noise ratios the precision is even more impressive: at the highest ratio shown, the standard deviation of the mode frequency is about 20 times smaller than $1/T$. the theoretical value of the standard deviation of the mode frequency obtained by cuypers can be extended to the case of gapped data (cuypers, 2008, private communication) as follows:
$$\sigma_{\nu} = \frac{\sqrt{6}}{\pi}\,\frac{\sigma}{A\,\sqrt{N}\,T}, \label{equ:freq_error_determin}$$
where $A$ is the amplitude of the sinusoid in the time domain, $\sigma$ is the rms value of the noise, $N$ is the number of recorded data points, and $T$ is the total observation length. this theoretical uncertainty is overplotted in figure [ fig : fsn_det ]. the match with our monte-carlo measurements is excellent. this confirms that, in this case, it is equivalent to perform the fits in the temporal and in the fourier domains. note that equation ( [ equ : freq_error_determin ] ) is only valid under the assumption that the noise is uncorrelated in the time domain, a condition fulfilled by our simulations. the main reason why the measurement precision is only limited by the noise-to-signal ratio is that perfect knowledge of the model is assumed.
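equation ( [ equ : freq_error_determin ] ) as written above (we adopted the classical cuypers form, an assumption consistent with the quantities listed in the text) is one line of code:

```python
import math

# sketch: theoretical frequency uncertainty of a sinusoid in white noise,
# with n_points the number of recorded samples and t_total the total
# observation length.

def sigma_nu(amplitude, noise_rms, n_points, t_total):
    return math.sqrt(6.0 / n_points) * noise_rms / (math.pi * amplitude * t_total)
```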
[figure [ fig : fsn_det ]: median and standard deviation of the inferred mode frequency as a function of signal-to-noise ratio. the duty cycle is 30%. only the results obtained with the new fitting method are shown. the horizontal grey line shows the input mode frequency. the dashed grey lines show the theoretical value of the frequency uncertainty given by equation ( [ equ : freq_error_determin ] ).]

in this paper we derived an expression for the joint pdf of solar or stellar oscillations in complex fourier space, in agreement with earlier work. this joint pdf explicitly takes into account the frequency correlations introduced by the convolution with the window function. we implemented a maximum likelihood estimation method to retrieve the parameters of stellar oscillations. both stochastic solar-like oscillations and deterministic sinusoidal oscillations were considered. in the case of solar-like oscillations, we performed monte-carlo simulations to show that the improvement provided by our fitting method can be very significant in comparison with a fitting method that ignores the frequency correlations. the results are summarized in figure 6. in one particular example, using an observation window with a duty cycle of 30%, the new fitting method increased the precision of the mode frequency by a factor of two, and the estimates of the linewidth and mode power were less biased and more precise. for a window with a duty cycle of 15%, the precision on the mode frequency estimate was increased by a factor of five. we also found that the cramér-rao lower bounds (formal errors) can provide reasonable estimates of the uncertainty on the mle estimates of the oscillation parameters. in the case of long-lived, purely sinusoidal oscillations, we did not find any significant improvement in using this new fitting method. yet, we confirm that the mode frequency can be measured in fourier space with a precision much better than the rayleigh resolution for large signal-to-noise ratios, in accordance with a previous time-domain calculation (cuypers, 1987; cuypers, 2008, private communication). the analysis of time series containing many gaps can benefit from our work. applications may include, for example, the re-analysis of solar oscillations from the early days of the bison network, or the solar-like oscillations of alpha centauri observed from the ground with two telescopes.

we thank t. appourchaux for useful discussions, in particular for the suggestion to compute the cramér-rao lower bounds. t. stahn is a member of the international max planck research school on physical processes in the solar system and beyond at the universities of göttingen and braunschweig. the mle source code is available from the internet platform of the european helio- and asteroseismology network (helas, funded by the european union) at http://www.mps.mpg.de/projects/seismo/mle_softwarepackage/.

miller, b.a., hale, s.j., elsworth, y., chaplin, w.j., isaak, g.r., new, r.: 2004. in: danesy, d. (ed.), _proc. soho 14/gong 2004 workshop, helio- and asteroseismology: towards a golden future_, esa *sp-559*, esa pub. div., noordwijk, 571.

winget, d.e., nather, r.e., clemens, j.c., provencal, j., kleinman, s.j., bradley, p.a., wood, m.a., claver, c.f., frueh, m.l., grauer, a.d., hine, b.p., hansen, c.j., fontaine, g., achilleos, n., wickramasinghe, d.t., marar, t.m.k., seetha, s., ashoka, b.n., o'donoghue, d.
, warner, b., kurtz, d.w., buckley, d.a., brickhill, j., vauclair, g., dolez, n., chevreton, m., barstow, m.a., solheim, j.e., kanaan, a., kepler, s.o., henry, g.w., kawaler, s.d.: 1991, _astrophys. j._ *378*, 326.
quantitative helio- and asteroseismology require very precise measurements of the frequencies , amplitudes , and lifetimes of the global modes of stellar oscillation . it is common knowledge that the precision of these measurements depends on the total length ( ) , quality , and completeness of the observations . except in a few simple cases , the effect of gaps in the data on measurement precision is poorly understood , in particular in fourier space where the convolution of the observable with the observation window introduces correlations between different frequencies . here we describe and implement a rather general method to retrieve maximum likelihood estimates of the oscillation parameters , taking into account the proper statistics of the observations . our fitting method applies in complex fourier space and exploits the phase information . we consider both solar - like stochastic oscillations and long - lived harmonic oscillations , plus random noise . using numerical simulations , we demonstrate the existence of cases for which our improved fitting method is less biased and has a greater precision than when the frequency correlations are ignored . this is especially true of low signal - to - noise solar - like oscillations . for example , we discuss a case where the precision on the mode frequency estimate is increased by a factor of five , for a duty cycle of 15% . in the case of long - lived sinusoidal oscillations , a proper treatment of the frequency correlations does not provide any significant improvement ; nevertheless we confirm that the mode frequency can be measured from gapped data at a much better precision than the rayleigh resolution .
one of the central ideas of quantum mechanics is the uncertainty principle which was first proposed by heisenberg for two conjugate observables .indeed , it forms one of the most significant examples showing that quantum mechanics does differ fundamentally from the classical world .uncertainty relations today are probably best known in the form given by robertson , who extended heisenberg s result to two arbitrary observables and .robertson s relation states that if we prepare many copies of the state , and measure each copy individually using either or , we have {|\psi\rangle}|\end{aligned}\ ] ] where for is the standard deviation resulting from measuring with observable .the essence of is that quantum mechanics does not allow us to simultaneously specify definite outcomes for two non - commuting observables when measuring the same state .the largest possible lower bound in robertson s inequality ( [ eq : heisenberg ] ) is , which happens if and only if and are related by a fourier transform , that is , they are conjugate observables . a natural measure that captures the relations among the probability distributions over the outcomes for each observable is the entropy of such distributions .this prompted hirschmann to propose the first entropic uncertainty relation for position and momentum observables .this relation was later improved by , where show that heisenberg s uncertainty relation is in fact implied by this entropic uncertainty relation .hence , using entropic quantities provides us with a much more general way of quantifying uncertainty .indeed , it was realized by deutsch that other means of quantifying `` uncertainty '' are also desirable for another reason : note that the lower bound in is trivial when happens to give zero expectation on ] , and 2 .the operators in cycle through _ mutually disjoint _ sets of operators under the action of . to understand condition ( ii )better , consider an operator in . then , by construction , for , assuming we construct a total of classes .in addition , property ( ii ) implies , for any . in other words ,given any two operators that cycle through the sets respectively under the action of , property(ii ) demands that , for all and .finally , we note that no class can contain two generators and , since they do not commute .when forming the classes we hence ensure that each one contains exactly one generator , which we refer to as the _ singleton _-operator of the class , as opposed to the rest of the elements which will be _ products _ of -operators .the fact that each class can contain at most one singleton operator limits us to constructing a maximum of such classes . before proceeding to outline our construction ,we establish some useful mathematical facts which will help motivate our algorithm for the construction of mutually disjoint classes . 
for the rest of the section, we will work with a set of -operators that are cycled under the action of , as follows , in other words , we are given a set of -operators whose _ cycle - length _ is .first , we consider sets of products of two -operators of the form , which we call _ length-2 _ operators .it is convenient to characterize such pairs in terms of the _ spacing _ ( ) between the operators that constitute them .the spacing function , for a given set of operators , is simply defined as : .then , the following holds : [ lem : spacing ] the action of on any length-2 operator leaves its spacing function invariant .thus , length-2 operators that have unique spacings cycle through mutually disjoint sets of operators under the action of .recall , .it clearly follows that similar to defining length-2 operators , we refer to any product of -operators as a _ length- _ operator . for operators of length higher than ,it becomes convenient to refer to them using their corresponding index sets .for example , the operator will be simply denoted by the index set . in the following lemma, we obtain a condition for any set of length- operators to cycle through mutually disjoint sets under the action of . [lem : lengthl ] suppose the length- operators ( for ) that belong to the class are such that they correspond to index sets which all sum to the same value then , no given index set of length can belong to more than one class , for prime values of .given the operators , such that the corresponding index sets sum to under the action of , these index sets change to for any index set the sum of the indices corresponding to the new operators becomes proceeding similarly , the corresponding operators in the class have index sets that sum to for all .thus , starting with a constraint on the length- operators in , we have obtained a constraint on the corresponding operators in a generic class .now , to arrive at a contradiction , suppose that an index set whose indices take values from the set , belongs to two different classes , and ( with ) .the constraint imposed by implies without loss of generality , let .since we can form at most classes , the difference can be at most . finally , since , condition can not be satisfied for prime values of .recall that our approach to constructing any classes is to first construct the class , and then obtain the rest by successive application of .therefore , the fact that any index set of a certain length can not belong to more than one class implies that each length- operator in cycles through a unique set of length- operators under . in other words , the length- operators cycle through mutually disjoint sets , as desired .lemma [ lem : lengthl ] thus provides us with a sufficient condition for the set of length- operators in to cycle through mutually disjoint sets under , given a set of -operators whose cycle - length is prime - valued .we only need to ensure that the length- operators in the first class that we construct , , correspond to index sets that _ all _ sum to the same value .this condition is of course subject to the constraint that the maximum allowed length for the operators in ( and by extension , in any class ) is . 
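lemma [ lem : spacing ] can be checked mechanically; in this sketch the spacing of an index pair is taken to be the direction-independent cyclic distance, an assumption consistent with the invariance claimed above.

```python
# sketch: under the cyclic shift i -> i + 1 (mod c) induced by the
# unitary, the spacing of a length-2 index set is invariant, so pairs
# with distinct spacings cycle through mutually disjoint sets.

def spacing(pair, c):
    i, j = sorted(pair)
    return min((j - i) % c, (i - j) % c)

def shift(index_set, c, t=1):
    return frozenset((i + t) % c for i in index_set)

c = 7
for p in [{0, 1}, {0, 2}, {0, 3}]:
    orbit = {shift(p, c, t) for t in range(c)}
    assert all(spacing(q, c) == spacing(p, c) for q in orbit)
print("spacings preserved under the cyclic shift")
```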
as a warmup, we construct classes in dimension , when is prime .this case is particularly easy , and illustrates how the results of the previous sections will be used in general .[ thm:(2n+1)classes ] let denote the complete set of -operators , and let be the unitary that cycles through all of them , that is , if is prime , then there exist classes satisfying properties * ( p1 ) * through * ( p3)*. we prove the existence of classes by construction .we first outline an algorithm to pick operators that constitute the class .the remaining classes are easily obtained by the application of to the elements of .then , we make use of lemmas [ lem : spacing ] and [ lem : lengthl ] to prove that the classes obtained through our construction do satisfy the desired properties .+ * * 1 .pick one of the elements of , , as the singleton operator .2 . pair up the remaining operators in to form length-2 operators which commute with , as follows , where denotes the set of length-2 operators in . since we have left out the pair in the middle , we get , .3 . form higher length operators that commute with , by combining with appropriate combinations of the length-2 operators .any operator of even length is created by combining pairs in . andany operator of odd length is created by appending to a length- operator .+ denoting the sets of length-3 operators as , length-4 operators as , and in general , the set of length- operators as , we have , putting together the operators from steps , , and we get the desired cardinality for the class as follows : the rest of the classes are generated by successive applications of the unitary to the elements of , so that .+ it is easy to see that the elements of each class satisfy property ( * p1 * ) above the different length operators have been picked in such a way as to ensure that they all commute with each other .similarly , by construction , they satisfy property * ( p3)*. it only remains to prove property ( * p2 * ) , that the classes are all mutually disjoint .+ the elements of correspond to the following set of spacings which are all distinct .so by lemma [ lem : spacing ] , the elements of cycle through mutually disjoint sets of length- operators . for higher length operators ,we first show that our construction meets the conditions of lemma [ lem : lengthl ] . for the class ,the elements of correspond to index sets that satisfy the length-2 operators of a generic class similarly satisfy since higher length operators are essentially combinations of length-2 operators and the singleton operator , conditions similar to hold for higher length index sets as well . 
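the pairing formulas stripped from the text above can be illustrated with one concrete, hypothetical choice of index sets that satisfies the stated constraints: the singleton {0} and the pairs {k, m-k}, each of which sums to 0 (mod m) as lemma [ lem : lengthl ] requires and which carry distinct spacings. a combinatorial sketch:

```python
from itertools import combinations

# sketch: candidate index sets for the first class when m = 2n + 1
# operators cycle under the unitary. since the product of all m
# anticommuting operators is proportional to the identity, an index set
# and its complement label the same operator; quotienting by
# complementation leaves exactly 2^n - 1 non-identity operators.
# the specific pairing {k, m - k} is our illustrative choice, not
# necessarily the one used in the text.

def first_class(n):
    m = 2 * n + 1
    gens = [frozenset({0})] + [frozenset({k, m - k}) for k in range(1, n + 1)]
    ops = set()
    for r in range(1, n + 2):
        for combo in combinations(gens, r):
            s = frozenset().union(*combo)
            comp = frozenset(range(m)) - s
            ops.add(min(s, comp, key=sorted))
    ops.discard(frozenset())                 # drop the identity
    return m, ops

m, c0 = first_class(3)
assert len(c0) == 2 ** 3 - 1                 # the class has 2^n - 1 elements
assert all(sum(s) % m == 0 for s in c0)      # sum condition of lemma 2
print(sorted(map(sorted, c0)))
```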
since operators of even length contain pairs from , the corresponding index sets in satisfy similarly , since the odd length operators have appended to the even length operators , the index sets of length in satisfy , to sum up , for any , our construction ensures that index sets of length- belonging to sum to the same value .the conditions of lemma [ lem : lengthl ] are therefore satisfied , with the quantity in taking the value , for all .now , we can simply evoke lemma [ lem : lengthl ] to prove that , when is prime , the higher length operators in cycle through mutually disjoint sets of operators .next , we show that it is possible to obtain an arrangement of operators into classes in dimension , when is prime and , such that the unitary that cyclically permutes -operators also permutes the coresponding classes .[ lclasses ] suppose is a unitary that cycles through sets of -operators from the set in dimension , where is prime and .then there exist classes that satisfy properties * ( p1 ) * through * ( p3)*. * proof : * note that since we have for some positive integer .the set of clifford generators can then be partitioned into sets as follows : without loss of generality , we can assume the unitary is constructed such that it cyclically permutes the operators within each set , as follows . once again , we begin with an algorithm for picking elements for the class .the algorithm closely follows the one outlined in the previous section , barring some minor modifications .+ * * 1 . the `` middle '' element from , , is picked as the singleton element of .the length-2 operators which commute with are picked as follows 3 . pairs are picked from leaving and unused . pairs are picked from each of the sets through , leaving the first operator in each set unused .finally , the unused -operators from different sets are put together as specified below , to get the remaining length- operators : the set of length- operators is then given by which gives .pick higher length operators from that commute with and , by combining with appropriate combinations of the length-2 operators .as before , any even - length operator of length is obtained by combining length-2 operators from .any operator of odd - length , is created by appending to a length- operator .putting together all the operators created in steps[1]-[3 ] , we get the desired cardinality for the class ( see ) , that is , .+ * proof of properties * ( p1 ) * through * ( p3 ) * : * the different length operators have again been picked in such a way as to ensure that they all commute with each other . since the remaining classes are generated by successive applications of the unitary to the elements of , we have . thus * ( p1 ) * and * ( p3 ) * is satisfied .it remains to prove that the classes constructed here also satisfy property ( * p2 * ) .as in the earlier case of classes , the operators in each of the sets correspond to unique values of the spacing function : , \nonumber\ ] ] which guarantees , by lemma [ lem : spacing ] that these operators cycle through mutually disjoint sets under .since the operators in are formed by combining -operators from different sets , each of them cycles through a different set of operators under .thus we see that all the length- operators in cycle through mutually disjoint sets . 
before we proceed to discuss the higher length operators , we make one further observation about the length- operators .the operators in correspond to index sets which satisfy in particular , the length-2 operators in the set have been picked carefully so as to ensure that the above constraint is satisfied .in fact , this was the rationale behind leaving out the first operator in each of the sets while choosing the corresponding length- elements in .the higher length operators in can be of two types : 1 .those that are comprised of -operators from a single set alone , and 2 .operators that comprise -operators from more than one set . since a type-(a )operator can not cycle into a type-(b ) operator under the action of , these two cases can be examined separately .+ * type-(a ) : * the maximum length that an operator of type-(a ) can have , as per our construction , is .we have ensured this by leaving at least one operator of each of the sets unused in constructing the length- operators .furthermore , the constraint in implies that the index sets corresponding to such higher length operators in , sum to the same value modulo .more precisely , any even - length index set of length , where the indices are all drawn from a given set , satisfies and any index set of odd length satisfies then , invoking lemma [ lem : lengthl ] with for even values of and for odd values of , we see that no operator of type-(a ) can belong to more than one class , for prime values of .+ * type-(b ) : * an operator of type-(b ) is a product of operators from smaller sets .consider a length- operator , which comprises -operators from , operators from , and in general , from the set . note that by our construction , the operator exists in more than one class if and only if , for all the product of all operators in also belongs to more than one class . in what follows, we argue that our construction ensures that this is not possible .in particular , given a set of length- operators in which can be broken down into smaller sets as described above , we will argue that there exists at least one set in every such length- operator , such that the products of operators in corresponding to different length- operators cycle through mutually disjoint sets , as defined earlier .note the following two facts about the subsets .first , our construction ensures that any subset of a given size , satisfies either or depending on being even or odd .second , note that the maximum size of these subsets is .however , in order to invoke lemma [ lem : lengthl ] , we still require to be strictly less than .our goal is hence to show that every length- operator must have at least one subset of size .suppose there exists a length- operator such that every subset is of size .then , the operator itself has to be of length however the maximum value of in our construction is , implying that atleast one of the subsets must be of a size strictly smaller than . 
and , for such a subset of size less than , constraints and ensure that the same subset can not be found in more than one class , provided is prime .the min - entropy of the distribution that an orthonormal basis induces on a state is given by \ ] ] we are looking to evaluate a lower bound on the average min - entropy of any mutually unbiased bases ( not necessarily coming from our construction ) in a -dimensional hilbert space .the average min - entropy is given by - using jensen s inequality .the problem of finding an optimal uncertainty relation for the min - entropy , thus reduces to the problem of maximizing over all , the quantity .it is easy to see that this maximum is always attained at a pure state , so we can restrict the problem to an optimization over pure states .we can simplify the problem of finding the lower bound of by recasting it as follows .consider states of the form where denotes a string of basis elements , that is , .suppose we can show for all possible strings , then , since ] , and the scalars .thus we can parameterize any state in our -dimensional hilbert space with a vector . when is a pure state ( = 1 ] implies that , .( by an argument similar to the one that leads to . ) * ( m2 ) _ constant inner - product _: implies that .this is easily seen , as follows : & = & \frac{1}{d } + \frac{1}{2}\sum_{i}\alpha^{(i)}_{(b , j)}\alpha^{(i)}_{(\hat{b},k ) } = \frac{1}{d } \nonumber \\\rightarrow \vec{\alpha}_{(b , j)}.\vec{\alpha}_{(\hat{b},k ) } & = & 0\end{aligned}\ ] ] now , using this representation of mub states and density operators , we can rewrite the maximization problem of as : = \max_{{|\psi\rangle}}\textrm{tr}\left[\frac{1}{l}\sum_{j}{|b^{(j)}\rangle}{\langleb^{(j)}|}{|\psi\rangle}{\langle\psi|}\right]\nonumber \\ & \leq & \max_{\vec{\alpha}}\frac{1}{l}\sum_{j}\textrm{tr}\left[\left(\frac{\mathbb{i}}{d } + \frac{\sum_{j}\alpha^{j}_{(b^{(j)},j)}\hat{a}_{j}}{2}\right)\left(\frac{\mathbb{i}}{d } + \frac{\sum_{i}\alpha^{(i)}\hat{a}_{i}}{2}\right)\right ] \nonumber \\ & = & \max_{\vec{\alpha}}\frac{1}{l}\sum_{j}\left(\frac{1}{d } + \frac{1}{2}\vec{\alpha}_{(b^{(j ) } , j)}.\vec{\alpha}\right ) \nonumber \\ & = & \frac{1}{d } + \max_{\vec{\alpha}}\frac{1}{2l}\sum_{j}\vec{\alpha}_{(b^{(j ) } , j)}.\vec{\alpha } \end{aligned}\ ] ] now we only need to find the real -dimensional vector , that maximizes the sum .if we now define an `` average '' vector corresponding to each string , as follows then , it becomes obvious that the maximum is attained when is parallel to .since it is a vector corresponding to a pure state , its norm is given by , so that note that this maximizing vector has a constant overlap with all vectors , for a given string .in other words , for each string , the maximum is attained by the vector that makes equal angles with all the vectors that constitute the `` average '' vector ( ) corresponding to that string .note however that this vector may not always correspond to a valid state .now that we know the maximizing vector , we can go ahead and compute the value of in . 
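for completeness, the min-entropy and its average over bases can be evaluated numerically; the following sketch represents each basis by the unitary whose columns are its vectors (our convention).

```python
import numpy as np

# sketch: min-entropy of the outcome distribution that a basis induces
# on a state, and the average over a set of bases.

def min_entropy(state, basis):
    probs = np.abs(basis.conj().T @ state) ** 2
    return -np.log2(probs.max())

def avg_min_entropy(state, bases):
    return np.mean([min_entropy(state, B) for B in bases])

# toy usage in d = 2 with the computational and hadamard bases (two
# mubs): a computational basis state gives (0 + 1)/2 bits on average.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
I = np.eye(2)
e0 = np.array([1.0, 0.0])
print(avg_min_entropy(e0, [I, H]))   # 0.5
```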
$$\begin{aligned}
\max_{{|\psi\rangle}}\frac{1}{L}\sum_{j}|\langle b^{(j)}|\psi\rangle|^{2}
&\leq \frac{1}{d} + \max_{\vec{\alpha}}\frac{1}{2L}\sum_{j}\vec{\alpha}_{(b^{(j)},j)}\cdot\vec{\alpha} \\
&= \frac{1}{d} + \frac{1}{2}\max_{\vec{\alpha}}\vec{\alpha}_{(\textrm{avg})}\cdot\vec{\alpha} \\
&= \frac{1}{d} + \frac{1}{2}\,\frac{\vec{\alpha}_{(\textrm{avg})}\cdot\vec{\alpha}_{(\textrm{avg})}}{|\vec{\alpha}_{(\textrm{avg})}|}\sqrt{\frac{2(d-1)}{d}} \\
&= \frac{1}{d} + \frac{1}{2}\,|\vec{\alpha}_{(\textrm{avg})}|\sqrt{\frac{2(d-1)}{d}} \\
&= \frac{1}{d} + \frac{1}{2\sqrt{L}}\,\frac{2(d-1)}{d} \\
&= \frac{1}{d}\left(1 + \frac{d-1}{\sqrt{L}}\right),
\end{aligned}$$
where we have used the fact that the vectors $\vec{\alpha}_{(\textrm{avg})}$ have a constant norm, which can be computed as follows:
$$|\vec{\alpha}_{(\textrm{avg})}|^{2} = \frac{1}{L^{2}}\sum_{j}\vec{\alpha}_{(b^{(j)},j)}\cdot\vec{\alpha}_{(b^{(j)},j)} = \frac{1}{L}\,\frac{2(d-1)}{d} \;\;\Rightarrow\;\; |\vec{\alpha}_{(\textrm{avg})}| = \frac{1}{\sqrt{L}}\sqrt{\frac{2(d-1)}{d}},$$
thus proving our claim. the second step follows from the fact that vectors corresponding to different mub states have zero inner product (see property (m2) above). note that the fact that the bases are mutually unbiased was crucial in giving rise to properties (m1) and (m2), which in turn enabled us to identify the maximizing vector. indeed, the maximizing vector corresponding to a given string might not always correspond to a valid state, in which case the bound we derive cannot be achieved. however, there exist strings of basis elements for which we can explicitly construct a state that has equal trace overlap with all the states that constitute the corresponding operator. clearly, for the symmetric mubs that we construct, an eigenstate of the unitary $u$ that cycles between the different mubs has the same trace overlap with each of the states $|b^{(j)}_{k}\rangle$, for a fixed value of $k$. to see this, suppose $|\phi\rangle$ is an eigenvector of $u$ with eigenvalue $\lambda$ (where $|\lambda|=1$); then, for all $j$ and a given value of $k$,
$$\textrm{tr}\big[\,|b^{(j)}_{k}\rangle\langle b^{(j)}_{k}|\,|\phi\rangle\langle\phi|\,\big] = |\langle b^{(j)}_{k}|\phi\rangle|^{2} = |\langle b^{(1)}_{k}|(u^{\dagger})^{j-1}|\phi\rangle|^{2} = |\lambda|^{2(j-1)}\,|\langle b^{(1)}_{k}|\phi\rangle|^{2} = |\langle b^{(1)}_{k}|\phi\rangle|^{2}.$$
this is indeed the case for the symmetric mubs constructed here, where the lower bound we derive is achieved by eigenstates of $u$.
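the final bound is easy to tabulate (a direct transcription of the result above):

```python
import numpy as np

# the bound on the maximum average overlap derived above, and the
# resulting lower bound on the average min-entropy for l mubs in
# dimension d.

def max_avg_overlap(d, L):
    return (1.0 + (d - 1.0) / np.sqrt(L)) / d

def min_entropy_bound(d, L):
    return -np.log2(max_avg_overlap(d, L))

print(min_entropy_bound(4, 4))   # e.g. four bases in dimension four
```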
even though mutually unbiased bases and entropic uncertainty relations play an important role in quantum cryptographic protocols they remain ill understood . here , we construct special sets of up to mutually unbiased bases ( mubs ) in dimension which have particularly beautiful symmetry properties derived from the clifford algebra . more precisely , we show that there exists a unitary transformation that cyclically permutes such bases . this unitary can be understood as a generalization of the fourier transform , which exchanges two mubs , to multiple complementary aspects . we proceed to prove a lower bound for min - entropic entropic uncertainty relations for any set of mubs , and show that symmetry plays a central role in obtaining tight bounds . for example , we obtain for the first time a tight bound for four mubs in dimension , which is attained by an eigenstate of our complementarity transform . finally , we discuss the relation to other symmetries obtained by transformations in discrete phase space , and note that the extrema of discrete wigner functions are directly related to min - entropic uncertainty relations for mubs .
infrared (ir) spectroscopy has been a powerful tool to study the microscopic carrier dynamics and electronic structures in strongly correlated electron materials, such as rare earth ($f$ electron), transition metal ($d$ electron), and organic ($\pi$ electron) compounds. the ir spectroscopy technique has also been performed under high pressure using a diamond anvil cell (dac) [2-13], since strongly correlated materials show many interesting physical properties under high pressure. in a dac, a pair of diamond anvils and a thin metal gasket are used to seal a sample and a pressure transmitting medium. a typical diameter of the diamond anvil surface is 0.8 mm to reach a pressure of 10 gpa, and 0.6 mm to reach 20 gpa. therefore the sample in this experiment should have dimensions of the order of 100 $\mu$m. to perform an infrared (ir) reflectance study on such a small sample within the restricted sample space of a dac, synchrotron radiation (sr) has been used as a bright source of both far- and mid-infrared. in fact, high pressure ir spectroscopy with a dac is currently one of the major applications of ir-sr [4-7,9-13]. with a dac, the reflectance is measured at the sample/diamond interface, in contrast to the usual case of a sample/vacuum or sample/air interface. the normal-incidence reflectance of a sample relative to a transparent medium of (real) refractive index $n_0$ is given by fresnel's formula as
$$R = \left|\frac{\tilde{n}-n_0}{\tilde{n}+n_0}\right|^{2}. \quad (1)$$
here, $\tilde{n}=n+ik$ is the complex refractive index of the sample, and $n_0$ = 2.4 for diamond and 1.0 for vacuum. hereafter, we denote the reflectance at a sample/diamond interface as $R_d$, and that at a sample/vacuum interface as $R_v$. from eq. (1), it is easily seen that the $R_d$ of a sample measured in a dac may be substantially different from $R_v$. the purpose of this study is to consider the kramers-kronig (kk) analysis of $R_d$ data measured in a dac. kk analysis has been widely used to derive optical constants such as the refractive index, dielectric function and optical conductivity from a measured reflectance spectrum. however, due to the difference between $R_d$ and $R_v$ discussed above, the usual kk analysis method cannot be straightforwardly applied to $R_d$. to derive optical constants from $R_d$, therefore, previous high pressure ir studies used either a drude-lorentz (dl) spectral fitting or a modified kk transform. in this work, we propose a different method, which relies on the usual kk transform with an appropriate cutoff applied to $R_d$, as an alternative approach to obtain the infrared optical conductivity from $R_d$. the validity of the proposed method is demonstrated with actually measured reflectance data of prru$_4$p$_{12}$.

the complex reflectivity of the electric field is expressed as
$$\hat{r}(\omega) = \frac{\tilde{n}-n_0}{\tilde{n}+n_0} = \sqrt{R(\omega)}\,e^{i\theta(\omega)}, \quad (2)$$
where $\sqrt{R}$ is the square root of the reflectance, which is actually measured in experiments, and $\theta$ is the phase. then the real and imaginary parts of $\tilde{n}$ can be expressed in terms of $R$ and $\theta$ as
$$n = \frac{n_0\,(1-R)}{1+R-2\sqrt{R}\cos\theta} \quad (3)$$
and
$$k = \frac{2\,n_0\sqrt{R}\sin\theta}{1+R-2\sqrt{R}\cos\theta}, \quad (4)$$
respectively. therefore, if $\theta$ can be derived from the measured $R$ with kk analysis even for the sample/diamond reflection case, $n$ and $k$ can also be derived simply by setting $n_0$ = 2.4 in eqs. (3) and (4). then, the imaginary part of the complex dielectric function is given as $\epsilon_2 = 2nk$, and the optical conductivity is given as $\sigma_1 = \omega\epsilon_2/4\pi$ (in cgs units). in performing kk analysis on reflectance data, usually the logarithm of $\hat{r}(\omega)$, namely $\ln\hat{r}(\omega) = \ln\sqrt{R(\omega)} + i\theta(\omega)$, is regarded as a complex response function. in the case of sample/vacuum reflection, the kk relation between $R$ and $\theta$ is expressed as
$$\theta(\omega) = -\frac{2\omega}{\pi}\,P\!\int_{0}^{\infty}\frac{\ln\sqrt{R(\omega')}}{\omega'^{2}-\omega^{2}}\,d\omega'. \quad (6)$$
here, $P$ denotes the principal value.
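equation (1) in code form (a minimal sketch; the sample index used in the demo is illustrative):

```python
import numpy as np

# normal-incidence fresnel reflectance of a sample with complex index
# n + ik against a transparent medium of real index n0, eq. (1).

def reflectance(n_complex, n0=1.0):
    r = (n_complex - n0) / (n_complex + n0)
    return np.abs(r) ** 2

n_sample = 3.0 + 0.5j                 # an illustrative complex index
print(reflectance(n_sample, 1.0))     # sample/vacuum, R_v
print(reflectance(n_sample, 2.4))     # sample/diamond, R_d (lower contrast)
```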
in deriving this relation, it is required that $\ln\hat{r}(\omega)$ has no poles in the upper half of the complex frequency plane at finite $|\omega|$. this is correct for the sample/vacuum case since, from eq. (2), $\hat{r}=0$ only when $\tilde{n}=1$, which does not occur at finite frequencies when $n_0=1$. however, when $n_0>1$, as in the case of a sample/diamond interface, $\tilde{n}=n_0$ may be satisfied at some point on the upper imaginary axis. this point is denoted as $i\omega_\beta$, where $\omega_\beta$ is a real, positive and finite number. when $\tilde{n}(i\omega_\beta)=n_0$, from eq. (2) $\hat{r}(i\omega_\beta)=0$, and therefore $\ln\hat{r}$ has a pole at $i\omega_\beta$. accordingly, the kk relation in this case must be modified to
$$\theta(\omega) = -\frac{2\omega}{\pi}\,P\!\int_{0}^{\infty}\frac{\ln\sqrt{R_d(\omega')}}{\omega'^{2}-\omega^{2}}\,d\omega' + \left[\pi - 2\arctan\frac{\omega_\beta}{\omega}\right]. \quad (7)$$
namely, the presence of a medium with $n_0>1$ brings an extra phase shift, indicated by the square bracket in eq. (7), into the kk relation. note that the extra phase shift is a decreasing function of $\omega_\beta$, and that the original kk relation of eq. (6) is recovered when $\omega_\beta\to\infty$. detailed theoretical considerations on the extra phase shift in various situations have been reported. in actual experimental studies, however, the precise value of $\omega_\beta$ may not be known. accordingly, the value of $\omega_\beta$ has been estimated from experimental data by use of a combination of dl fitting and the modified kk transform. in this method, one uses eq. (7) and looks for a value of $\omega_\beta$ that well reproduces the $\theta(\omega)$ given by a dl fitting of $R_d$. note that, on the other hand, if the frequency range of interest is well below the value of $\omega_\beta$, effects of the extra phase may be only minor, and the usual kk transform of eq. (6), combined with the use of $n_0$ = 2.4 in eqs. (3) and (4), might give sufficiently accurate values of the optical constants. we will examine the validity of such a procedure in the next section.

here, we use spectra actually measured on prru$_4$p$_{12}$. this compound is well known for showing a metal-to-insulator transition at about 60 k, and a clear energy gap in the optical conductivity was observed in our previous work. here we use $R_v$ at 60 k (metal) and 9 k (insulator) measured over a wide photon energy range of 0.008-30 ev, which are shown by the blue curves in figs. 1(a) and 2(a).

[figure 1: (a) $R_v$, the reflectance spectrum of prru$_4$p$_{12}$ measured at 60 k in vacuum, and $R_d$, that expected in a dac, calculated from $R_v$ as described in the text. the green, red, and light blue curves are extrapolations with cutoff energies $\omega_c$ of 1.5, 2, and 5 ev, respectively. (b) the optical conductivity $\sigma_1(\omega)$ obtained with kk analysis of $R_v$, compared with those obtained with kk analysis of $R_d$ with a cutoff at $\omega_c$ = 1.5, 2, and 5 ev and extrapolations above them. below 1.5 ev, the $\sigma_1$ obtained from $R_d$ with $\omega_c$ = 2 ev agrees very well with that obtained from $R_v$.]

the procedure is the following.

* (i) the full $R_v$ spectrum is kk analyzed with eq. (6) to obtain $\theta$, $n$, and $k$.
* (ii) the above $n$ and $k$ are substituted into eq. (1) with $n_0$ = 2.4 to derive the $R_d$ that is _expected_ in a dac.
* (iii) the obtained $R_d$ is used with the usual kk transform of eq. (6) and $n_0$ = 2.4 in eqs. (3) and (4), to obtain $n$, $k$, and $\sigma_1$. before this is done, an appropriate cutoff and extrapolation are made to $R_d$, as described in detail below.

if the kk analysis on $R_d$ works properly, the resulting $\sigma_1$ from (iii) should agree well with that given by $R_v$ and the usual kk analysis. we first examine the 60 k data. the expected $R_d$ at 60 k obtained by (i) and (ii) is indicated by the black curve in fig. 1(a). in carrying out the integration in eq. (6), the spectrum was extrapolated below 0.008 ev and above 30 ev with the hagen-rubens and $\omega^{-4}$ functions, respectively. it is seen in fig. 1(a) that the calculated $R_d$ shows very high values above about 4 ev. this physically unrealistic feature results from the unphysical assumption of a constant, real $n_0$ = 2.4 over the entire spectral range.
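before describing the cutoff, note that the usual kk transform of eq. (6) used in step (i) can be evaluated on a uniform grid with the maclaurin (alternating-point) scheme for the principal value; the grid, units, and quadrature choice in this sketch are our assumptions.

```python
import numpy as np

# sketch: phase from eq. (6), theta(w) = -(w/pi) P int ln R(w') /
# (w'^2 - w^2) dw', using ln R = 2 ln sqrt(R). the principal value is
# handled by summing only grid points of parity opposite to w's index.

def kk_phase(omega, refl):
    ln_r = np.log(refl)
    theta = np.zeros_like(omega)
    d = omega[1] - omega[0]
    for i, w in enumerate(omega):
        j = np.arange(i % 2 == 0, omega.size, 2)   # opposite-parity points
        theta[i] = -(w / np.pi) * 2.0 * d * np.sum(
            ln_r[j] / (omega[j] ** 2 - w ** 2))
    return theta
```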
in reality , of course , the refractive index of diamond can not be constant and real near and above the band gap , where it shows strong light absorption .in addition , when , and , and therefore must hold just like any other material . accordingly , before performing kk transform in ( iii ) , in fig .1(a ) was cut off at some energy , and then it was extrapolated with function .several different values of were tried , as shown in fig .for each value of , the usual kk transform of eq .( 6 ) was made to get , which was then used to derive and with =2.4 in eqs .( 3 ) and ( 4 ) , and to finally obtain .figure 1(b ) shows the spectra obtained as described above , with different values of which are also indicated in fig .it is seen that the obtained spectra strongly depend on . with =2.0ev , the resulting below 1.5 ev agrees very well with that derived from the original .( actually , =2.2 ev gives the best agreement , but =2.0 ev data is shown instead .this is because the 2.2 ev data almost completely overlaps with that from , making it difficult to distinguish them in the figure . )the result for the 9 k data , where the sample is an insulator ( semiconductor ) , is also shown in fig . 2 .a good agreement is again observed between the derived from the full and that from with =2 ev .the spectral range of their good agreement is below 1.5 ev , which is similar to the case of 60 k data discussed above .note also that both 9 k and 60 k data show good agreement with the common value of = 2 ev .these results show that , for any spectral change in ( either temperature- or pressure - induced ) below 1.5 ev , the corresponding can be obtained by the present method .in actual high pressure studies of strongly correlated materials with dac [ 2 - 13 ] , is usually measured below 1 - 2 ev .hence , above the high energy limit of measurement , the expected from can be connected to the measured , with the cutoff and extrapolation discussed above .then the connected may be kk transformed to obtain , as discussed above . an obvious condition required for this method to work properly is that the pressure- and temperature - induced changes of should be limited below certain energy , which is 1.5 ev for prru as seen in figs .1(b ) and 2(b ) .this condition is actually met in the high pressure data of prru , which has enabled us to derive its under pressure up to 14 gpa using the present method .is the reflectance spectrum of prru measured at 9 k in vacuum , and is that expected in a dac calculated from as described in the text .the green , red , and light blue curves are extrapolations with cutoff energies of =1.5 , 2 , and 5 ev , respectively .( b ) the optical conductivity ( ) obtained with kk analysis of is compared with those obtained with kk analysis of with a cutoff at =1.5 , 2 , and 5 ev , and extrapolations above them . 
]we have also done similar simulations for other compounds , both metals and insulators , using actually measured data , and have obtained similar results .namely , when an appropriate cutoff and extrapolation are applied to , the usual kk transform of eq .( 6 ) gave spectra which agreed very well with those directly obtained from the wide range .a limitation of the present method is , as already mentioned above , it can give correct only below certain photon energy ( 1.5 ev in the case of prru ) .hence this method is useful when the temperature and pressure dependences of is limited to below certain energy .in addition , to use the present method , it is required that is known over a wide enough photon energy range , since and must be obtained from with the usual kk analysis .while a mathematically rigorous justification of the proposed method is beyond the scope of this work , this method may be very useful as a simple analysis technique of reflectance spectra measured under high pressure with dac .this work has been done as a part of high pressure infrared studies of strongly correlated electron materials using synchrotron radiation at spring-8 , under the approval by jasri ( 2009a0089 through 2011b0089 ) .the data used in figs .1 and 2 have been already published , in collaboration with m. matsunami , l. chen , m. takimoto , t. nanba , c. sekine , and i. shirotani .
When the optical reflectance spectrum of a sample under high pressure is studied with a diamond anvil cell, it is measured at a sample/diamond interface. Due to the large refractive index of diamond, the resulting reflectance may substantially differ from that measured in vacuum. To obtain optical constants from the sample/diamond reflectance, therefore, the usual Kramers-Kronig (KK) analysis cannot be straightforwardly applied, and either a spectral fitting or a modified KK transform has been used. Here we describe an alternative method to perform KK analysis on the sample/diamond reflectance. This method relies on the usual KK transform with an appropriate cutoff and extrapolation applied to the measured spectrum, and may offer a simpler approach to obtaining the infrared conductivity from reflectance measured in a diamond anvil cell.
there are empirical evidences that the trading activity , the trading volume and the volatility of the financial markets are stochastic variables with the power - law probability distribution function ( pdf ) and the long - range correlations .most of proposed models apply generic multiplicative noise responsible for the power - law probability distribution function ( pdf ) , whereas the long - range memory aspect is not accounted in the widespread models .the additive - multiplicative stochastic models of the financial mean - reverting processes provide rich spectrum of shapes for pdf , depending on the model parameters , however , do not describe the long - memory features .empirical analysis confirms that the long - range correlations in volatility arise due to those of trading activity . on the other hand ,trading activity is a financial variable dependant on the one stochastic process , i.e.interevent time between successive market trades .therefore , it can be modeled as event flow of the stochastic point process . recently , we investigated analytically and numerically the properties of the stochastic multiplicative point processes , derived formula for the power spectrum and related the model with the multiplicative stochastic differential equations .preliminary comparison of the model with the empirical data of the power spectrum and probability distribution of stock market trading activity stimulated us to work on the more detailed definition of the model .here we present the stochastic model of the trading activity with the long - range correlations and investigate its connection to the stochastic modeling of the volatility .the proposed stochastic nonlinear differential equations reproduce the power spectrum and pdf of the trading activity in the financial markets , describe the stochastic interevent time as the fractal - based point process and can be applicable for modeling of the volatility with the long - range autocorrelation .trades in financial markets occur at discrete times and can be considered as identical point events .such point process is stochastic and totaly defined by the stochastic interevent time . a fractal stochastic point process results , when at least two statistics exhibit the power - law scaling , indicating that represented phenomena contains clusters of events over all scales of time .the dimension of the fractal point process is a measure of the clustering of the events within the process and by the definition coincides with the exponent of the power spectral density of the flow of events .we can model the trading activity in financial markets by the fractal point process as its empirical pdf and the power spectral density exhibit the power - law scaling . in this paperwe consider the flow of trades in financial markets as poisson process driven by the multiplicative stochastic equation .first of all we define the stochastic rate of event flow by continuous stochastic differential equation where is a standard random wiener process , denotes the standard deviation of the white noise , is a coefficient of the nonlinear damping and defines the power of noise multiplicativity. 
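Sample paths of such multiplicative stochastic differential equations can be generated with a simple Euler-Maruyama scheme. The sketch below keeps the drift and diffusion as user-supplied callables, since the precise damping and reversion terms depend on the model parameters introduced above; the concrete choice shown is only an illustrative mean-reverting example with placeholder parameter values, not the paper's exact drift.

```python
import numpy as np

def euler_maruyama(drift, diffusion, tau0, dt, n_steps, tau_min=1e-6, rng=None):
    """Generic Euler-Maruyama integrator for d tau = a(tau) dt + b(tau) dW.
    A small reflecting floor tau_min keeps the multiplicative dynamics
    positive."""
    rng = rng or np.random.default_rng()
    tau = np.empty(n_steps)
    tau[0] = tau0
    for k in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        step = tau[k-1] + drift(tau[k-1]) * dt + diffusion(tau[k-1]) * dw
        tau[k] = max(step, tau_min)
    return tau

# illustrative (assumed) choice: mean reversion plus power-law noise
gamma, sigma, mu, tau_ref = 1e-3, 0.02, 1.0, 1.0
a = lambda t: gamma * (1.0 - t / tau_ref) * t**(2 * mu - 2)
b = lambda t: sigma * t**(mu - 0.5)
series = euler_maruyama(a, b, tau0=1.0, dt=0.1, n_steps=100_000)
```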
the diffusion of should be restricted at least from the side of high values .therefore we introduce an additional term into the eq .( [ eq : taustoch ] ) , which produces the exponential diffusion reversion in equation \tau^{2\mu-2}\mathrm{d}t+\sigma\tau^{\mu-1/2}\mathrm{d}w , \label{eq : taustoch2}\ ] ] where and are the power and value of the diffusion reversion , respectively .the associated fokker - plank equation with the zero flow gives the simple stationary pdf \label{eq : taudistrib}\ ] ] with and .( [ eq : taustoch2 ] ) describes continuous stochastic variable , defines the rate and , after the ito transform of variable , results in stochastic differential equation ^{2\eta-1}\mathrm{d}t+\sigma n^{\eta}\mathrm{d}w , \label{eq : nstoch}\ ] ] where and .( [ eq : nstoch ] ) describes stochastic process with pdf and power spectrum noteworthy , that in the proposed model only two parameters , and ( or ) , define exponents and of two power - law statistics , i.e. , of pdf and of the power spectrum .time scaling parameter in eq .( [ eq : nstoch ] ) can be omitted adjusting the time scale .here we define the fractal point process driven by the stochastic differential equation ( [ eq : nstoch ] ) or equivalently by eq .( [ eq : taustoch2 ] ) , i.e. , we assume as slowly diffusing mean interevent time of poisson process with the stochastic rate . this should produce the fractal point process with the statistical properties defined by eqs .( [ eq : ndistr ] ) and ( [ eq : nspekt ] ) . within this assumptionthe conditional probability of interevent time in the modulated poisson point process with the stochastic rate is .\label{eq : taupoisson}\ ] ] then the long time distribution of interevent time has the integral form \tau^{\alpha}\exp\left[-\left(\frac{\tau}{\tau_{0}}\right)^m\right]\mathrm{d } \tau,\label{eq : taupdistrib}\ ] ] with defined from the normalization , . in the case of pure exponential diffusion reversion , , pdf ( [ eq : taupdistrib ] ) has a simple form where denotes the modified bessel function of the second kind . for complicated structures of distribution expressed in terms of hypergeometric functions arise .we will investigate how the proposed modulated poisson stochastic point process can be adjusted to the empirical trading activity , defined as number of transactions in the selected time window .stochastic variable denotes the number of events per unit time interval .one has to integrate the stochastic signal eq .( [ eq : nstoch ] ) in the time interval to get the number of events in the selected time window . in this paperwe denote the integrated number of events as and call it the trading activity in the case of the financial market . 
detrended fluctuation analysis is one of the methods to analyze the second order statistics related to the autocorrelation of trading activity .the exponents of the detrended fluctuation analysis obtained by fits for each of the 1000 us stocks show a relatively narrow spread of around the mean value .we use relation between the exponents of detrended fluctuation analysis and the exponents of the power spectrum and in this way define the empirical value of the exponent for the power spectral density .our analysis of the lithuanian stock exchange data confirmed that the power spectrum of trading activity is the same for various liquid stocks even for the emerging markets .the histogram of exponents obtained by fits to the cumulative distributions of trading activites of 1000 us stocks gives the value of exponent describing the power - law behavior of the trading activity .empirical values of and confirm that the time series of the trading activity in real markets are fractal with the power law statistics .time series generated by stochastic process ( [ eq : nstoch ] ) are fractal in the same sense .nevertheless , we face serious complications trying to adjust model parameters to the empirical data of the financial markets . for the pure multiplicative model , when or , we have to take to get and to get , i.e. it is impossible to reproduce the empirical pdf and power spectrum with the same relaxation parameter and exponent of multiplicativity .we have proposed possible solution of this problem in our previous publications deriving pdf for the trading activity when this yields exactly the required value of and for .nevertheless , we can not accept this as the sufficiently accurate model of the trading activity since the empirical power law distribution is achieved only for very high values of the trading activity .probably this reveals the mechanism how the power law distribution converges to normal distribution through the growing values of the exponent , but empirically observed power law distribution in wide area of values can not be reproduced .let us notice here that the desirable power law distribution of the trading activity with the exponent may be generated by the model ( [ eq : nstoch ] ) with and .moreover , only the smallest values of or high values of contribute to the power spectral density of trading activity .this suggests us to combine the stochastic process with two values of : ( i ) for the main area of and diffusion and ( ii ) for the lowest values of or highest values of .therefore , we introduce a new stochastic differential equation for combining two powers of the multiplicative noise , \frac{n^4}{(n\epsilon+1)^2}\mathrm{d } t+\frac{\sigma n^{5/2}}{(n\epsilon+1)}\mathrm{d}w , \label{eq : nstoch2}\ ] ] where a new parameter defines crossover between two areas of diffusion . the corresponding iterative equation for in such a caseis \frac{\tau_{k}}{(\epsilon+\tau_{k})^2}+\sigma\frac{\tau_{k}}{\epsilon+\tau_{k}}\varepsilon_{k } , \label{eq : tauiterat2}\ ] ] where denotes uncorrelated normally distributed random variable with the zero expectation and unit variance .( [ eq : nstoch2 ] ) and ( [ eq : tauiterat2 ] ) define related stochastic variables and , respectively , and they should reproduce the long - range statistical properties of the trading activity and of waiting time in the financial markets .we verify this by the numerical calculations . 
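A direct transcription of the iterative equation for the interevent time, Eq. (tauiterat2), might look as follows. The relaxation bracket multiplying the crossover factor is reduced to a bare constant gamma here, and all parameter values are placeholders rather than the ones used in the figures.

```python
import numpy as np

def iterate_tau(n_steps, gamma=4e-4, sigma=0.025, eps=0.07, tau0=1.0, rng=None):
    """Iterates the interevent-time equation with the crossover factors
    tau/(eps+tau)^2 (drift) and tau/(eps+tau) (noise).  The relaxation
    bracket is simplified to the bare constant gamma; parameter values
    are placeholders."""
    rng = rng or np.random.default_rng()
    tau = np.empty(n_steps)
    tau[0] = tau0
    for k in range(n_steps - 1):
        t = tau[k]
        drift = gamma * t / (eps + t)**2
        noise = sigma * (t / (eps + t)) * rng.standard_normal()
        tau[k+1] = abs(t + drift + noise)  # reflect at zero
    return tau
```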
in figure [ fig:1 ]we present the power spectral density calculated for the equivalent processes ( [ eq : nstoch2 ] ) and ( [ eq : tauiterat2 ] ) ( see for details of calculations ) .this approach reveals the structure of the power spectral density in wide range of frequencies and shows that the model exhibits not one but rather two separate power laws with the exponents and . from many numerical calculations performed with the multiplicative point processes we can conclude that combination of two power laws of spectral density arise only when the multiplicative noise is a crossover of two power laws as in eqs . ( [ eq : nstoch2 ] ) and ( [ eq : tauiterat2 ] ). we will show in the next section that this may serve as an explanation of two exponents of the power spectrum in the empirical data of volatility for ` s&p 500 ` companies .averaged over 100 realisations of series with 1000000 iterations and parameters ; ; ; ; .straight lines approximate power spectrum with and : a ) of the flow with the interevent time generated by eq .( [ eq : tauiterat2 ] ) , b ) calculated by the fast fourier transform of series generated by eq .( [ eq : nstoch2]).,title="fig : " ] averaged over 100 realisations of series with 1000000 iterations and parameters ; ; ; ; .straight lines approximate power spectrum with and : a ) of the flow with the interevent time generated by eq .( [ eq : tauiterat2 ] ) , b ) calculated by the fast fourier transform of series generated by eq .( [ eq : nstoch2]).,title="fig : " ] empirical data of the trading activity statistics should be modeled by the integrated flow of events defined in the time interval . in figure [ fig:2 ]we demonstrate the probability distribution functions and its cumulative form calculated from the histogram of generated by eq .( [ eq : tauiterat2 ] ) with the selected time interval .this illustrates that the model distribution of the integrated signal has the power - law form with the same exponent as observed in empirical data .calculated from the histogram of generated by eq .( [ eq : tauiterat2 ] ) with the selected time interval .b ) cumulative distribution .other parameters are as in figure [ fig:1 ] . ] the power spectrum of the trading activity has the same exponent as power spectrum of in the low frequency area for all values of .the same numerical results can be reproduced by continuous stochastic differential equation ( [ eq : nstoch2 ] ) or iteration equation ( [ eq : tauiterat2 ] ) .one can consider the discrete iterative equation for the interevent time ( [ eq : tauiterat2 ] ) as a method to solve numerically continuous equation \frac{1}{(\epsilon+\tau)^2}\mathrm{d}t + \sigma\frac{\sqrt{\tau}}{\epsilon+\tau}\mathrm{d}w .\label{eq : taucontinuous}\ ] ] the continuous equation ( [ eq : nstoch2 ] ) follows from the eq .( [ eq : taucontinuous ] ) after change of variables .we can conclude that the long - range memory properties of the trading activity in the financial markets as well as the pdf can be modeled by the continuous stochastic differential equation ( [ eq : nstoch2 ] ) . in this model the exponents of the power spectral density , , and of pdf , ,are defined by one parameter .we consider the continuous equation of the mean interevent time as a model of slowly varying stochastic rate in the modulated poisson process ( [ eq : taupoisson ] ) . 
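The averaged power spectral density of Fig. 1 can be reproduced along these lines. This is a sketch: generator(n) stands for any of the series generators above, and the normalization is arbitrary since only the power-law slopes matter.

```python
import numpy as np

def averaged_psd(generator, n_series=100, n_points=2**17):
    """|FFT|^2 power spectrum averaged over independent realizations, as
    done for Fig. 1; generator(n) must return one realization of length n.
    Normalization is arbitrary -- only the power-law slopes matter."""
    psd = np.zeros(n_points // 2)
    for _ in range(n_series):
        x = generator(n_points)
        spec = np.fft.rfft(x - x.mean())[1:n_points // 2 + 1]
        psd += np.abs(spec)**2
    freqs = np.fft.rfftfreq(n_points)[1:n_points // 2 + 1]
    return freqs, psd / (n_series * n_points)

# e.g. freqs, psd = averaged_psd(lambda n: iterate_tau(n))
```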
in figure [ fig:3 ]we demonstrate the probability distribution functions calculated from the histogram of generated by eq .( [ eq : taupoisson ] ) with the diffusing mean interevent time calculated from eq .( [ eq : taucontinuous ] ) . :open circles , calculated from the histogram of generated by eq .( [ eq : taupoisson ] ) with the mean interevent time calculated from eq .( [ eq : taucontinuous ] ) ; open squares , calculated form the iterative equation ( [ eq : tauiterat2 ] ) .used parameters are as in figure 1 .straight line approximates power law . ]numerical results show good qualitative agreement with the empirical data of interevent time probability distribution measured from few years series of u.s .stock data .this enables us to conclude that the proposed stochastic model captures the main statistical properties including pdf and the long - range correlation of the trading activity in the financial markets .furthermore , in the next section we will show that this may serve as a background statistical model responsible for the statistics of return volatility in widely accepted geometric brownian motion ( gbm ) of the financial asset prices .the basic quantities studied for the individual stocks are price and return let us express return over a time interval through the subsequent changes due to the trades in the time interval $ ] , we denote the variance of calculated over the time interval as . if are mutually independent one can apply the central limit theorem to sum ( [ eq : return2 ] ) .this implies that for the fixed variance return is a normally distributed random variable with the variance where is the normally distributed random variable with the zero expectation and unit variance ., averaged over 10 intervals calculated from the series of generated by eqs .( [ eq : nstoch2 ] ) and ( [ eq : return4 ] ) , all parameters are the same as in previous calculations . dashed line approximates the power law .( b ) power spectral density of calculated from fft of the same series .straight lines approximate power spectral density with and . ]empirical test of conditional probability confirms its gaussian form , and the unconditional distribution is a power - law with the cumulative exponent .this implies that the power - law tails of returns are largely due to those of .here we refer to the theory of price diffusion as a mechanistic random process . for this idealized modelthe short term price diffusion depends on the limit order removal and this way is related to the market order flow .furthermore , the empirical analysis confirms that the volatility calculated for the fixed number of transactions has the long memory properties as well and it is correlated with real time volatility .we accumulate all these results into the assumption that standard deviation may be proportional to the square root of the trading activity , i.e. , .this enables us to propose a simple model of return and related model of volatility based on the proposed model of trading activity ( [ eq : nstoch2 ] ) .we generate series of trade flow numerically solving eq .( [ eq : nstoch2 ] ) with variable steps of time and calculate the trading activity in subsequent time intervals as .this enables us to generate series of return , of volatility and of the averaged volatility . 
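Under the assumption σ ∝ √n stated above, surrogate return and volatility series follow immediately from a generated trading-activity series. A minimal sketch, with an arbitrary overall scale:

```python
import numpy as np

def returns_from_activity(n_t, scale=1.0, rng=None):
    """Surrogate returns r_t = scale * sqrt(n_t) * xi_t, xi_t ~ N(0, 1),
    implementing the assumption sigma ~ sqrt(n); volatility is |r_t|."""
    rng = rng or np.random.default_rng()
    n_t = np.asarray(n_t, dtype=float)
    r = scale * np.sqrt(n_t) * rng.standard_normal(len(n_t))
    return r, np.abs(r)
```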
in figure [ fig:4 ]we demonstrate cumulative distribution of and the power spectral density of calculated from fft .we see that proposed model enables us to catch up the main features of the volatility : the power law distribution with exponent and power spectral density with two exponents and .this is in a good agreement with the empirical data .starting from the concept of the fractal point processes we proposed process driven by the nonlinear stochastic differential equation and based on the earlier introduced stochastic point process model .this may serve as a possible model of the flow of points or events in the physical , biological and social systems when their statistics exhibit power - law scaling indicating that the represented phenomena contains clusters of events over all scales .first of all , we analyze the statistical properties of trading activity and waiting time in financial markets by the proposed poisson process with the stochastic rate defined as a stand - alone stochastic variable .we consider the stochastic rate as continuous one and model it by the stochastic differential equation , exhibiting long - range memory properties . further we propose a new form of the stochastic differential equation combining two powers of multiplicative noise : one responsible for the probability distribution function and another responsible for the power spectral density .the proposed new form of the continuous stochastic differential equation enabled us to reproduce the main statistical properties of the trading activity and waiting time , observable in the financial markets . in the new modelthe power spectral density with two different scaling exponents arise .this is in agreement with the empirical power spectrum of volatility and implies that the market behavior may be dependant on the level of activity .one can observe at least two stages in market behavior : calm and excited .finally , we propose a very simple stochastic relation between trading activity and return to reproduce the statistical properties of volatility .this enabled us to model empirical distribution and long - range memory of volatility .
We propose a model of a fractal point process driven by a nonlinear stochastic differential equation. The model is adjusted to the empirical data of trading activity in financial markets, and it reproduces the probability distribution function and power spectral density of trading activity observed in stock markets. We also present a simple stochastic relation between the trading activity and return, which enables us to reproduce the long-range-memory statistical properties of volatility by numerical calculations based on the proposed fractal point process.
the dynamics of a large diversity of physicochemcal systems can be mathematically modeled as reaction - diffusion systems in which it is described how the composition of multiple chemical species distributed in space change under the influence of competitive chemical reactions between the species ( giving origin to a new species ) and the diffusion which causes the species to spread out in the space .it is well known that depending on the relative importance of the kinetics and the diffusion these systems can provide a large diversity of behaviors , including the formation of complex structures and patterns see .such a structure formation occurs for example during the solid phase formation and evolution in intercalation and conversion reactions in rechargeable lithium batteries , during the self - organisation of materials occuring with the fabrication process of composite electrodes for electrochemical devices applications , during the microstructural evolution of composite elecrodes upon their degradation and in other competitive chemically reactive systems like in the belousov - zhabotinsky reaction .+ designing appropriate controllers of these reaction - diffusion systems can reveal of great relevance within a reverse engineering approach for example towards the optimization of discharge - charge of lithium batteries ( by for example enhancing the formation of solid phases during discharge more reversible upon charge ) and the optimization of the structure of the fabricated electrodes as function of the fabrication parameters ( e.g. temperature dynamics , reactant flow , etc . ) .+ in this paper , we consider the one - dimensional allen - cahn equation ,1 [ , t > 0,\\ u_x(0,t)=\alpha(t ) , u_x(1,t)=0 & \forall t > 0,\\ u(x,0)=u_0(x ) & x\in ] 0,1[.\end{aligned}\ ] ] + this reaction - diffusion equation describes the process of phase separation in many situations .it was originally introduced in by allen and cahn to model the motion of anti - phase boundaries in crystalline solids . in equation( 1 ) , represents the concentration of one of the possible phases , represents the interfacial width , supposed to be small as compared to the characteristic length of the laboratory scale . the homogenous neumann boundary condition ( when ) traduces that there is no loss of mass across the boundary wallshowever , the allen - cahn equation is invoqued in a large number of complicated moving interface problems in materials science through a phase - field approach , therefore a large litterature in mathematical analysis and in numerical analysis is devoted to the study of the mathematical properties of this equation and of its simulation ( see and the references therein ) . +in equation ( 1 ) , represents the potential energy and represents the control flux at one of the boundaries ; is assumed having stable roots , such that and .it is observed in many cases that when and as goes to , the solutions tend to steady states which consist in ( almost ) piecewise constant functions whose the different values are equal to the stable roots of which represent the different phase stripes .hence exhibits large gradient near , as illustrated in figure ( [ fig1 ] ) .( left ) and for ( right).,title="fig:",width=321,height=264 ] ( left ) and for ( right).,title="fig:",width=321,height=264 ] + + an important issue in the conception of rechargeable lithium and post - lithium batteries , is the design of active materials providing upon the battery discharge a number of interphases as low as possible . 
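The steady states illustrated in Fig. 1 can be reproduced with a simple explicit finite-difference scheme. The sketch below assumes the classical double-well potential W(u) = (u² - 1)²/4 and the scaling u_t = ε²u_xx - W'(u); the paper keeps W and the scaling generic, so both are assumptions here. The control flux α enters through a ghost point at x = 0.

```python
import numpy as np

def allen_cahn_step(u, dx, dt, eps, alpha):
    """One explicit Euler step of u_t = eps^2 u_xx - W'(u) on [0, 1] with
    u_x(0, t) = alpha (control flux) and u_x(1, t) = 0, imposed through
    ghost points.  W(u) = (u^2 - 1)^2 / 4, so W'(u) = u^3 - u.
    Stability requires roughly dt <= dx^2 / (2 eps^2)."""
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2.0 * (u[1] - u[0] - dx * alpha) / dx**2   # u_x(0) = alpha
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2             # u_x(1) = 0
    return u + dt * (eps**2 * uxx - (u**3 - u))
```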
The morphological simplicity of such discharged materials is expected to enhance the rechargeability of this type of battery and thus to increase its efficiency. In this paper we propose a first numerical strategy for calculating the boundary flux function on a given time interval so that the steady state reached by the solution exhibits a minimal number of interphases. The algorithm is as follows: the time interval is subdivided into subintervals of equal length, and the control is taken constant on each subinterval; for each candidate control, the state is computed by time integration of (1)-(4) and the merit function is evaluated; the discrete control values are then updated by a derivative-free optimization process (nonlinear search). Numerical convergence also has to be established by varying the spatial and temporal discretization parameters. We hereafter display results for different values of these parameters; the merit function measures the number of interphases of the computed steady state. Starting from a regular initial datum allows us to observe numerical convergence of the optimal control function as the number of discretization points increases and as, for a fixed spatial grid, the time step decreases. We can see in Figures 4-7 the good coherence of the results for fixed model parameters when varying the time step and the number of grid points. In all cases, the global procedure minimizes the number of interphases or stripes. [Figures 4-7: computed steady states and optimal controls for several discretizations; image data not recoverable.] In Figures 8 and 9, the discrete components of the initial datum are randomly drawn following a uniform law. As we see, here again the global procedure succeeds in all cases, illustrating the effective numerical controllability from the boundary and the robustness of the approach, since the data are highly oscillating. [Figures 8-9: results for randomly generated initial data; image data not recoverable.] Finally, we give a numerical illustration of the optimization process when considering a weighted merit function: as we see in Figure 10, the weighted merit function favors the formation of a selected phase. The initial datum is, as above, randomly generated following a uniform law. [Figure 10: results for the weighted merit function; image data not recoverable.]
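The outer derivative-free loop of the algorithm above can be sketched as follows, reusing allen_cahn_step from the previous sketch. Since the goal is to count interphases, the total variation of the final state is used here as a smooth surrogate merit (for nearly piecewise-constant states with values ±1, TV ≈ 2 × the number of interfaces); Nelder-Mead is one possible derivative-free search, chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def total_variation(u):
    """Smooth surrogate for the interphase count: for nearly piecewise
    constant states with values +-1, TV(u) ~ 2 x (number of interfaces)."""
    return float(np.sum(np.abs(np.diff(u))))

def merit(alpha_nodes, u0, dx, dt, eps, n_steps):
    """Evolve the state under a control that is piecewise constant in time
    (one value per subinterval, as in the algorithm above) and score the
    final state."""
    u = u0.copy()
    per = n_steps // len(alpha_nodes)
    for a in alpha_nodes:
        for _ in range(per):
            u = allen_cahn_step(u, dx, dt, eps, a)
    return total_variation(u)

# derivative-free search over the discrete control values, e.g.:
# res = minimize(merit, x0=np.zeros(8), args=(u0, dx, dt, eps, 200_000),
#                method="Nelder-Mead")
```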
Of course, we have here first considered a relatively simple case, namely the one-dimensional case, before extending the approach to 2D or 3D models, which correspond to more realistic situations found, for example, in electrochemistry. Furthermore, the monitoring of the number of interphases (Problem 2) is an important feature that we will study in the near future. Finally, the integration of such optimization algorithms into an in-house multiscale simulator of electrochemical power generators will also be considered.

References

S. M. Allen and J. W. Cahn, A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening, Acta Metall. 27 (1979), pp. 1085-1095.

A. R. Conn, K. Scheinberg, and L. N. Vicente, Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia (2009).

A. A. Franco, K. H. Xue, Carbon-based electrodes for lithium air batteries: scientific and technological challenges from a modeling perspective, ECS Journal of Solid State Science and Technology 2 (10) (2013), pp. M3084-M3100.

A. A. Franco, Multiscale modeling and numerical simulation of rechargeable lithium ion batteries: concepts, methods and challenges, RSC Advances 3 (32) (2013), pp. 13027-13058.

A. A. Franco, MS LIBER-T computational software, http://www.modeling-electrochemistry.com/ms-liber-t/

L. I. Ignat, A. Pozo, E. Zuazua, Large-time asymptotics, vanishing viscosity and numerics for 1-D scalar conservation laws, submitted (2013), http://www.bcamath.org/documentospublic/archivos/publicaciones/asymptoticsinnumerics.pdf

MATLAB Optimization Toolbox, http://www.mathworks.fr/fr/products/optimization/index.html

K. Malek, M. Eikerling, Q. Wang, T. Navessin, Z. Liu, Self-organization in catalyst layers of PEM fuel cells, J. Phys. Chem. C 111 (36) (2007), pp. 13627-13634.

K. Malek, A. A. Franco, Microstructural resolved modeling of aging mechanisms in PEMFC, J. Phys. Chem. B 115 (25) (2011), pp. 8088-8101.

M. Pierre, Étude numérique et mathématique de quelques modèles de transition de phase, de séparation de phases et de cristaux liquides, Habilitation à diriger des recherches (in French), Université de Poitiers (Oct.).

D. Kondepudi, I. Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons Ltd., New York (1998).

C. Sachs, M. Hildebrand, S. Völkening, J. Wintterlin, G. Ertl, Science 293, no. 5535 (2001).

A. Sirimungkala, H.-D. Försterling, V. Dlask, Bromination reactions important in the mechanism of the Belousov-Zhabotinsky system, J. Phys. Chem. A 103 (8) (1999), pp. 1038-1043.

J. Shen, X. Yang, Numerical approximations of Allen-Cahn and Cahn-Hilliard equations, DCDS, Series A 28 (2010), pp. 1669-1691.
The calculation of optimal structures in reaction-diffusion models is of great importance in many physicochemical systems. We propose here a simple method to monitor the number of interphases for long times by using a boundary flux condition as a control. As an illustration, we consider a 1-D Allen-Cahn equation with Neumann boundary conditions. Numerical examples are given, and perspectives for the application of this approach to electrochemical systems are discussed.
we understand a complex system as a system with a large number of interacting components whose aggregated behaviour is non - linear and undetermined from the behaviour of the individual components .if we now consider these components as nodes of a network , and the underlying physical interaction between any two nodes as links , one way to understand these complex systems is by studying its topological structure , namely , the network connectivity . in natural complex systems ,the connectivity of the components is often unknown or is difficult to detect by physical methods due to large system - sizes .hence , it is of interest to infer the network structure that represents the physical interaction between time - series collected from the dynamics of the nodes .although network inference in non - linear systems has been extensively studied in recent years using cross - correlation or mutual information , recurrences , functional dynamics , and granger causality , to name a few , it still presents open challenges .the fundamental reason is that non - linearities , even in the absence of noise , produce behaviour that hinders the correct identification of existing or non - existing underlying direct physical dependence between any pair of nodes . in this paper, we introduce an information - based methodology to infer the structure of complex systems from time - series data .our methodology is based on a normalized form of an estimated mutual information rate ( mir ) , the rate by which information is exchanged per unit of time between any two components .mir is an appropriate measure to quantify the exchange of information in systems with correlation . in particular , authors in ref . show how to calculate mir in the case a markov partition is attainable , which is generally extremely difficult to find or unknown . here, we first show how mir can be approximately calculated for time - series data of finite length and low - resolution .then , we propose a normalization of the estimated mir that allows for a successful inference about the dependence structure of small networks of interacting dynamical systems , when markov partitions are unknown .our findings show that the estimated normalized mir allows for a successful inference of the structure of small networks even in the presence of additive noise , parameter heterogeneities and different coupling strenghts .moreover , our normalized estimated mir outperforms the use of mutual information ( mi ) based inference when different time - scale dynamics are present in the networks .the paper is organized as follows . in sec .[ section_methods_and_material ] , we introduce two information - based measures , the mi and the mir .we discuss the theoretical aspects of their definitions and show how they are related to each other . in sec .[ sec : section_models ] , we introduce the models used to create the complex system dynamics studied in this work . in sec .[ sec : methodology ] , we explain our methodology to calculate an approximation value of mir and introduce the normalized mir .section [ sec : results ] shows how we apply our methodology to different coupled maps and to a neural network in which the dynamics of the nodes is described by the hindmarsh - rose neuron model . finally , in sec .[ sec : conclusions ] we discuss our work and discuss our findings .information can be produced in a system and it can be transferred between its different components . 
if transferred , at least two components that are physically interacting by direct or indirect links should be involved . in general , these components can be time - series , modes , or related functions of them , defined on subspaces or projections of the state space of the system . in this work ,we study the amount of information transferred per unit of time , i.e. , the mutual information rate ( mir ) , between any two components of a system , to determine if a link between them exists .the existence of a link between two units means there is a bidirectional connection between them due to their interaction .the mutual information ( mi ) between two random variables , and , of a system is the amount of uncertainty one has about ( ) after observing ( ) .specifically , mi is given by where and are the marginal entropies of and ( shannon entropies ) respectively , and is the joint entropy between and . is the probability of a random event to happen in , is the probability of a random event to happen in , and is the joint probability of events and to occur simultaneously in variables and . is the number of random events in both variables , and . in particular , eq .can be written equivalently as this equation can be interpreted as the strength of the dependence between two random variables and . when , the dependence strength between and is null , consequently , and are independent . the computation of from time - series is a subtle task .firstly , it requires the calculation of probabilities computed on an appropriate probabilistic space on which a partition can be defined .secondly , is a measure suitable for the comparison between pairs of components of the same system but not between different systems .the reason is that different systems can have different correlation decay times , hence , different characteristic time - scales .there are three main approaches to compute mi , and the variation resides in the different ways to compute the probabilities involved in eq . .the first one is the bin or histogram method , which finds a suitable partition of the 2d space on equal or adaptive - size cells .the second one employs density kernels , where a kernel estimation of the probability density function is used .the last one computes mi by estimating probabilities from the distances between closest neighbours . in this work, we adopt the first method and compute probabilities in a partition of equally - sized cells in the probabilistic space generated by two variables and .it is well known that this approach , proposed in and studied in , overestimates the value of for random systems or non - markovian partitions . in particular , the authors explain two basic reasons for the overestimation of mi . the finite resolution of a non - markovian partition and the finite length of the recorded time - series . according to ,these errors are systematic and are always present in the computation of mi for an arbitrary non - markovian partitions . here, we avoid these systematic error by creating a novel normalization when dealing with the mir . for the numerical computation of [ eq .] , we use the approach reported in refs .we define a probabilistic space , where is formed by the time - series data observed from a pair of nodes , and , of a complex system . then , we partition into a grid of fixed - sized cells . 
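A bin-method estimate of Eq. (2) on such a grid takes only a few lines. The sketch below returns MI in nats (use log2 for bits) and, as discussed above, will overestimate the true value for finite data and non-Markovian partitions.

```python
import numpy as np

def mutual_information(x, y, n_cells):
    """Bin-method estimate of I_XY (in nats) on an n_cells x n_cells grid
    of equally sized cells covering the joint range of (x, y)."""
    counts, _, _ = np.histogram2d(x, y, bins=n_cells)
    pxy = counts / counts.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])))
```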
the length - side of each cell , ,is then set to .consequently , the probability of having an event for variable , , is the fraction of points found in row of the partition .similarly , is the fraction of points that are found in column of , and is the joint probability computed from the fraction of points that are found in cell of the same partition , where .we emphasize here that depends on the partition considered for its calculation as , , and attain different values for different cell - sizes . due to the issues arising from the definition of mi in terms of its partition dependence , the authors in ref . have demonstrated how to calculate the mir for two time - series of finite length irrespective of the partitions , instead of using the mi .this quantity is invariant with respect to the resolution of the partition . in particular , and for infinitely long time - series, mir is theoretically defined as the mutual information exchanged per unit of time between and .specifically , where represents the mi of eq . between random variables and , considering trajectories of length that follow an itinerary over boxes in a grid with an infinite number of cells . since is a symmetric function with respect to and , .we also note that the term tends to zero in the limit of infinitely long trajectories , . the authors in ref . show that if a partition with cells is a markov partition of order , then mir can be estimated from finite - length and low - resolution time - series ( since the limits in eq . ) by using where both and are finite quantities .notice that an order partition can only generate statistically significantly probabilities if there is in each cell a sufficiently large amount of points ( see eq .( [ noc ] ) ) . besides, points in a cell must spread over the probabiliistic space after iterations .so , the length of the time - series must be reasonably larger than . in sec .[ sec : methodology ] , we make a novel demonstration of eq .( [ mir_definition_epsilon ] ) , from which it becomes clear why mir can be estimated from finite - length and low - resolution time - series . in this equation, is the mi between and , considering probabilities that are calculated in a markov partition , and represents the shortest time for the correlation between and to be lost for that particular markov partition . also represents the time after which the evolution of a chaotic system is unpredictable .moreover , this time is of the order of the shortest poincar return - time and is related to the order markov partition , where indicates that the future state of a random variable is independent on its previous states and is independent on the states of for an order .we adopt various topologies for the networks and various dynamics for the components of the complex systems considered .hence , the network inference , which represents the detection of the topological structure of the component s interactions , is done from the time - series that are recorded for each component . in particular , we divide the analysis on discrete and on continues time - series components .the dynamics of the class of discrete complex systems that are of interest here are described by the following equation where is the -th iterate of map , where and is the number of maps ( nodes ) of the system , ] for all , following . is a laplacian matrix and accounts for the way neurons are electrically ( diffusively ) coupled . 
particularly , where is the binary adjacency matrix of the electrical connections and is the nodes degree diagonal matrix based on . if then neuron perturbs neuron with an intensity given by infer the topology of a network using mir [ eq . ] , we need to compute the correlation decay time . is difficult to calculate in practical situations since it depends on quantities such as lyapunov exponents and expansion rates , which demand a high computational cost . here, we estimate it by the number of iterations that takes to points in cells of to expand and completely cover . this is a necessary condition to determine the shortest time for the correlation to decay to zero .in particular , we are introducing a novel way to calculate from the diameter of a network , which is based on the dynamics of points mapped from one cell of to another , namely , a network with the connectivity given by the transitions of points from cell to cell of or an itinerary network .we construct as follows .we assume that each equally - sized cell in , occupied by at least one point , represents a node in .then , following the dynamics of points moving from one cell to another , we create the connections between nodes , i.e. , the links in . specifically , a link between nodes and exists if points in travel from cell to cell .if the link exists the weight is equal to 1 , if it is absent , then it is equal to , therefore , is defined as a binary matrix with elements . in this framework , a uniformly random time - series with no correlation results in a complete network , namely , an all - to - all network .we define as the diameter of .the reason is that is the minimum time that takes for points inside any cell of to spread to the whole extent of . by definition ,the diameter of a network is the maximum length for all shortest - paths , i.e , the minimum distance required to cross the entire network .hence , our approach transforms the calculation of into the calculation of the diameter of .in particular , for the estimation of the network diameter we use johnson s algorithm .to estimate mir from finite - length low - resolution time - series data , we truncate the summation in eq . up to a finite size , depending on the resolution of data , and consider small trajectory pieces of the time - series with a length , which depends on the total length of the time - series and on eq .( [ noc ] ) , such that , .\label{truncate}\ ] ] in eq .( [ truncate ] ) , left - hand and right - hand sides would be equal if the partition , where probabilities are being calculated , is markov .the length represents also the largest order that a partition that generates statistically significant probabilities can be constructed from these many trajectory pieces . assuming that the order of the partition constructed is ( which also represents the time for the correlation in the partition to decay to zero , if the partition would be markov ) , then eq .( [ truncate ] ) becomes .\ ] ] now , taking two partitions , and , with different correlation decay times , and respectively , and different number of cells , and respectively , with , we have .moreover , generates in the sense that , where is the evolution operator and means the pre - iteration of partition . then , hence , we can write eq . as , \vspace{0.25cm}\\ & \cong&\frac{1}{t_1}\sum_{i=1}^{t_1}[i_{xy}(i,\lambda_2)-i_{xy}(i,\lambda_1 ) ]. 
\label{eq : mir_partitions}\end{aligned}\ ] ] when the partition is a markov generating partition , its properties fulfil then , if our partition is close to a markov partition , eq .results in \\ & \equiv & \frac{1}{t_1}i_{xy}(1,\lambda_{t_1}),\end{aligned}\ ] ] which is our demonstration for the validity of eq . .therefore , in order to use eq .( [ eq : mir_markov_p ] ) , we must have partitions for which eq .( [ relate - markov ] ) is approximately valid .this condition can be reached for partitions constructed with a sufficiently large number of equally - sized cells of length , exactly the type of partition considered here .notice , however , that partitions will typically not be markov nor generating , causing systematic errors in the estimation of mir . to correct these errors ,we propose the normalizations in eqs .( [ mir_epsilon_normalized ] ) and ( [ mir_en2 ] ) .it is important to notice that is always a partition - independent quantity , if and only if , the partitions are markov . in order to calculate , we use eq . , which requires the calculation of probabilities in .fulfilling the inequality where is the mean number of points inside all occupied cells of the partition of , eq .guarantees that the probabilities are unbiased . for our analysis , using a non - markovian partition allows us to simplify the calculations of , however , taking this kind of partitions into consideration would make the mir values to oscillate around an expected value .moreover , mir for different non - markovian partitions , not only has a non - trivial dependence with the number of cells in the partition , but also presents a systematic error . therefore , since for a non - markovian partition of equally - sized cells [ estimated by eq .] , is expected to be partition - dependent , we propose here a way to obtain a measure , computed from , that is partition independent and that is suitable for network inference . to infer the structure of a network , we calculate the mir for the different pairs of nodes in the network , which is all we need due to the symmetric property of mir .we also discard the mir values for the same variable , i.e. , mir , because we are interested in the exchange of information between different variables .we compute the exchanged between any two nodes in a network by taking the expected value over different partition sizes , i.e. , , where is the expected value of . 
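For completeness, the itinerary-network estimate of the correlation decay time described above can be sketched as follows. Cells are indexed on an N x N grid over the observed ranges, one-step transitions define the links (taken as directed here, which is an assumption since the text leaves the orientation implicit), and a breadth-first search over the unweighted graph stands in for Johnson's algorithm.

```python
from collections import deque
import numpy as np

def correlation_decay_time(x, y, n_cells):
    """T as the diameter of the itinerary network: occupied cells of the
    n_cells x n_cells partition are nodes, and cell i links to cell j when
    a point maps from i to j in one step.  BFS over the unweighted graph
    replaces Johnson's algorithm; the diameter is over reachable pairs."""
    def index(v):
        iv = (n_cells * (v - v.min()) / (np.ptp(v) + 1e-12)).astype(int)
        return np.minimum(iv, n_cells - 1)
    cells = index(np.asarray(x)) * n_cells + index(np.asarray(y))
    adj = {}
    for a, b in zip(cells[:-1], cells[1:]):
        adj.setdefault(a, set()).add(b)
    diameter = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        diameter = max(diameter, max(dist.values()))
    return diameter
```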
in order to remove the systematic error in this calculation ,we perform instead a weighted average , where the finer partitions ( larger ) contribute more to the value than the coarser ones ( smaller ) .the reason is that a smaller is likely to create a partition that is further away from a markovian one than a partition of larger .consequently , we resolve the systematic error by weighing differently the different partitions .therefore , we propose a novel normalization for the mir as follows .first , we use an equally - sized grid of size , we subtract from mir , calculated for all pairs of nodes , its minimum value and denote the new quantity as .theoretically , a pair that is disconnected should have a mir value close to zero , however , in practice , the situation is different because of the systematic errors coming from the use of a non - markovian partition , as well as , from the information flow passing through all the nodes in the network .for example , the effects of a perturbation in one single node will arrive to any other node in a finite amount of time .this subtraction is proposed to reduce these two undesired overestimations of mir .after this step , we remain with mir as a function of . normalizing then by , where again the maximum and minimum are taken over all different pairs , we construct a relative magnitude , namely , where is the mir between nodes and and is the minimum with respect to the pairs and is the maximum with respect to all pairs .this magnitude is still a function of , however , we can now perform an average over different values of without the systematic error .next , we apply eq . for different gridssizes to obtain , where is the maximum number of cells per axis , resulting in a grid of cells , and fulfilling at the same time eq . .then , similarly to the idea used for eq . , we make a second normalization over to obtain where the maximum is being taken now over the grids . finally , applying eq . to each pair , we obtain its average value , . the higher the value of , the higher the amount of information exchanged between and per unit of time .this allows us to identify pairs of nodes that exchange larger rates of information than others . in order to perform the network inference from the mir , we fix a threshold in ] , we obtain different inferred networks .our results show that there is an interval of thresholds within ] . in fig .[ fig : param]*(a ) * , we observe that for closer to , a relatively short length ( of about points ) is enough to infer correctly the original network , which is generated by the adjacency matrix of sec .[ sec : section_models ] .however , when is close to , a larger time - series ( of about points ) is needed to achieve successful reconstruction .values of and ] is the noise strength .since , the noise strength is the standard deviation in the normal distribution . fig . [fig : param]*(b ) * shows the parameter space for different coupling strengths versus .we observe perfect inference for noise strengths , i.e. 
for .moreover , the best reconstruction using is for coupling strengths in $ ] , a dynamical regime where chaotic behaviour is prevalent .we also apply our methodology for the study of network inference in the case of continuous dynamics given by the hr system .we use two electrical couplings , and , both considered for time - series of length .figure [ fig : hindmarsh - rose_recons ] shows the band for 100% successful network inference , where panel * ( a ) * corresponds to and panel * ( b ) * to .this figure shows that is able to infer the correct network structure , in this case , for small networks of continuous - time interacting components . and , respectively .the red bands show the range of thresholds for which the original network is inferred with a 100% success.,title="fig : " ] and , respectively .the red bands show the range of thresholds for which the original network is inferred with a 100% success.,title="fig : " ] finally , we compare mi and to assess the effectiveness of our proposed methodology for network inference .we apply the same normalization process used for mir , eq . , to mi to have an appropriate comparison .in particular , we infer the network structure of the system described in sec .[ sec : section_models ] with the network shown in fig .[ fig : networks]*(b)*. as we have explained in sec .[ sec : section_models ] , this system has two clusters of nodes with different dynamics .the dynamics in the left cluster is given by the 3rd - order composition of the logistic map , whereas the dynamics of the right cluster is given by ordinary logistic map dynamics .the different dynamics of the two groups produces different correlation decay times , , for nodes and , in particular when the pair of nodes comes from different clusters .the different correlation decay times produce a non - trivial dynamical behaviour that challenges the mi performance for network inference . .panel * ( a ) * plots of eq .for all links .this is a case where a complete network inference can not be achieved ( indicated by the absence of any red band ) .panel * ( b ) * is the same as before but for .the color code corresponds to the same color code identifying different nodes in fig .[ fig : networks]*(b)*. the darkest color is the link connecting the two clusters.,title="fig : " ]. panel * ( a ) * plots of eq . for all links .this is a case where a complete network inference can not be achieved ( indicated by the absence of any red band ) .panel * ( b ) * is the same as before but for .the color code corresponds to the same color code identifying different nodes in fig .[ fig : networks]*(b)*. 
the darkest color is the link connecting the two clusters.,title="fig : " ] figure [ fig : composed_system ] shows the results obtained for the normalized mi , , and our normalized mir , , for each of the possible pairs of nodes .the purple bars correspond to the pairs of nodes , and of the first cluster , the orange bars correspond to the pairs of nodes , and of the second cluster ( 3rd order composed dynamics ) and the black bar corresponds to the link between clusters ( notice that due to the small coupling strength between the two clusters this link is not detected using any of the two methods ) .nevertheless , mir identifies correctly all intra links of the network where mi fails to do so .we conclude that the normalized mir is preferable over the normalized mi when it comes to the detection of links in a complex system with different correlation decay times .the reason is that the normalized mir takes into consideration the correlation decay time associated to each pair of nodes , contrary to the mi .in this paper we have introduced a new information based approach to infer the network structure of complex systems .mir is an information measure that computes the information transferred per unit of time between pairs of components in a complex system . , our novel normalization for the mir that is introduced in eq . , is a measure based on mir and developed for network inference .we find that is a robust measure to perform network inference in the presence of additive noise , short time - series , and also for systems with different coupling strengths .since mir and depend on the correlation decay time , they are suitable for inferring the correct topology of networks with different time - scales . in particular , we have explored the effectiveness of mir versus mi in terms of how successful they are in inferring exactly the network of our small complex systems . in general , we find that mir outperforms mi when different time - scales are present in the system . our results also show that both measures are sufficiently robust and reliable to infer the networks analyzed whenever a single time - scale is present . in other words , small variations in the dynamical parameters , time - series length , noise intensity , or topology structure , maintain a successful inference for both methods .it remains to be seen the types of errors that are found in these measures when perfect inference is missing or impossible to be done .ebm , msb and cga acknowledge financial support provided by the epsrc `` ep / i032606/1 '' grant .cga contributed to this work while working at the university of aberdeen and then , while working at the university of essex , united kingdom .nr acknowledges the support of pedeciba , uruguay .10 n. rubido , a. c. mart , e. bianco - martinez , c. grebogi , m. s. baptista , and c. masoller , `` exact detection of direct links in networks of interacting dynamical units '' , new j. phys . * 16 * , 093010 ( 2014 ) .m. s. baptista , f. m. kakmeni , and c. grebogi , `` combined effect of chemical and electrical synapses in hindmarsh - rose neural networks on synchronization and the rate of information . '' , phys .rev .. e * 82*(3 ) , 036203 ( 2010 ) .g. benettin , l. galgani , a. giorgilli , and j. m. strelcyn , `` lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems ; a method for computing all of them .part 1 : theory '' , meccanica * 15 * , 9 - 20 ( 1980 ) .g. benettin , l. galgani , a. giorgilli , and j. m. 
strelcyn , `` lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems ; a method for computing all of them . part 2 : numerical application '' , meccanica *15* , 21 - 30 ( 1980 ) . j. m. stinnett - donnelly , n. thompson , n. habel , v. petrov - kondratov , d. d. correa de sa , j. h. bates , and p. s. spector , `` effects of electrode size and spacing on the resolution of intra - cardiac electrograms '' , coronary artery dis . *23*(2) , 126 - 132 ( 2012 ) . n. k. chen , c. c. dickey , s. s. yoo , c. r. guttmann , and l. p. panych , `` selection of voxel size and slice orientation for fmri in the presence of susceptibility field gradients : application to imaging of the amygdala '' , neuroimage *19*(3) , 817 - 825 ( 2003 ) .
this work uses an information - based methodology to infer the connectivity of complex systems from observed time - series data . we first derive analytically an expression for the mutual information rate ( mir ) , namely the amount of information exchanged per unit of time , that can be used to estimate the mir between two finite - length , low - resolution , noisy time - series , and then apply it , after a proper normalization , to the identification of the connectivity structure of small networks of interacting dynamical systems . in particular , we show that our methodology successfully infers the connectivity for heterogeneous networks , different time - series lengths or coupling strengths , and even in the presence of additive noise . finally , we show that our methodology based on mir successfully infers the connectivity of networks composed of nodes with different time - scale dynamics , where inference based on mutual information fails .

* the mutual information rate ( mir ) measures the time rate of information exchanged between two non - random and correlated variables . since variables in complex systems are not purely random , mir is an appropriate quantity to assess the amount of information exchanged in complex systems . however , its calculation requires infinitely long measurements with arbitrary resolution . since it is impossible to perform infinitely long measurements with perfect accuracy , this work shows how to estimate mir taking this fundamental limitation into consideration , and how to use it for the characterization and understanding of dynamical and complex systems . moreover , we introduce a novel normalized form of mir that successfully infers the structure of small networks of interacting dynamical systems . the proposed inference methodology is robust in the presence of additive noise , different time - series lengths , and heterogeneous node dynamics and coupling strengths . moreover , it also outperforms inference methods based on mutual information when analysing networks formed by nodes possessing different time - scales . *
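as a rough numerical illustration of the quantities involved ( a sketch , not the estimator derived in the paper ) , the mutual information of two finite series can be computed on an equally spaced partition and divided by a correlation decay time supplied by the user ; the grid size and the externally estimated decay time are assumptions of this snippet .

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # shannon mutual information (in nats) of two scalar series,
    # estimated from the empirical joint histogram on a bins x bins grid
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def mir_estimate(x, y, decay_time, bins=16):
    # crude mir proxy: mi divided by the pair's correlation decay time
    # (in sampling steps), estimated separately, e.g. from the decay
    # of the autocorrelation function of the two series
    return mutual_information(x, y, bins) / decay_time
```

finite bin counts and finite series lengths bias such histogram estimates , which is one reason a careful normalization is needed before comparing different pairs of nodes .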
we consider the following pde system \[ \chi_t + \mu\,\partial i_{(-\infty,0]}(\chi_t) - \operatorname{div}({\bf d}(x,\nabla\chi)) + w'(\chi) \ni -b'(\chi)\,\frac{\varepsilon({\bf u})\,\mathrm{r}_e\,\varepsilon({\bf u})}{2} + \vartheta \quad \text{in } \omega\times(0,t) , \label{eqii} \] which describes a thermoviscoelastic system occupying a reference domain , supplemented with suitable initial and boundary conditions . the symbols \vartheta and {\bf u} respectively denote the absolute temperature of the system and the vector of _ small displacements _ . depending on the choices of the functions involved , we obtain a model for _ phase transitions _ : in this case , \chi is the order parameter , standing for the local proportion of one of the two phases ; or for _ damage _ : in this case , \chi is the damage parameter , assessing the soundness of the material . we will assume that \chi takes values between 0 and 1 , choosing 0 and 1 as reference values : for the _ pure phases _ in phase change models ( for example , stands for the solid phase and for the liquid one in solid - liquid phase transitions , and one has 0 < \chi < 1 in the so - called _ mushy regions _ ) ; for the completely _ damaged _ and the _ undamaged _ state , respectively , in damage models , while 0 < \chi < 1 corresponds to _ partial damage _ . let us now briefly illustrate the derivation of the pde system . we shall systematically refer for more details to , where we dealt with the case of phase transitions in thermoviscoelastic materials , and just underline here the main differences with respect to the discussion in . equation , governing the evolution of the displacement , is the classical balance equation for macroscopic movements ( also known as the _ stress - strain relation _ ) , in which inertial effects are taken into account as well . it is derived from the principle of virtual power ( cf . ) , which yields where the symbol stands both for the scalar and for the vectorial divergence operator , is the stress tensor , and is an exterior volume force . for , we adopt the well - known constitutive law with the linearized symmetric strain tensor , which in the ( spatially ) three - dimensional case is given by \varepsilon_{ij}({\bf u}) = \frac12\,( u_{i,j} + u_{j,i} ) ( with the commas we denote space derivatives ) . hence , the explicit expression of depends on the form of the free energy functional and of the pseudopotential of dissipation . the former is a function of the state variables , namely \chi , its gradient , the absolute temperature , and the linearized symmetric strain tensor . according to moreau's approach ( cf . and references therein ) , we include dissipation in the model by means of the latter potential , which depends on the dissipative variables , , and . we will make precise our choice for and below , cf . and . we shall supplement with a zero dirichlet boundary condition on the boundary of , yielding a _ pure displacement _ boundary value problem for , according to the terminology of . however , our analysis carries over to other kinds of boundary conditions on , see remark [ rem - other - b.c . ] . following frémond's perspective , is coupled with the equation of microscopic movements for the phase variable ( cf .
) , leading to .let ( a density of energy function ) and ( an energy flux vector ) represent the internal microscopic forces responsible for the mechanically induced heat sources , and let us denote by and their dissipative parts , and by and their non - dissipative parts .standard constitutive relations yield then , if the volume amount of mechanical energy provided to the domain by the external actions ( which do not involve macroscopic motions ) is zero , the equation for the microscopic motions can be written as where and will be specified according to the expression of and .the natural boundary condition for this equation of motion is where is the outward unit normal to .thus ( cf . )we obtain the homogeneous neumann boundary condition on finally , equation is derived from the internal energy balance where denotes a heat source and and are obtained from and by means of the standard constitutive relations we couple equation with a no - flux boundary condition : implying ( cf . ) the homogeneous neumann boundary condition from the above relations and the following choices for the free energy functional and of the pseudopotential of dissipation ( cf . and ), we derive the pde system within the _ small perturbation assumption _ ( i.e. neglecting the quadratic terms on the right - hand side of the heat equation ) . in agreement with thermodynamics ( cf . and ( * ? ? ?* sec . 4 , 6 ) ), we choose the volumetric free energy of the form where is a concave function of .notice that the symmetric , positive - definite elasticity tensor is pre - multiplied by a function of the phase / damage parameter .in particular , in the case of phase transitions in viscoelastic materials , a meaningful choice for is , or a function vanishing at ( * ? ? ?4.5 , pp .42 - 43 ) .this reflects the fact that we have the full elastic contribution of only in the non - viscous phase , and that such a contribution is null in the viscous one ( i.e. when ) ; for damage models a significant choice is instead ( cf . and ( * ? ? ?6.2 , pp .102 - 103 ) for further comments on this topic ) .the term represents the classical elastic contribution in which the stiffness of the material decreases as approaches , i.e. during the evolution of damage .the term is a _ mixture _ or _ interaction free - energy_. we shall suppose that is a normal integrand , such that for almost all the function is convex , , with -growth , and .hence , the field , , leads to a -laplace type operator in .the prototypical example is , yielding .let us point out that the gradient of accounts for interfacial energy effects in phase transitions , and for the influence of damage at a material point , undamaged in its neighborhood , in damage models .in this sense we can say that the term models nonlocality of the phase transition or the damage process , i.e. the feature that a particular point is influenced by its surrounding . in damage, this leads to possible hardening or softening effects ( cf .also for further comments on this topic ) .gradient regularizations of -laplacian type are often adopted in the mathematical papers on damage ( see for example ) , and in the modeling literature as well ( cf . , e.g. , ) . in a different context , a -laplacian elliptic regularization with has also been exploited in , in order to study a diffuse interface model for the flow of two viscous incompressible newtonian fluids in a bounded domain . 
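for the reader's convenience , the prototypical example mentioned above can be written out explicitly ; in generic notation ( a standard statement , with symbols not tied to the paper's numbering ) , the choice of a p - power density yields the weak form of the p - laplacian :

\[
\phi(x,\nabla\chi)=\frac{1}{p}\,|\nabla\chi|^{p},
\qquad
\langle \mathcal{B}(\chi),v\rangle=\int_{\Omega}|\nabla\chi|^{p-2}\,\nabla\chi\cdot\nabla v\,\mathrm{d}x
\quad\text{for all } v\in W^{1,p}(\Omega).
\]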
in the following, we will also scrutinize another kind of elliptic regularization in , given by the _-laplacian operator on the sobolev - slobodeckij space , hereafter denoted by ( cf .later on for its precise definition ) . recently , fractional laplacian operators have been widely investigated ( cf . , e.g. , and the references therein ) , and used in connection with real - world applications , such as thin obstacle problems , finance , material sciences , but also phase transition and damage phenomena ( cf ., e.g. and ) . for analytical reasons, we will have to assume , which ensures the ( compact ) embedding , in the same way as for .this property will play a crucial role in the _ degenerate _ limit to complete damage , as it did in within the rate - independent context , cf .remark [ rmk : explanation ] for more details . as for the potential , we suppose that with ] .note that , in this way , the values outside ] with a _ nonconvex _ .in such a case , in the derivative needs to be understood as the subdifferential in the sense of convex analysis .the term in accounts for the thermal expansion of the system , with the thermal expansion coefficient assumed to be constant ( cf ., e.g. , ) .indeed , one could consider more general functions depending , e.g. , on the phase parameter and vanishing when .this would be meaningful especially in damage models , where the terms associated with deformations should disappear once the material is completely damaged ( cf ., e.g. , ) .we will discuss the mathematical difficulties attached to this extension in section [ mathdiff ] .for the pseudo - potential , following ( * ? ? ?* sec . 4 , 6 ) we take }(\chi_t ) + a(\chi)\frac{{\varepsilon}({\mathbf{u}}_{t})\mathrm{r}_v{\varepsilon}({\mathbf{u}}_t)}{2}\,,\end{aligned}\ ] ] where is a symmetric and positive definite viscosity matrix , premultiplied by a function of .in particular , for phase change models , one can take for example .the underlying physical interpretation is that the viscosity term vanishes when we are in the non - viscous phase , i.e. in the solid phase .also in damage models the choice is considered , cf .e.g. .the heat conductivity function will be assumed continuous ; for the analysis of system , we will need to impose some compatibility conditions on the growth of and of the heat capacity function in , see hypothesis ( ii ) in section [ ss : assumptions ] .furthermore , in is a non - negative coefficient : for we encompass in our model the _ unidirectionality _ constraint a.e . in .in fact , throughout the paper we are going to use the term _ irreversible _ in connection with the case in which the process under consideration is unidirectional , which is indeed typical of damage phenomena . with straightforward computations , from and using the form of the free energy functional and of the pseudopotential of dissipation , we derive equations , neglecting the quadratic contributions in the velocities on the right - hand side in by means of the aforementioned _ small perturbation assumption _ .this is a simplification needed from the analytical point of view in order to solve the problem .indeed , in a forthcoming paper we plan to tackle the pde system , featuring in addition these quadratic terms in the temperature equation .to do so , we are going to resort to specific techniques , partially mutuated from , however confining the analysis to some particular cases .in fact , to our knowledge only few results are available on diffuse interface models in thermoviscoelasticity ( i.e. 
also accounting for the evolution of the displacement variables , besides the temperature and the order parameter ) : among others , we quote . in all of these papers , the small perturbation assumption is adopted . for , without it in the spatial three - dimensional caseexistence results seem to be out of reach , at the moment , even when the equation for displacements is neglected ( whereas the existence of solutions to the _ full _ phase change model in the unknowns and has been obtained in in ) .this has led to the development of suitable _ weak solvability _notions to handle ( the usually neglected ) quadratic terms , like in ( where however is still taken constant ) . also in , a pde system coupling the displacement and the temperature equation ( with quadratic nonlinearities ) and a _ rate - independent _ flow rule for an internal dissipative variable ( such as the damage parameter ) has been analyzed .rate - independence means that the evolution equation for has no longer the _ gradient flow _ structure of : the term therein is replaced by , viz . in the pseudo - potential , instead ofthe quadratic contribution we have the -homogeneous dissipation term . in the frame of the ( weak ) _ energetic formulation _ for rate - independent systems , suitably adapted to the temperature - dependent case , in existence results have been obtained .a temperature - dependent , _ full _ model for ( rate - dependent ) damage has been addressed in as well , with local - in - time existence results .the main difficulties attached to the analysis of system are : the _ elliptic degeneracy _ of the momentum equation : in particular , we allow for the positive coefficients and to tend to zero simultaneously ; the _ highly nonlinear coupling _ between the single equations , resulting in the the quadratic terms , , and in the heat and phase equations and , respectively ; the _ poor regularity _ of the temperature variable , which brings about difficulties in dealing with the coupling between equations and when we consider the thermal expansion terms ( i.e. we take ) ; the _ doubly nonlinear _ character of , due to the nonsmooth graph and the nonlinear operator ( which on the other hand has a key regularizing role ) .furthermore , if we set in to enforce an irreversible evolution for , the simultaneous presence of the terms and }(\chi_t) ] , provided that ] and , ( cf .) , as well as of the -laplacian type operator , which still has a key role in providing global - in - time estimates for . to tackle this problem , following the approach of we restrict to the yet meaningful case and consider a suitable weak formulation of .it consists ( cf. definition [ def - weak - sol ] later on ) of the _ one - sided _ variational inequality [ weaksol - intro ] and of the following energy inequality for all ] ( again written as single - valued ) , , and in , which is the key step for proving the existence of solutions to the _ pointwise _ subdifferential inclusion .uniqueness results for the _ irreversible _ system , even in the isothermal case , do not seem to be at hand , due to the triply nonlinear character of equation , cf . also remark [ uni - irrev ] ahead . 
nonetheless , both in the reversible and in the irreversible case , in thms .[ teor1 ] , [ teor1bis ] and [ teor3 ] we will prove positivity of the temperature .in fact , under suitable conditions on the initial temperature , for we will also obtain a strictly positive lower bound for .for the analysis of the degenerate limit of ( [ eq0 ] , [ eqi - delta ] , [ eqii ] ) , we have carefully adapted to the present setting techniques from and .these two papers deal with _ complete damage _ in the fully rate - independent case , and , respectively , for a system featuring a rate - independent damage flow rule for and a displacement equation with viscosity and inertia according to kelvin - voigt rheology . in particular, we have extended the results from to the case of a _ rate - dependent _equation for , also coupled with the temperature equation . following , the key observation is that , for any family of solutions to ( [ eq0 ] , [ eqi - delta ] , [ eqii ] ) ( where denotes the _ enthalpy _ ) , it is possible to deduce for the quantities and the estimates for a positive constant independent of .therefore , there exist and such that , up to a subsequence in and in as . according the terminology of , we refer to and , respectively , as the viscous and elastic _ quasi - stresses_. in theorem [ teor5 ] we will focus on the degenerate limit , confining the discussion to the case where and ( viz .the map is nonincreasing for all ) .we refer to remark [ rmk : explanation ] for a thorough justification of these choices . passing to the limit as in and exploiting the above convergences for and will prove that there exist a triple solving the _ generalized _ momentum balance [ degen - intro ] such that the quasi - stresses fulfill in addition to , the notion of weak solution to system arising in the limit consists of the ( weak formulation of the ) enthalpy equation , of the _ one - sided _ variational inequality and of a _ generalized _ total energy inequality , featuring the quasi - stresses and .while referring to remark [ rmk : comparison - with - hk ] for more comments in this direction , we may observe here that is in fact the integrated version in terms of _ quasi - stresses _ of the variational inequality .[ [ plan - of - the - paper . ] ] plan of the paper .+ + + + + + + + + + + + + + + + + + in the next section [ s : main ] we introduce the variational formulation for the initial boundary value problem associated to the pde system , as well as our main assumptions .then , we state theorems [ teor1][teor4 ] on the existence / uniqueness of solutions for the reversible and the irreversible _ non - degenerating _ systems ( i.e. ) . 
the existence thms .[ teor1 ] , [ teor1bis ] , [ teor3 ] , and [ teor4 ] rely on the time - discretization procedure of section [ s : time - discrete ] ; their proof is carried out by passing to the limit with the time discretization in sections [ ss:4.1 ] , [ sec:4.2new ] , [ ss:5.1 ] , and [ ss:5.2 ] .the continuous dependence thm .[ teor2 ] is proved in section [ cd ] .finally , section [ sec:5 ] is devoted to the passage to the degenerate limit .the following table summarizes our results [ cols="<,<,<",options="header " , ][ not:2.1 ] throughout the paper , given a banach space we shall denote by its norm , and use the symbol for the duality pairing between and .hereafter , we shall suppose that we will identify both and with their dual spaces , and denote by the scalar product in , by both the scalar product in , and in , and by and the spaces for we will use the notation we standardly denote by and , for any , by its mean value .given a ( separable ) banach space , we will denote by ;x) ] , respectively ) , the space of functions from ] and have bounded variation on ] , resp . ) finally , throughout the paper we shall denote by the symbols various positive constants depending only on known quantities and by ( respectively ) or ( whenever it turns out to be more convenient ) ( respectively ) the first ( respectively ) second partial derivatives with respect to time of a function .* preliminaries of mathematical elasticity . * in what follows, we shall assume the material to be homogeneous and isotropic , so that the elasticity matrix in equation may be represented by where are the so - called lam constants and is the identity matrix . in order to state the variational formulation of the initial - boundary value problem for , we need to introduce the bilinear forms related to the -dependent elliptic operators appearing in .hence , given a _non - negative _ function , let us consider the continuous bilinear symmetric forms defined for all by where is the viscosity matrix . now , by korn s inequality ( see eg ( * ? ? ? * thm .6.3 - 3 ) ) , the forms and are -elliptic and continuous .namely , there exist constants , only depending on and , such that such that for all we shall denote by and the linear operators associated with and , respectively , namely it can be checked via an approximation argument that the following regularity results hold : in fact , the calculations we will develop extend to the case of an anisotropic and inhomogeneous material , for which the elasticity and viscosity matrices and are of the form and , with functions satisfying the classical symmetry and ellipticity conditions ( with the usual summation convention ) clearly , ensures , whereas not only does imply , but the -regularity also allows us to perform the third a priori estimate of section [ ss:3.2 ] rigorously . in what follows we will use the following elliptic regularity result ( see e.g. ( * ? ? ?6.3-.6 , p. 296 ) , cf .also ) : finally , in the weak formulation of the momentum equation , besides and we will also make use of the operator [ [ useful - inequalities . ] ] useful inequalities .+ + + + + + + + + + + + + + + + + + + + we recall the celebrated gagliardo - nirenberg inequality ( cf . 
) in a particular case : for all , \phi(\cdot,\nabla\chi(\cdot ) ) \in l^1 ( \omega) ] we will just distinguish the two cases and .we mention in advance that the -regularity for derives from boccardo&gallout - type estimates on the enthalpy equation , combined with the gagliardo - nirenberg inequality .we refer to the forthcoming sec .[ ss:3.2 ] and to for all details .[ prob ] given , , find functions ;w^{1,r'}(\omega)^*)\quad \text{for every } 1 \leq r < \frac{d+2}{d+1 } , \\ & \label{reg - u } { \mathbf{u}}\in h^1(0,t;h_0^{2}(\omega;\r^d ) ) \cap w^{1,\infty } ( 0,t;{h_{0}^1(\omega;\r^d ) } ) \cap h^2 ( 0,t;l^2(\omega;\r^d ) ) , \\ & \label{reg - chi } \chi \in l^\infty ( 0,t;w^{1,p } ( \omega ) ) \cap h^1 ( 0,t;l^2 ( \omega)),\end{aligned}\ ] ] fulfilling the initial conditions the equations ; w^{1,r'}(\omega ) ) \cap w^{1 , r'}(0,t ; l^{r'}(\omega ) ) \text { and for all } t\in ( 0,t ] , \end{aligned } \\\label{eq1d } & { \mathbf{u}}_{tt}+{\mathcal{v}\left({(a(\chi)+\delta)}{{\mathbf{u}}_t}\right)}+{\mathcal{e}\left({b(\chi)}{{\mathbf{u}}}\right ) } + { \mathcal{c}_\rho } ( \theta(w ) ) = \mathbf{f } \quad \text{in \quad a.e.\ in } ( 0,t),\end{aligned}\ ] ] and the subdifferential inclusion }(\chi_t ) + { \mathcal{b}}(\chi)+\beta(\chi)+ \gamma(\chi ) \ni - b'(\chi)\frac{\varepsilon({\mathbf{u } } ) \mathrm{r}_e\varepsilon({\mathbf{u}})}{2}+\theta({w } ) \text { in } w^{1 , p}(\omega)^ * \text { a.e.\ in }. \end{aligned}\end{aligned}\ ] ] [ w - radon ] since ;w^{1,r'}(\omega)^*) ] one has .combining this with the fact that , we have that is a radon measure on for all ] , where is a primitive of .moreover , complies with the _ total energy equality _ .if , in addition , complies with hypothesis ( vi ) , then the further regularity result holds true .if in addition complies with , then holds .[ rmk : afterthm2 ] let us justify the additional regularity for , by developing on a purely _ formal _ level , enhanced estimates on the enthalpy equation , based on the stronger hypothesis ( viii ) .indeed , we ( formally ) choose as a test function for : re - integrating by parts in time and exploiting we obtain for any : now , we observe that and that , due to the poincar inequality and to the fact that , there holds therefore , taking into account the continuous embedding , for the l.h.s .of we have the lower bound clearly , relying on we can absorb the second term on the r.h.s .of into its left - hand side . on the other hand , using the fact that and taking into account the growth of , we can estimate the last summand on the r.h.s .by where the first inequality follows from the fact that thanks to and , and is chosen sufficiently small , in such a way as to absorb into the r.h.s . of . plugging and intowe immediately deduce an estimate for in the space .observe that , as a consequence of , we have , with .therefore , with the conjugate exponent of .a comparison in entails an estimate for , and we conclude .in fact , the -estimate for ( and accordingly , the regularity required of the test functions ) could be slightly improved by resorting to refined interpolation arguments : however , to avoid overburdening this exposition we choose not to detail this point . 
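since the gagliardo - nirenberg inequality is invoked repeatedly in these estimates , we recall one standard formulation for a bounded lipschitz domain \Omega\subset\mathbb{R}^d ( the exponent choices in the paper's proofs are particular cases ) :

\[
\|v\|_{L^{q}(\Omega)}\le C\,\|v\|_{W^{1,r}(\Omega)}^{\theta}\,\|v\|_{L^{s}(\Omega)}^{1-\theta},
\qquad
\frac1q=\theta\Big(\frac1r-\frac1d\Big)+\frac{1-\theta}{s},\quad \theta\in[0,1].
\]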
to prove thm .[ teor1bis ] , we will need to combine the time - discretization procedure for system , with a truncation of the function in the elliptic operator of , cf .problem [ prob : rhoneq0 ] later on .hence , in order to make the estimates developed in rmk .[ rmk : afterthm2 ] rigorous , we will have to pass to the limit in two phases , first with the time - step , and then with the truncation parameter , cf .the discussion in sec .[ sec:4.2new ] . for the _isothermal _ reversible system , in both cases and , we obtain a continuous dependence result , in particular yielding uniqueness of solutions , under the additional convexity property for in hypothesis ( vii ) . indeed , the latter ensures the monotonicity inequality for , which is crucial for the continuous dependence estimate .we also need to restrict to the case in which is constant .[ teor2 ] let , .assume that hypotheses ( iii)(v ) and ( vii ) are satisfied , and , in addition , that let , , be two sets of data complying with and , and , accordingly , let , , be the associated solutions on some ] , , and , which need to be properly identified when passing to the limit in the time - discretization scheme we are going to set up in section [ s : time - discrete ] .we now discuss the attached difficulties on a formal level , treating and } ] by comparison ) , in .to our knowledge , this can be proved by testing by ( cf . also ) .the related calculations ( which we will develop in sec.[s : time - discrete ] , on the time - discrete level , for the _ isothermal _ irreversible system ) would involve an integration by parts of the terms on the right - hand side of .thus , they would rely on an estimate in of the term . however , presently this enhanced bound for does not seem to be at hand due to the poor time - regularity of , cf . .that is why , for the temperature - dependent irreversible system we are only able to obtain the existence of solutions to a suitable _ weak formulation _ of , mutuated from , where we also restrict to the particular case in which in the present irreversible context it is sufficient to choose as in to enforce the constraint ] , it is not difficult to check that is equivalent to the system [ ineq - system ] with a.e . in ( and denoting the duality pairing between and , cf .notation [ not:2.1 ] ) . in order to see this ,it is sufficient to subtract from , and use the definition of .however , for reasons analogous to those mentioned in the above lines , the proof of is at the moment an open problem .therefore , following , in the forthcoming definition [ def - weak - sol ] we weakly formulate by means of , ( an integrated version of ) , and the _ energy inequality _ below , in place of .[ def - weak - sol ] let .we call a triple as in a _ weak solution _ to problem [ prob ] if , besides fulfilling the weak enthalpy and momentum equations , it satisfies for almost all , as well as with in the following sense : and the energy inequality for all ] , for , and for almost all assume now hypotheses ( iii)(v ) .let be as in , and suppose in addition that such that comply with and . then , }(\chi_t ( x , t ) ) \ { \text{for a.a.}}\ , ( x , t)\in \omega \times ( 0,t ) \text { s.t . 
} \\\chi_t + \zeta+ { \mathcal{b}}(\chi)+\xi+ \gamma(\chi ) = -b'(\chi)\frac{\varepsilon({\mathbf{u } } ) \mathrm{r}_e\varepsilon({\mathbf{u}})}{2}+\theta(w ) \qquad \text{a.e.~in } \\omega \times ( 0,t ) .\end{gathered}\ ] ] in order to prove it is sufficient choose in , test by , integrate in time , perform the calculations in the proof of thm .[ teor1 ] , and add the resulting equalities with the energy inequality . the second part of the statement can be proved considering the energy functional , \qquad\mathscr{e}(\chi):= \phi(\chi)+\int_\omega w(\chi)\dd x.\ ] ] it follows from hypotheses ( iv ) and ( v ) , as well as the chain rule of ( * ? ? ?* lemma 3.3 ) that , if complies with and , then the map is absolutely continuous on and fulfills therefore , differentiating in time and using we conclude that comply with , where the duality pairing is replaced by the scalar product in .likewise , yields .again on account of and , it is not difficult to infer from that }(\chi_t) ] a.e . in , as well as satisfying equations and , with replaced by .[ uni - irrev ] uniqueness of solutions for the _ irreversible _ system , even in the isothermal case , is still an open problem .this is mainly due to the triply nonlinear character of ( cf.also for non - uniqueness examples for a general doubly nonlinear equation ) .[ [ a - more - general - dissipation - potential . ] ] a more general dissipation potential .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as observed in remark [ more - general - alpha ] later on , in thm .[ teor4 ] we could consider a more general dissipation potential in . indeed , in place of subdifferential operator }. } \end{gathered}\ ] ]first , in section [ ss:3.1 ] we will approximate problem [ prob ] via time discretization .in fact , in the reversible case with , we will set up an _ implicit _ scheme ( cf .problems [ probk - rev ] and [ prob : rhoneq0 ] ) , whereas for the irreversible system with , we will employ the _ semi - implicit _ scheme of problem [ probk - irr ] .moreover , we will tackle separately the discretization of the isothermal irreversible system in problem [ probk - irr - iso ] .we refer to remarks [ rmk : comparison-1 ] and [ rem : irrev - discr ] for a thorough comparison between the various time - discretization procedures , and more comments .second , in sec .[ ss:3.2new ] we will prove existence results for problems [ probk - rev][probk - irr - iso ] .third , in sec .[ ss:3.2 ] we will perform suitable a priori estimates . [ not - alpha ] in what follows , also in view of the extension mentioned at the end of sec.[glob - irrev ] ( cf .[ more - general - alpha ] ) , we will use and as place - holders for } ] .we consider an equidistant partition of ] a.e . in .therefore , at the discrete level we loose all positivity information on the coefficient .the lack of the constraint ] a.e . in with on and and , we conclude that ( the bilinear form associated with ) the operator on the left - hand side of the above equation is continuous and coercive .hence , by lax - milgram s theorem , equation admits a ( unique ) solution . since the right - hand side of is in , relying on the regularity results of , e.g. , , we conclude that in fact .the analysis of follows the very same lines .[ [ step-3-discrete - equation - for - w . 
] ] step : discrete equation for .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + finally , let us consider the functional now , is lower semicontinuous w.r.t .the topology of .furthermore , in view of and of the young inequality we have for a fixed choosing sufficiently small , we thus obtain that there exist two positive constants and such that this shows that the sublevels of are bounded in .hence , again by the direct method in the calculus of variations , we conclude that there exists , and satisfies the associated euler equation , namely .[ [ step-4-positivity . ] ] step : positivity .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let us assume in addition that holds , and prove by induction on .preliminarily , we prove by induction on that clearly holds for thanks to . it remains to show that , if a.e . in , then a.e . in .indeed , let us test by .taking into account the definition of , we have that . combining this with the inequality and noting that a.e . in , also in view of we obtain yielding a.e . in , whence .now , to prove , we observe that holds for due to .suppose now that a.e . in : in order to prove that a.e . in , we test by . with analogous calculations as abovewe obtain where the last inequality is due to the fact that a.e . in , and that a.e . in by the previously proved and the irreversibility constraint .thus , we conclude .the existence result for problems [ probk - rev ] and [ prob : rhoneq0 ] reads : [ lemma : ex - discr - rev ] let .assume hypotheses ( i ) and( iii)(v ) , and on the data . furthermore, if , assume hypothesis ( ii ) ; if , assume hypothesis ( viii ) and in addition that .then , problem [ probk - rev ] admits at least one solution .moreover , if a.e . in , and for a.a . , then any solution of problem [ probk - rev ] fulfills * step : existence of solutions . * our argument relies on existence results for elliptic systems from the theory of pseudo - monotone operators which can be found , e.g. , in ( * ? ? ?ii ) . indeed, we observe that system can be recast as denoting by the operator acting on the unknown and by the vector of the terms on the r.h.s . of the above equations, we can reformulate system in the abstract form in fact , mimicking for example the calculations in ( * ? ? ?* lemma 7.4 ) , it can be checked that is a pseudo - monotone operator ( according to ( * ? ? ?ii , def .2.1 ) ) on , coercive on that space .therefore , the leray - lions type existence result of ( * ? ? ?ii , thm .2.6 ) applies , yielding the existence of a solution to .* step : non - negativity of .* let us assume in addition that a.e.in and a.e . in .then a.e . in . to prove, we proceed by induction on and show that , if a.e . in , then a.e . in .indeed , let us test by .taking into account the definition of , we have that ( here we have kept also to encompass the case with thermal expansion , cf . below ) . combining this with the inequality and noting that a.e . in , also in view of we obtain yielding a.e . in , whence . under the additional hypothesis ( viii ) ( which gives ) , an analogous proof of existence of solutionscan be given for problem [ prob : rhoneq0 ] , hence we omit to give the details . [ [ notation - and - auxiliary - results . 
] ] notation and auxiliary results .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + hereafter , for a given banach space and a -tuple , we shall use the short - hand notation we recall the well - known _ discrete by - part integration _formula we consider the left - continuous and right - continuous piecewise constant , and the piecewise linear interpolants of the values , namely the functions .}\ ] ] note that for ] we have and as .propositions [ prop : aprio ] and [ prop : aprio-2 ] collect in the cases and several a priori estimates on the approximate solutions , obtained by interpolation of the discrete solutions to problems [ probk - rev ] , [ probk - irr ] , [ probk - irr - iso ] , and problem [ prob : rhoneq0 ] , respectively .[ prop : aprio ] let .assume hypotheses ( i)(v ) and on the data .then , 1 .in the case there exist a constant such that for the interpolants of the solutions to problem [ probk - rev ] and to problem [ probk - irr ] there holds : ;w^{1,r'}(\omega)^ * ) } \leq s , \\ & \label{aprio8}\sup_{\tau>0 } \|\theta({\overline{{{w}}}_{\tau}})\|_{l^{2+\epsilon}(0,t;l^{2+\epsilon}(\omega ) ) } \leq s \quad \text{for any .}\end{aligned}\ ] ] 2 .if in addition there exists such that moreover , if also fulfills hypothesis ( vi ) , then 3 . in the isothermal case with ,if ( cf . ) and also fulfills hypothesis ( vi ) , estimates hold .moreover , there exists such that for ( the interpolants of ) the solutions to problem [ probk - irr - iso ] the constants in , , and , also depend on the parameters , , and , respectively .[ prop : aprio-2 ] let and .assume hypotheses ( i ) , ( iii)(v ) , and hypothesis ( viii ) ; suppose that the data comply with , and in addition that .then , for the interpolants of the solutions to problem [ prob : rhoneq0 ] estimates hold with a constant _ independent _ of , whereas estimates , and ( under the additional hypothesis ( vi ) ) hold for a constant _ depending _ on .moreover , there exists a constant such that we will treat the proofs of propositions [ prop : aprio ] and [ prop : aprio-2 ] in a unified way , developing a series of a priori estimates . [ [ proof - of - proposition - propaprio . ] ] proof of proposition [ prop : aprio ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + most of the calculations below will be detailed on the discretization scheme for the full _reversible _ system , and whenever necessary we will outline the differences in comparison with the discrete systems of problems [ probk - irr ] and [ probk - irr - iso ] . furthermore , for each estimate we will specify the values of the parameters and for which it is valid and , to make the computations more readable , we will illustrate them first on the time - continuous level , i.e. referring to system . *first a priori estimate for , : * _ we test by by , by , add them and integrate in time .this is the so - called _energy estimate_. _ we test by .note that for all . since ] a.e. in , thus we may again obtain . *second a priori estimate for , : * _ following ( see also ) , we test by and integrate in time ._ we test by .this gives rise to the following terms on the left - hand side : the latter inequality due to .moreover , always on the l.h.s .we have ( where stands for the vectorial laplace operator ) . 
on the right - hand side, we have where the latter inequality follows from .we now move the integral terms to the right - hand side .let us fix such that ( where is the exponent in ) .then , where the first and second inequalities respectively follow from the hlder and young inequalities , with as in , the third one from , and the last one taking into account estimates for , for , which in particular yields that a.e . in for some , and from choosing .furthermore , taking into account that , one easily checks that where the second inequality follows from the young inequality , from , and from estimating the latter term as in .analogously , again using that and that a.e . in , we have collecting and summing over the index , we obtain applying the discrete gronwall lemma once again , we conclude estimate , whence .it is immediate to check that calculations can also be performed on the discrete momentum equation in problem [ probk - irr - iso ] .[ remk : added - revision ] the calculations for the _ second a priori estimate _ carry over to the case the operator is replaced by the -laplacian , provided that .indeed , this ensures the continuous embedding for some , which is crucial in the above calculations , cf .. * third a priori estimate for , : * _ boccardo&gallout - type estimate on . _ as in the proof of ( * ? ? ?4.2 ) , we test equation by , where \text { is defined by } \pi({{w}})= 1-\frac1{(1+{{w}})^{\varsigma } } \quad \text { for some } \varsigma>0.\ ] ] note that is well - defined , since a.e . in , and it belongs to , as is lipschitz continuous . such a test function has been first proposed in , as a simplification of the technique by boccardo&gallout .we shall denote by the primitive of such that ( hence for ) .summing over , we obtain where the first inequality follows from , the fact that , and the second one from the convex analysis inequality and from the fact that , due to assumption , we have taking into account that for almost all and all , and relying on and on , we conclude that now , we argue in the very same way as in ( * ? ? ?* proof of prop .4.2 ) . combining the hlder and gagliardo - nirenberg inequalities ( cf . ) with the previously proved estimate and with , we see that ( cf .* formula ( 4.35 ) ) ) where the restriction on the index in fact derives from the application of the gagliardo - nirenberg inequality .next , for a sufficiently small such that from fulfills , there holds ( where we have omitted to indicate the dependence of the constants on and ) .the first inequality follows from , the second one from the gagliardo - nirenberg inequality with and : in fact the constraints in accord with formula .finally , the last inequality in is due to the young inequality , with depending on the constant to be suitably specified , under the additional condition that fulfills combining with , we immediately obtain hence , we choose in such a way as to absorb the second term on the right - hand side into the left - hand side .therefore , on account of , which yields via and the poincar inequality . finally , estimate ensues from and .observe that , when performing this estimate on the semi - implicit equation , we will obtain on the r.h.s . of the term , and we can estimate thanks to . *fourth a priori estimate for , : * _ comparison in . _it follows from estimates , , , and from the regularity result , that thus , for estimate follows from a comparison in .the same argument carries over to and to . 
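several of the estimates above and below are closed by means of the discrete gronwall lemma ; one standard version , recalled here for convenience , reads : if A , a_k , b_k \ge 0 and

\[
a_n \le A + \tau\sum_{k=1}^{n-1} b_k\,a_k \quad\text{for } n=1,\dots,N,
\qquad\text{then}\qquad
a_n \le A\,\exp\Big(\tau\sum_{k=1}^{n-1} b_k\Big)\quad\text{for } n=1,\dots,N.
\]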
*fifth a priori estimate for , : * _ comparison in ._ in view of estimates and of , a comparison argument in yields estimate . the same for . *sixth a priori estimate for , : * _ we test by and integrate in time ._ we test by . arguing as for via convexity inequalities and referring to notation for the symbol , we get where the last inequality follows from a.e . in , andthe fact that is lipschitz continuous on \doteq i_7 ] , we have . furthermore , we find where the second inequality also follows from the fact that is constant . collecting and the above inequalities , we thus infer where we set . then , estimate ensues via the discrete gronwall lemma , taking into account that in view of , , and .ultimately , follows from and the regularity result . *eighth a priori estimate for and in the isothermal case : * _ comparison in ._ from a comparison argument in , we conclude that is estimated in .then , and follow from the fact that .[ [ proof - of - proposition - propaprio-2 . ] ] proof of proposition [ prop : aprio-2 ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the a priori bounds follow from the calculations developed for the _ first a priori estimate _ , which also yields and for a constant independent of .the boccardo&gallout - type _ third estimate _ is replaced by the following * ninth a priori estimate for , : * _ test by w. _ we test by . summing over and recalling we obtain for a suitably small constant , where we have used that the terms are uniformly bounded ( by a constant depending on ) .hence , estimate follows from using that , taking into account the previously proved bounds and , and applying the discrete gronwall lemma .estimate then ensues from a comparison in , in view of the previously proved estimates . finally ,relying on , we are able to perform the analogue of the _ second a priori estimate _ on the momentum equation in the case as well , as the following calculations show . 
*tenth a priori estimate for , : * _ test by and integrate in time ._ we test by .every term can be dealt with like in the _ second estimate _, in addition we need to estimate the term choosing sufficiently small in such a way as to absorb into , and estimating via ( observe that is lipschitz continuous ) , we re - obtain , for a constant _ depending _ on .moreover , estimate ensues from a comparison in .finally , estimates and can be obtained by repeating on equation the very same calculations developed for the _ sixth estimate _ : again , we get bounds depending on the truncation parameter .[ rmk : est - indep - delta ] a close perusal of the proof of proposition [ prop : aprio ] , and in particular of the calculations performed in the second and fourth a priori estimates , reveals that in fact estimates hold for constants _ independent _ of in both cases and .this will play a key role in the proof of theorem [ teor5 ] .we conclude this section by mentioning in advance , for the reader s convenience , that the relevant estimates for the proof of thm .[ teor1 ] are the _first , second , third , fourth , fifth _ , and _sixth a priori estimate _ ; for the proof of thm .[ teor1bis ] are the _ first , fourth , sixth , ninth _ , and _tenth a priori estimate _ ; for the proof of thm .[ teor3 ] are the _first , second , third , fourth _ , and _fifth a priori estimate _ ; for the proof of thm .[ teor4 ] are the _first , second , fourth , seventh _ , and the _ eighth a priori estimate_.preliminarily , we rewrite equations in terms of the interpolants and , namely , \end{aligned } \\ & \label{eq - u - interp } \partial_t{\widehat{{\mathbf{u}}}_{\tau}}(t)+ { \mathcal{v}\left({(a({\overline{\chi}_{\tau } } ( t ) ) + \delta)}{\partial_t { { { \mathbf{u}}}_{\tau}}(t)}\right ) } + { \mathcal{e}\left({b({\overline{\chi}_{\tau } } ( t))}{{\overline{{\mathbf{u}}}_{\tau}}(t)}\right ) } = { \overline{\mathbf{f}}_{\tau}}(t ) \quad { \text{a.e .in\,}}\ , \omega , \ { \text{for a.a.}}\ , t \in ( 0,t ) , \\ & \label{eq - chi - interp } \begin{aligned } \partial_t { { \chi}_{\tau}}(t ) + { \mathcal{b}}({\overline{\chi}_{\tau}}(t ) ) + { \overline{\xi}_{\tau}}(t)+\gamma({\overline{\chi}_{\tau}}(t ) ) & = - b'({\underline{\chi}_{\tau}}(t))\frac{\eps({\underline{{\mathbf{u}}}_{\tau}}(t))\mathrm{r}_e \eps({\underline{{\mathbf{u}}}_{\tau}}(t))}2 + \theta({\overline{{{w}}}_{\tau}}(t))\\ & \qquad \qquad { \text{a.e .in\,}}\ , \omega , \ { \text{for a.a.}}\ , t \in ( 0,t ) , \end{aligned}\end{aligned}\ ] ] where for later use in wehave already integrated by parts in time , and is as in . in what follows we will take the limit of as by means of compactness arguments , combined with techniques from maximal monotone operator theory. _ step : compactness . _first of all , we observe that due to estimates and , there holds therefore , , joint with and well - known weak and strong compactness results ( cf . ) , yield that there exist a vanishing sequence of time - steps and as in such that as l^2(0,t;l^{2}(\omega;\r^d ) ) l^2(0,t;h^1(\omega;\r^d ) ) , } \end{array}\ ] ] as well as furthermore , if in addition complies with and , then , due to we also have the enhanced regularity , and the strong convergence as for , estimates and a generalization of the aubin - lions theorem to the case of time derivatives as measures ( see e.g. ( * ? ? ?* chap . 7 , cor . 
7.9 ) )yield that there exists as in such that , up to the extraction of a further subsequence , as there hold 1\leq s<\infty ] .taking into account the a priori bound of in , we then conclude that and . } \end{array}\ ] ] with ;{w}^{2,{\mathfrak{s}}}(\omega)') ] and that we observe that the minimum problem yields for every that ( recall the short - hand notation for ) .writing necessary optimality conditions for the above minimum problem , we infer \text { and all } \eta \in w^{1,p}(\omega ) \text { with } 0 \leq \eta \text { and } \eta \leq { \underline{\chi}_{\tau}}(t ) \ { \text{a.e .in\,}}\ , \omega , \end{aligned}\ ] ] where we have used the short - hand notation ( cf . ) letting in and dividing the resulting inequality by , we deduce that \\ & \text { and all } \varphi \in w^{1,p}(\omega ) \text { s.t.\ there exists with } 0 \leq \nu \varphi + { \overline{\chi}_{\tau}}(t ) \leq { \underline{\chi}_{\tau}}(t ) \ { \text{a.e .in\,}}\ , \omega . \end{aligned}\ ] ] choosing ( observe that it complies with the constraint above , upon taking ) , we therefore obtain therefore , upon summing over the index we deduce the _ discrete version _ of the energy inequality for all , viz . where we have estimated the last term on the right - hand side of using that , thanks to the lipschitz continuity of ._ step : compactness ._ in view of the a priori estimates from proposition[ prop : aprio ] , we infer that there exist a vanishing subsequence and limit functions such that convergences , , hold true as .observe that in particular yields that and a.e . in .arguing as in the proof of ( * ? ? ?* lemma 5.11 ) , we now prove that indeed , ( * ? ? ?* lemma 5.2 ) gives a sequence of test functions for , fulfilling observe that the first of and convergences yield in particular we have where the first inequality follows from and the second one from elementary algebraic manipulations . now , choosing in ( which we are allowed to do thanks to ) and integrating in time , we obtain due to the bounds and , and to and .we also have where the second inequality follows from and , and the last passage is due to . taking into account that in by , we also prove that as . in this way ,from we conclude .observe that , combined with the bound then yields ._ step : passage to the limit ._ arguing in the very same way as for the proof of thm. [ teor1 ] , it is possible to prove that solve equations and .it now remains to prove the variational inequality , together with , and the energy inequality .as for the latter , it is sufficient to pass to the limit as in . for this , we use convergences , , , , , as well as , which in particular yields clearly , the last term on the right - hand side of tends to zero . since the argument for is perfectly analogous to the one developed in the proof of ( * ? ? ?4.4 ) , we refer the reader to for all details and here just outline its main steps . passing to the limit in as with suitable test functions from (* lemma 5.2 ) , we prove that for almost all from this , arguing as in the proof of ( * ? ? ?4.4 ) we deduce that for almost all therefore , we take denoting the characteristic function of the set . fromwe deduce that , with this inequality holds .moreover , it is immediate to check that also complies with ._ step : strict positivity of the temperature ._ suppose that holds : the discrete strict positivity and convergences yield that , in the limit , for almost all . 
therefore , ensues .[ ss:5.2 ] _ step : compactness ._ for the interpolants of the solutions of the discrete problem [ probk - irr - iso ] , estimates and hold . therefore , standard strong and weak compactness results yield that there exist fulfilling and a subsequence such that convergences and hold .moreover , estimates also imply that has the enhanced regularity , and that furthermore , there exist and such that , possibly along a further subsequence , _ step : passage to the limit ._ relying on convergences , , and , we take the limit of the discrete momentum equation . as for, we observe that , thanks to estimate and the second of , there holds therefore , also taking into account we pass to the limit in and conclude fulfill , with replaced by .furthermore , combining with we have thanks to ( * ? ? ?2.5 , p. 27 ) we conclude that a.e. in finally , testing equation by and integrating in time , with calculations analogous to we find for all ] has never been specifically used , therefore thm . [ teor4 ] extends to a maximal monotone operator as in , observing that , up to perturbing with an affine function , it is not restrictive to suppose that , , be two solution pairs like in the statement of theorem [ teor2 ] and set .taking into account that is constant ( cf . ) , hence for ( cf . ) , it is immediate to check that fulfill a.e . in , we test by and integrate in time . recalling ,it is not difficult to infer where we have whereas , the lipschitz continuity of on bounded intervals ( cf . ) and the hlder inequality yield where in the last inequality we have exploited the embeddings and ( * ? ? ?16.4 , p. 102 ) , with a suitable constant to be chosen later and the constant also depending on .moreover , we get noting that we obtain from that next , we test by integrate the resulting equation in time . with elementary computations , also taking into account the lipschitz continuity of , the monotonicity of , and the crucial inequality , we get using and the fact that we get where the last inequality is obtained arguing as for . again exploiting the lipschitz continuity of on bounded intervals andthe bound for in , we get where the last estimate also follows from the continuous embedding and the young inequality .collecting now , we arrive at summing up and and choosing , we conclude the application of the standard gronwall lemma gives immediately the desired continuous dependence estimate .[ slaplcd ] if we replace the -laplacian with the linear -laplacian in the equation for , the continuous dependence estimate of theorem [ teor2 ] can be performed without assuming to be constant ( cf . ) .indeed , in this case we would be able to deal with the additional term , which results from subtracting the equations fulfilled by solution pairs , .it would be possible to estimate it by means of the -norm of , which would pop in on the left - hand side of .we now address the passage to the _ degenerate _limit in the full system . for technical reasons which will be clarified in remark [ rmk : explanation ] later on , we focus on the _ irreversible _ case , and neglect the thermal expansion term in the momentum equation , i.e. take .furthermore , we confine the discussion to the case , in which , for the coefficients of _ both _ the elliptic operators in are truncated , cf .remark [ rmk : added ] below . 
in particular , we will take the functions and of the form [ trivial - case ] the choice and in and the truncation of both coefficients would lead to the momentum equation for which the asymptotic analysis would be less meaningful in the case of an _ irreversible _ evolution for . for , starting from an initial datum with , we would have for all ] the _ energy inequality _let us mention in advance that estimate for holds true only for the solutions obtained through the time - discretization procedure of section [ ss:3.2 ] .such solutions shall be referred to as _approximable_. indeed , on the one hand , remark [ rmk : est - indep - delta ] ensures that the _ discrete _ estimates are valid with constants independent on : hence they are inherited by the approximable solutions , yielding estimate below . on the other hand , the calculations developed for the fourth a priori estimate in sec .[ ss:3.2 ] suggest that , in order to prove for _ all _ weak solutions to , it would be necessary to test by with as in .this is not an admissible choice due to the poor regularity of . since we do not dispose of a uniqueness result for the _ irreversible full _ system , we can not conclude for _ all _ weak solutions ( in the sense of def .[ def - weak - sol ] ) , and therefore we will restrict to _ approximable _ solutions . [prop : indepe - delta ] assume hypotheses ( i ) , ( ii ) , and ( iv ) with , conditions on the data , and suppose that are given by .then , there exists a constant such that for all and for all _ ( approximable ) weak _ solutions to the irreversible full system , the following estimates hold ;w^{1,r'}(\omega)^ * ) } \leq \overline{s}.\end{aligned}\ ] ] estimates are straightforward consequences of the energy inequality , taking into account that for a constant independent of ] ( with a suitable subsequence of from ) , and for all there holds [ rmk : comparison - with - hk ] let us briefly compare the concept of weak solution ( to the _ degenerating _ irreversible full system ) arising from , with the notion of weak solution ( to the _ non - degenerating_ irreversible full system ) given in definition [ def - weak - sol ] , in the case in which .suppose that the functions in and have further regularity properties , and that a.e . in .then , holds a.e . 
in , hence it is immediate to realize that coincides with .furthermore , subtracting from the weak enthalpy equation tested by , we obtain a generalized form of the energy inequality for almost all and all , } \\ & \label{convergences - degen - u-1 } { \mathbf{u}}_{\delta_k } { { \rightharpoonup^*}}{\mathbf{u}}\text { in , } \\ & \label{convergences - degen - mu } { { \mbox{\boldmath}}}_{\delta_k } { \rightharpoonup}{{\mbox{\boldmath}}}\text { in , } \\ & \label{convergences - degen - eta } { { \mbox{\boldmath}}}_{\delta_k } { { \rightharpoonup^*}}{{\mbox{\boldmath}}}\text { in , } \\ & \label{convergences - degen - chi-1 } \chi_{\delta_k } { { \rightharpoonup^*}}\chi \text { in , } \\ & \label{convergences - degen - chi-2 } \chi_{\delta_k } \to \chi \text { in , } \end{aligned}\ ] ] the latter convergence due to the compactness results in and the compact embedding .observe that and respectively yield ;l^2 ( \omega;\r^d)) \mathrm{c}^0_{\mathrm{weak}}([0,t];w^{1,p } ( \omega)) ] on which .notice that on these cylinders for some .hence , exploiting convergence , we infer that there exists such that , for any , we have for all .thus also for all ] .hence there exists such that for all , and , by , there exists such that for therefore , since by . also exploiting, we succeed in taking the limit of the left - hand side of . as for the right - hand side , we use and argue in the following way where we have used that , thanks to and , uniformly on , thus the last inequality e.g. follows from the lower semicontinuity result of .finally , follows from taking the limit as of the total energy inequality , written on the interval for _ any _ ] . to identify ,we take the as of the first , second , fourth , and fifth term in , exploiting convergences , , , , as well as , and relying on lower semicontinuity arguments .therefore we conclude that holds .finally , inequality follows combining the following facts : on the one hand , since is uniformly bounded due to estimates , the dominated convergence theorem ensures on the other hand , on account convergences , by weak lower semicontinuity arguments we have that is greater or equal than the right - hand side of .this concludes the proof .these are the reasons why we have restricted the analysis of the degenerate limit to the irreversible system . within this setting, we further need to assume . indeed , because of the lack of estimates on for , we would not be able to the limit in the term in as .we also point out that , seemingly , the total energy inequality can not be improved to an inequality holding on any subinterval .indeed , for the sequence only the weak convergence is available , which does not allow us to take the limit of the right - hand side of but for .finally , observe that the proof of thm .[ teor5 ] simplifies if the operator is given by the nonlocal -laplacian operator . in this case , in order to pass to the limit in it is no longer necessary to prove the strong convergence for .in fact , the term in is replaced by , which can be dealt with by weak convergence arguments due to the linearity of the operator .the authors would like to thank all the referees for their careful reading of the paper , and in particular one of them for a very useful suggestion on how to improve thm .[ teor5 ] .they are also grateful to christiane kraus and christian heinemann for fruitful discussions on some topics related to this paper .h. 
brezis : opérateurs maximaux monotones et semi - groupes de contractions dans les espaces de hilbert , north - holland mathematics studies , no . 5 , north - holland publishing co. , amsterdam - london ; american elsevier publishing co. , inc . , new york , 1973 . p. krejčí , e. rocca , j. sprekels : liquid - solid phase transitions in a deformable container , contribution to the book `` continuous media with microstructure '' on the occasion of krzysztof wilmański's 70th birthday , springer ( 2010 ) , 285 - 300 .
in this paper , we analyze a pde system arising in the modeling of phase transition and damage phenomena in thermoviscoelastic materials . the resulting evolution equations in the unknowns ( absolute temperature ) , ( displacement ) , and ( phase / damage parameter ) are strongly nonlinearly coupled . moreover , the momentum equation for contains -dependent elliptic operators , which degenerate at the _ pure phases _ ( corresponding to the values and ) , making the _ whole _ system degenerate . this is why we have to resort to a suitable weak solvability notion for the analysis of the problem : it consists of the weak formulations of the heat and momentum equations and , for the phase / damage parameter , of a generalization of the principle of virtual powers , partially borrowed from the theory of rate - independent damage processes . to prove an existence result for this weak formulation , an approximating problem is introduced , where the elliptic degeneracy of the displacement equation is ruled out : in the framework of damage models , this corresponds to allowing for _ partial damage _ only . for such an approximate system , global - in - time existence and well - posedness results are established in various cases . then , the passage to the limit to the degenerate system is performed via suitable variational techniques . * key words : * phase transitions , damage phenomena , thermoviscoelastic materials , elliptic degenerate operators , nonlocal operators , global existence of weak solutions , continuous dependence . * ams ( mos ) subject classification : * 35k65 , 35k92 , 35r11 , 80a17 , 74a45 .
it is well - known that exact inference in _ tree - structured _ graphical models can be accomplished efficiently by message - passing operations following a simple protocol making use of the distributive law .it is also well - known that exact inference in _ arbitrary _ graphical models can be solved by the junction - tree algorithm ; its efficiency is determined by the size of the maximal cliques after triangulation , a quantity related to the treewidth of the graph .figure [ fig : examples_intro ] illustrates an attempt to apply the junction - tree algorithm to some graphical models containing cycles . if the graphs are not chordal ( ( a ) and ( b ) ) , they need to be triangulated , or made chordal ( red edges in ( c ) and ( d ) ) .their clique - graphs are then guaranteed to be _ junction - trees _ , and the distributive law can be applied with the same protocol used for trees ; see for a beautiful tutorial on exact inference in arbitrary graphs .although the models in this example contain only pairwise factors , triangulation has increased the size of their maximal cliques , making exact inference substantially more expensive .hence approximate solutions in the original graph ( such as loopy belief - propagation , or inference in a loopy factor - graph ) are often preferred over an exact solution via the junction - tree algorithm . even when the model s factors are the same size as its maximal cliques , neither exact nor approximate inference algorithms take advantage of the fact that many factors consist only of _ latent _ variables . in many models ,those factors that are conditioned upon the observation contain fewer latent variables than the purely latent cliques .examples are shown in figure [ fig : examps ] .this encompasses a wide variety of models , including grid - structured models for optical flow and stereo disparity as well as chain and tree - structured models for text or speech .simple analysis reveals that the probability of choosing a permutation that does not contain a value inside a square of size is this is precisely , where is the cumulative density function of .it is immediately clear that , which defines the best and worst - case performance of algorithm [ alg1 ] .the case where we are sampling from multiple permutations simultaneously ( i.e. , algorithm [ alg : ext ] ) is analogous .we consider permutations embedded in a -dimensional hypercube , and we wish to find the width of the smallest shaded hypercube that includes exactly one element of the permutations ( i.e. , , \ldots , p_{k-1}[i]$ ] ) .this is represented in figure [ fig : perms](c ) for .note carefully that is the number of _ lists _ in ( eq .[ eq : hatk ] ) ; if we have lists , we require permutations to define a correspondence between them . unfortunately , the probability that there is no non - zero entry in a cube of size is not trivial to compute .it is possible to write down an expression that generalizes ( eq . [ eq : factprob ] ) , such as ( in which we simply enumerate over all possible permutations and ` count ' which of them do not fall within a hypercube of size ) , and therefore state that however , it is very hard to draw any conclusions from ( eq .[ eq : pkm ] ) , and in fact it is intractable even to evaluate it for large values of and .hence we shall instead focus our attention on finding an upper - bound on ( eq . [ eq : runtimeexact ] ) .finding more computationally convenient expressions for ( eq . [ eq : pkm ] ) and ( eq . 
[ eq : runtimeexact ] ) remains as future work .although ( eq . [ eq : runtimek1 ] ) and ( eq . [ eq : runtimeexact ] ) precisely define the running times of algorithm [ alg1 ] and algorithm [ alg : ext ] , it is not easy to ascertain the speed improvements they achieve , as the values to which the summations converge for large are not obvious . here , we shall try to obtain an upper - bound on their performance , which we assessed experimentally in section [ sec : experiments ] .in doing so we shall prove theorems [ the : alg1 ] and [ the : algext ] .( see algorithm [ alg1 ] ) consider the shaded region in figure [ fig : perms](d ) .this region has a width of , and its height is chosen such that it contains precisely one non - zero entry .let be a random variable representing the height of the grey region needed in order to include a non - zero entry .we note that our aim is to find the smallest such that .the probability that none of the first samples appear in the shaded region is next we observe that if the entries in our grid do not define a permutation , but we instead choose a _ random _ entry in each row , then the probability ( now for ) becomes ( for simplicity we allow to take arbitrarily large values ) .we certainly have that , meaning that is an upper - bound on , and therefore on .thus we compute the expected value this is just a geometric progression , which sums to .thus we need to find such that clearly will do .thus we conclude that ( see algorithm [ alg : ext ] ) we would like to apply the same reasoning in the case of multiple permutations in order to compute a bound on .that is , we would like to consider _ random _ samples of the digits from to , rather than permutations , as random samples are easier to work with in practice . to do so , we begin with some simple corollaries regarding our previous results .we have shown that in a permutation of length , we expect to see a value less than or equal to after steps .there are now other values that are less than or equal to amongst the remaining values ; we note that hence we expect to see the _ next _ value less than or equal to in the next steps also .a consequence of this fact is that we not only expect to see the _ first _ value less than or equal to earlier in a permutation than in a random sample , but that when we sample elements , we expect _ more _ of them to be less than or equal to in a permutation than in a random sample .furthermore , when considering the _ maximum _ of permutations , we expect the first elements to contain more values less than or equal to than the maximum of random samples .( eq . [ eq : pkm ] ) is concerned with precisely this problem .therefore , when working in a -dimensional hypercube , we can consider random samples rather than permutations in order to obtain an upper - bound on ( eq . [ eq : runtimeexact ] ) .thus we define as in ( eq . [ eq : replace ] ) , and conclude that thus the expected value of is again a geometric progression , which this time sums to .thus we need to find such that clearly will do .as mentioned , each step takes , so the final running time is . to summarize , for problems decomposable into groups , we will need to find the index that chooses the maximal product amongst lists ; we have shown an upper - bound on the expected number of steps this takes , namely
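to make the sampling argument concrete , the following is a minimal monte carlo sketch of our own ( the function names and the values of n , m and trials are illustrative assumptions , not the authors' code ) : it compares the mean waiting time until a random permutation of 1 .. n first shows a value at most m against i.i.d. uniform draws , whose geometric waiting time has mean n / m and upper - bounds the permutation case .

import random

def waiting_time_permutation(n, m):
    # steps until a uniform random permutation of 1..n first shows a value <= m
    for i, v in enumerate(random.sample(range(1, n + 1), n), start=1):
        if v <= m:
            return i

def waiting_time_iid(n, m):
    # steps until i.i.d. uniform draws from 1..n first show a value <= m;
    # this waiting time is geometric with success probability m / n
    steps = 1
    while random.randint(1, n) > m:
        steps += 1
    return steps

n, m, trials = 10000, 100, 20000
e_perm = sum(waiting_time_permutation(n, m) for _ in range(trials)) / trials
e_iid = sum(waiting_time_iid(n, m) for _ in range(trials)) / trials
print(e_perm, e_iid, n / m)  # e_perm <= e_iid, and e_iid is close to n / m

the i.i.d. mean is exactly the geometric progression summed in the proof , while the permutation mean is smaller , which is what justifies replacing permutations by random samples to obtain an upper bound .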
_ maximum a posteriori _ inference in graphical models is often solved via message - passing algorithms , such as the junction - tree algorithm , or loopy belief - propagation . the exact solution to this problem is well known to be exponential in the size of the model s maximal cliques after it is triangulated , while approximate inference is typically exponential in the size of the model s factors . in this paper , we take advantage of the fact that many models have maximal cliques that are larger than their constituent factors , and also of the fact that many factors consist entirely of latent variables ( i.e. , they do not depend on an observation ) . this is a common case in a wide variety of applications , including grids , trees , and ring - structured models . in such cases , we are able to decrease the exponent of complexity for message - passing by for both exact _ and _ approximate inference .
users ' personal data , such as a user 's historic interactions with the search engine ( e.g. , submitted queries , clicked documents ) , have been shown to be useful for personalizing search results to the user 's information need . crucial to effective search personalization is the construction of user profiles to represent individual users ' interests . a common approach is to use the main topics discussed in the user 's clicked documents , which can be obtained by using a human - generated ontology as in or using an unsupervised topic modeling technique as in . however , using the user profile to directly personalize a search has not been very successful , yielding only a _ minor _ improvement or even _ deteriorating _ the search performance . the reason is that each user profile is normally built using only the user 's relevant documents ( e.g. , clicked documents ) , ignoring user interest - dependent information related to input queries . alternatively , the user profile is utilized as a feature of a multi - feature learning - to - rank ( l2r ) framework . in this case , apart from the user profile , dozens of other features have been proposed as the input of an l2r algorithm . despite being successful in improving search quality , the contribution of the user profile is not very clear . to handle these problems , in this paper we propose a new _ embedding _ approach to constructing a user profile , using both the user 's input queries and relevant documents . we represent each user profile using two projection matrices and a user embedding . the two projection matrices serve to identify the user interest - dependent aspects of input queries and relevant documents , while the user embedding captures the relationship between the queries and documents in this user interest - dependent subspace . we then _ directly _ utilize the user profile to re - rank the search results returned by a commercial search engine . experiments on the query logs of a commercial web search engine demonstrate that modeling user profiles with embeddings helps to significantly improve the performance of the search engine and also achieves better results than other comparative baselines . we start with our new embedding approach to building user profiles in section [ ssec : profile ] , using pre - learned document embeddings and query embeddings . we then detail the processes of using an unsupervised topic model ( i.e. , latent dirichlet allocation ( lda ) ) to learn document embeddings and query embeddings in sections [ ssec : topics ] and [ ssec : query ] , respectively . we finally use the user profiles to personalize the search results returned by a commercial search engine in section [ ssec : rank ] . let denote the set of queries , be the set of users , and be the set of documents . let represent a triple . the query , user and document are represented by vector embeddings , and , respectively . our goal is to select a _ score function _ such that the implausibility value of a correct triple ( i.e. is a relevant document of given ) is _ smaller _ than the implausibility value of an incorrect triple ( i.e.
is not a relevant document of given ) .inspired by embedding models of entities and relationships in knowledge bases , the score function is defined as follows : here we represent the profile for the user by two matrices and and a vector embedding , which represents the user s topical interests .specifically , we use the interest - specific matrices and to identify the interest - dependent aspects of both query and document , and use vector to describe the relationship between and in this interest - dependent subspace . in this paper , and are pre - determined by employing the lda topic model , which are detailed in next sections [ ssec : topics ] and [ ssec : query ] .our model parameters are only the user embeddings and matrices and . to learn these user embeddings and matrices , we minimize the margin - based objective function : where is the margin hyper - parameter , is the training set that contains only correct triples , and is the set of incorrect triples generated by corrupting the correct triple ( i.e. replacing the relevant document / query in by irrelevant documents / queries ) .we use stochastic gradient descent ( sgd ) to minimize , and impose the following constraints during training : , and .first , we initialize user matrices as identity matrices and then fix them to only learn the randomly initialized user embeddings . then in the next step, we fine - tune the user embeddings and user matrices together . in all experiments shown in section [ sec : expsetup ] , we train for 200 epochs during each two optimization step . in this paper, we model document embeddings by using topics extracted from relevant documents .we use lda to _ automatically _ learn topics from the relevant document collection . after training an lda model to calculate the probability distribution over topics for each document , we use the topic proportion vector of each document as its document embedding . specifically , the element ( ) of the vector embedding for document is : where is the probability of the topic given the document .we also represent each query as a probability distribution over topics , i.e. the element of the vector embedding for query is defined as : where is the probability of the topic given the query .following , we define as a mixture of lda topic probabilities of given documents related to .let be the set of top ranked documents returned for a query ( in the experiments we select ) .we define as follows : where is the exponential decay function of which is the rank of in .and is the decay hyper - parameter ( ) .the decay function is to specify the fact that a higher ranked document is more relevant to user in term of the lexical matching ( i.e. we set the larger mixture weights to higher ranked documents ) .we utilize the user profiles ( i.e. 
, the learned user embeddings and matrices ) to re - rank the original list of documents produced by a commercial search engine as follows : ( 1 ) we download the top ranked documents given the input query .we denote a downloaded document as .( 2 ) for each document we apply the trained lda model to infer the topic distribution .we then model the query as a topic distribution as in section [ ssec : query ] .( 3 ) for each triple , we calculate the implausibility value as defined in equation [ equa : stranse ] .we then sort the values in the ascending order to achieve a new ranked list .* dataset : * we evaluate our new approach using the search results returned by a commercial search engine .we use a dataset of query logs of of 106 anonymous users in 15 days from 01 july 2012 to 15 july 2012 .a log entity contains a user identifier , a query , top- urls ranked by the search engine , and clicked urls along with the user s dwell time .we also download the content documents of these urls for training lda to learn document and query embeddings ( sections [ ssec : topics ] and [ ssec : query ] ) .bennett _ et al ._ indicate that short - term ( i.e. session ) profiles achieved better search performance than the longer - term profiles .short - term profiles are usually constructed using the user s search interactions within a search session and used to personalize the search within the session . to identify a search session , we use 30 minutes of user inactivity to demarcate the session boundary . in our experiments ,we build short - term profiles and utilize the profiles to personalize the returned results . specifically , we uniformly separate the last log entries within search sessions into a _ test set _ and a _ validation set_. the remainder of log entities within search sessions are used for _ training _ ( e.g. to learn user embeddings and matrices in our approach ) .* evaluation methodology : * we use the sat criteria detailed in to identify whether a clicked url is relevant from the query logs ( i.e. , a sat click ) . that is either a click with a dwell time of at least 30 seconds or the last result click in a search session .we assign a positive ( relevant ) label to a returned url if it is a sat click .the remainder of the top-10 urls is assigned negative ( irrelevant ) labels .we use the rank positions of the positive labeled urls as the ground truth to evaluate the search performance before and after re - ranking .we also apply a simple pre - processing on these datasets as follows . at first , we remove the queries whose positive label set is empty from the dataset . after that, we discard the domain - related queries ( e.g. facebook , youtube ) . to this end ,the training set consists of 5,658 correct triples . the test and validation sets contain 1,210 and 1,184 correct triples , respectively .table [ table:1 ] presents the dataset statistics after pre - processing ..basic statistics of the dataset after pre - processing [ cols="^,^,^,^,^,^",options="header " , ] by directly learning user profiles and applying them to re - rank the search results , our embedding approach achieves the highest performance of search personalization . specifically , our mrr score is significantly ( ) higher than that of _ sp _ ( with the relative improvement of 4% over sp ) .likewise , the p score obtained by our approach is significantly higher than that of the baseline _ sp _ ( ) with the relative improvement of 11% . 
in table[ tb1 ] , we also present the performances of a simplified version of our embedding approach where we fix the user matrices as identity matrices and then only learn the user embeddings . table [ tb1 ] shows that our simplified version achieves second highest scores compared to all others . )than our simplified version with 4% relative improvement . ]specifically , our simplified version obtains significantly higher p score ( with ) than _in this paper , we propose a new embedding approach to building user profiles .we model each user profile using a user embedding together with two user matrices .the user embedding and matrices are then learned using lda - based vector embeddings of the user s relevant documents and submitted queries .applying it to web search , we use the profile to re - rank search results returned by a commercial web search engine .our experimental results show that the proposed method can stably and significantly improve the ranking quality . *acknowledgments * : the first two authors contributed equally to this work .dat quoc nguyen is supported by an international postgraduate research scholarship and a nicta nrpa top - up scholarship .
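to recap the method in code , the following numpy sketch is our own hedged illustration rather than the authors' implementation : the implausibility score takes the stranse - style form || w1 q + u - w2 d || indicated by equation [ equa : stranse ] , the query embedding is the exponentially decayed mixture of lda topic vectors of the top - ranked documents , and re - ranking sorts candidates by ascending implausibility ; all names ( score , query_embedding , rerank ) , the topic dimension k and the decay value are assumptions .

import numpy as np

def score(q, d, W1, W2, u):
    # implausibility of a (query, user, document) triple: smaller = more relevant
    return np.linalg.norm(W1 @ q + u - W2 @ d)

def query_embedding(topdoc_topics, delta=0.8):
    # query topic vector as a decayed mixture of the LDA topic vectors of the
    # top-ranked documents; higher-ranked documents get larger weights
    w = np.array([delta ** r for r in range(1, len(topdoc_topics) + 1)])
    mix = w @ np.asarray(topdoc_topics)
    return mix / mix.sum()

def rerank(q, docs, W1, W2, u):
    # indices of the candidate documents sorted by ascending implausibility
    return np.argsort([score(q, d, W1, W2, u) for d in docs])

k = 8                                        # number of LDA topics (illustrative)
rng = np.random.default_rng(0)
W1, W2 = np.eye(k), np.eye(k)                # user matrices, initialised to identity
u = 0.1 * rng.standard_normal(k)             # user embedding
docs = rng.dirichlet(np.ones(k), size=10)    # stand-ins for inferred topic vectors
print(rerank(query_embedding(docs), docs, W1, W2, u))

initialising the matrices to the identity and learning only the embedding first , before fine - tuning both , mirrors the two - step optimisation described above .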
recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user s topical interests . in this paper , we propose a new embedding approach to learning user profiles , where users are embedded on a topical interest space . we then directly utilize the user profiles for search personalization . experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines .
decision - making under risk attracts attention of economists for a long time , , , , , , , , , ( * ? ? ?* chapter 3 ) .the mathematical expectation of profits and losses in a game or trading is not always the main factor influencing decisions , , , , , , , , , , , , , , , ( * ? ? ?* chapter 4 ) , .understanding the nature of award in a game and construction of suitable measures for comparing awards and choosing between games is a challenging task . here , the _ certainty _ of _ sample mean _ profits and losses and _ play time _ are considered with respect to decision making .based on maurice allais s research , daniel kahneman and amos tversky found out that 80 percent of 95 students and university faculty preferred prospect b in problem 3 : choose between ( a ) 4,000 israeli pounds received with 80 percent of chance and ( b ) 3,000 for sure .3,000 were the median net monthly income .studying these results , the author knew how to compute _ mathematical expectations _* - 69 , chapter iv mathematical expectations ) .it is equal to for b and for a. esteeming the extra 200 , he mentally joined the majority declining the gift .kahneman and tversky label this phenomenon the _ certainty effect_. in problem 3 , 80 percent choosing b is its _experimental measure_. can the _ probability theory _ match _ theoretically _ the experimental _ fractions of respondents _ ?[ [ two - point - distribution ] ] two - point distribution + + + + + + + + + + + + + + + + + + + + + + represents a _ random variable _ with two outcomes , and probabilities , . its mathematical expectation , _ variance _ , _ third central moment _ , _ fourth central moment _ , _ skewness _ , _ excess kurtosis _ , and _ entropy _ are ^ 2)=e(\xi^2)-\alpha_1(\xi)^2=(a_p - a_q)^2 p ( 1 - p),\ ] ] ^ 3 ) = ( a_p - a_q)^3p(1-p)(1 - 2p),\ ] ] ^ 4 ) = ( a_p - a_q)^4p(1-p)(1 - 3p + 3p^2),\ ] ] ^ 3)}{d(\xi)^{\frac{3}{2}}}=\frac{a_p - a_q}{|a_p - a_q| } \times \frac{1 - 2p}{\sqrt{p(1-p)}},\ ] ] ^ 4)}{d(\xi)^2 } - 3 = \frac{1 - 6p + 6p^2}{p(1 - p)},\ ] ] for problem 3 , in a , , , , , _ standard deviation _ , , , , , , and in b , , , , , , , , and are undefined , .computing entropy , we follow and set for . undoubtedly , declining the greater , voters choose a greater _ award_. the _ `` paradox of irrationality '' _ arises because in b the award coincides with but in a it is not ._ indeed , in a the award is random but the mathematical expectation is not . already because of this the award is not the mathematical expectation . _[ [ sample - mean . ] ] sample mean .+ + + + + + + + + + + + in order to see better what the award in prospect a is , the author has `` tortured '' one of the human beings and formulated four variations of problem 3 , where he might feel comfortable choosing a. a gambler may 1 .play a fixed number of times known in advance , and get the mean ; 2 .play unlimitedly , choose the stopping time , and get the known mean ; 3 .play unlimitedly , choose the stopping time , and get the last known value ; 4 .gather any number of helpers ; all choose a once ; gains are summed ; the gambler gets the mean .each variation wastes gambler s and helpers time , if all select b. for the author , choosing a , each variation intensifies his feeling to gain more following to the expectation and even more in the third variation . for him , _ to go or not to go with the fixed positive mathematical expectation _ depends on how many times the game can be played and the cost of each game , if the expectation does not include the latter . 
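the moment formulas above are easy to check numerically ; below is a small python sketch of our own ( the function names and the base - 2 entropy convention are our assumptions ) that computes the moments of a two - point variable and also inverts ( mean , variance , skewness ) back to ( a_p , a_q , p ) , an inversion used later in the paper when designing problems with prescribed moments .

import math

def two_point_stats(a_p, a_q, p):
    # mean, variance, skewness, excess kurtosis and entropy of a variable
    # equal to a_p with probability p and a_q with probability q = 1 - p
    q = 1.0 - p
    mean = a_p * p + a_q * q
    var = (a_p - a_q) ** 2 * p * q
    if p in (0.0, 1.0):
        skew = exkurt = float('nan')   # undefined for a constant outcome
    else:
        skew = math.copysign(1.0, a_p - a_q) * (1.0 - 2.0 * p) / math.sqrt(p * q)
        exkurt = (1.0 - 6.0 * p * q) / (p * q)
    ent = -sum(x * math.log2(x) for x in (p, q) if x > 0.0)  # 0 log 0 := 0
    return mean, var, skew, exkurt, ent

def two_point_from_moments(mean, var, skew):
    # invert (mean, variance, skewness) -> (a_p, a_q, p) with a_p > a_q,
    # using gamma_1 = (1 - 2 p) / sqrt(p q)
    u = skew / math.sqrt(4.0 + skew ** 2)          # u = 1 - 2 p
    p = (1.0 - u) / 2.0
    d = math.sqrt(var / (p * (1.0 - p)))           # d = a_p - a_q
    return mean + (1.0 - p) * d, mean - p * d, p

print(two_point_stats(4000, 0, 0.8))                  # prospect A of problem 3
print(two_point_from_moments(3200.0, 2.56e6, -1.5))   # recovers (4000, 0, 0.8)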
in , is represented by the _ floor function _ of the product of the playing frequency and time : .the _ sample mean _ depends on the numbers of outcomes and in trials the is random due to . if the fixed games represent _ independent identically distributed _ ,i.i.d . , binary variables , then obeys a _binomial distribution_. in prospect b of problem 3 , , make a _constant award_. in prospect a of problem 3 and variations 1 , 2 , and 4 , the is a_ random award_. _one cares about all properties of the award but not only one constant _ _ if those are responsible for getting nothing . _ variation 1 fixes in advance .variation 2 makes random and dependent on the observed .variation 3 makes random , values irrelevant , and the award 4,000 likely .variation 4 switches to the number of gamblers . for a fixed , and ^ 2 ) = e(a_1 ^ 2 ) - e^2(a_1 ) = \frac{(a_p - a_q)^2pq}{n}=\frac{d(\xi)}{\lfloor \nu t \rfloor}.\ ] ] the variance decreases with increasing .this formula is in agreement with the _ limit theorems _ applicable to the mean sum of i.i.d . variables with finite variance in _ bernoulli trials_. an analytic method for computing _ beginning moments _ of sums of i.i.d .random variables is suggested in .then , the _ central moments _ can be expressed via as ^k ) = e([\xi - \alpha_1]^k ) = e\left ( \sum_{j = 0}^{j = k } \frac{k!}{j!(k - j)!}\xi^j ( -\alpha_1)^{k - j } \right ) = \ ] ] this yields the first four presented in ( * ? ? ?* - 72 ) from equations [ eqsamplemean ] and [ eqmeanofsamplemean ] ^k ) = e([\frac{a_p - a_q}{n}]^k[n_p - np]^k ) = ( \frac{a_p - a_q}{n})^ke([n_p - np]^k) 15 million . to win, one has to guess five of 75 numbers in the _ game field _ and one of 15 numbers in the _ mega ball field _ , figure [ figlottery ] .a slip allows to play up to five 1,000,000 and 600,000 and 1,000,000 and 600,000 and 15,000,000 the mathematical expectation of profits and losses in mega millions is negative .equation [ eqlotteryme ] and figure [ figlotteryme ] indicate that it can be positive , even , after declining the annuity option , splitting jackpot between tickets winning it , and taxes .since 2002 until 2005 it was needed to guess 5 of 52 and 1 of 52 numbers . since 2005 until 2013 - 5 of 56 and 1 of 46 numbers .the eight prizes after jackpot were 10,000 , 150 , 10 , 2 .equations for the mathematical expectations of profit and loss after 2002 , 2005 , and 2013 are , , and . with ,the expectations are positive for million , million , and million .table [ tbllottery ] marks by stars five such jackpot candidates occurred in 12 years .let us assume that a gambler decides to play only , when the expectation becomes positive ._ does it mean that the game under such a favorable condition is for certain ? _ [ [ game - for - sure ] ] game for sure ?+ + + + + + + + + + + + + + positive does not make the `` favorable '' drawings a _ game for sure_. during 80 years of an adult life a gambler may participate in such infrequent events times .a gambler betting twice every week can make attempts .both numbers are insignificant comparing with the odds 1 : 258,890,850 .the discrete nature of gains and losses implies either a chain of - 552,000,000 jackpot agrees to get annuity , pays for the tickets 30.00 . the next dayit has a quick run - up to 32.50 .you immediately become fearful that if you do nt take the profit , the next day you may see it fade away - so out you go with a small profit , when that is very time you should entertain all the hope in the world . 
'' _ this is a certainty effect .further , ( * ? ? ?* - 13 ) : _`` on the other hand , suppose you buy a stock at 28.00 , showing a two - point loss .you would not be fearful that the next day would possibly see a three - point loss or more .no , you would regard it merely as a temporary reaction , feeling certain that the next day it would recover its loss .... that is when you should be fearful , because if you do not get out , you might be forced to take a much greater loss later on . ''_ this is a reflection effect .livermore believes that a human being `` injects a hope and fear into the business of speculation '' and `` is apt to get the two confused and in reverse positions '' .his 40 years trading wisdom is _ `` profits always take care of themselves , but losses never do''_. his intuition was trained by winning and losing several fortunes .he played the game many times feeling but not measuring the odds behind his advices .empirical distributions of prices , their increments , and waiting times between transactions vary .this strengthens uncertainty of profits and losses presenting a trading opportunity as the last one . under such conditions following his advicesis difficult psychologically and increasing does not guarantee a fast convergence of .the author considered _ random sample means _ as universal awards in prospects and the st .petersburg game independently on khinchin and only after that found his `` forgotten '' paper .khinchin concentrates on mathematical properties of random geometric and arithmetic means , which can explain the `` paradox '' .he reviews psychology , ) : _ `` let us notice only that in this case , of course , no speech may go about any mathematical paradox but at most about that the mathematical expectation is not always adequate to those worldly - psychological representations , which it is commonly connected to . in the case of the petersburg game , it is often pointed to that petr in his expectation of winning , naturally , orients not on the mathematical expectation of winning in a particular game , which is difficult to account psychologically , but on some average winning during big number of games .such understanding of psychological prerequisites of the ' ' paradox `` puts in front of us a certain mathematical task , which can be formulated as follows : find such an estimate of the mean winning of petr during a big number of games , that its probability would go to unit with infinite increasing the number of games .however , it makes sense to say , that the task will get a quite determined sense only after a certain notion of the mean winning will be exactly defined . in the current note , we shall consider in details the set problem in two of the most simple ( and also the most important ) cases , namely in assumption that the mean winning is defined as the geometric and arithmetic mean of particular games''_. khinchin is indifferent , if is achievable .if `` yes '' , then his two theorems work . for the author ,the sample mean is an award in prospects with .the case is generic .the is a universal random award in the st .petersburg game , prospects , and variations 1 , 2 , and 4 .for it coincides with underlying random variables . in a private conversation with timur misirpashaevthe author discussed _ credit nuances _ of the st .petersburg game and believes that the topic `` correlates '' with samuelson s _ bankruptcy _ consideration and the following buffon s comment : _`` ... 
all money on the earth is not enough to accomplish this [ vs : to pay the win ] , if the game stops on 40th trial , because it will require 1024 times more money than there exists in the entire kingdom of france''_. during the discussion , the author has proposed to pay 18 to play 10 times , under a condition that in both cases the third party reserves the deposit ] is getting wider .the range of the latter and 0.788 / ] and seven sure outcomes 130.75 ( + ) , 113.98 ( + ) , 99.35 ( + ) , 86.60 ( + ) , 75.49( - ) , 65.80 ( - ) , 57.36 ( - ) . accepted ( the sure outcome is chosen ) and rejected ( the prospect is selected ) values are marked by + and - .the signs in the example are fictional ._ `` to obtain a more refined estimate of the certainty equivalent , a new set of seven outcomes was then shown , linearly spaced between a value 25% higher than the lowest amount accepted in the first set and a value 25% lower than the highest amount rejected '' _* - 306 ) .therefore , the lowest accepted value 86.60 would be increased by 25% to 108.25 and the highest rejected value 75.49 would be decreased by 25% to 60.39 .the interval ] , where .compare the surface on figure [ figf1constrand1 ] with experimental and axiomatic points on figure [ figf1 ]. for large figure [ figf1constrand2 ] is getting closer to figure [ figf1inf ] following from equation [ eqf1inf ] .the surfaces are computed using equations [ eqmeandiff ] , [ eqvariancediff ] , and [ eqthef1 ] .fitting depends on , and . from equations[ eqvariancediff ] and [ eqbinaryconstraintsab ] , if the first variable is constant and the second is random two - point , then , , .swapping constant and random variables yields , , .usually , the greater the jackpot , the faster it grows , figure [ figmegamillionsjackpot ] .if the jackpot is won , then the curve drops to the initial jackpot .this creates the sequence of `` teeth '' with random height and width at the bottom .dependence after resembles the exponential solution of the equation with the initial condition .this solution can not describe random shape of the teeth and sharp dropping at random time .the jackpot , even , without dropping , could not grow to infinity .hence , the s - shaped _logistic curve _ , solving the equation developed with other denominations by pierre francois verhulst and lamberte adolphe jacques quetelet for describing population growth , is a better fitting choice . without data close to ,estimation of this level is inaccurate . the richards _ generalized logistic function _ is more flexible . with credit to http://www.lottostrategies.com/script/jackpot_history/draw_date/113 , the author presents the data in table [ tbljackpot ] for those who wants to try other fitting options .the exponential and logistic curves were applied minimizing the sum of square deviations ( microsoft excel , solver ) , which is not a well justified criterion in this case . with does not take into account a possibility of sharing jackpot between winners . for simplicity , we also ignore multiple winning outcomes and taxes described in section `` mega millions '' . 
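before fitting , the sign of the expectation itself is a one - liner ; the sketch below is ours : the quoted odds of 1 : 258,890,850 come from the discussion above , the ticket price of one dollar is our assumption , and the same simplifications just listed ( jackpot sharing , lesser prizes , annuity discounting and taxes ) are ignored .

def jackpot_only_ev(jackpot, price=1.0, odds=258_890_850.0):
    # expected profit per ticket, keeping only the jackpot prize
    return jackpot / odds - price

print(jackpot_only_ev(260e6))  # slightly positive for the last tabulated drawing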
+ & & & & & & & + continued from previous page + & & & & & & & + + 3/27/2015 & 0.0000 & 15000000 & 5000000 & 15000000 & 0.0e+00 & 15000000 & 0.0e+00 + 3/31/2015 & 0.0110 & 20000000 & 5000000 & 18490306 & 2.3e+12 & 17920342 & 4.3e+12 + 4/3/2015 & 0.0192 & 25000000 & 5000000 & 21608026 & 1.2e+13 & 20478002 & 2.0e+13 + 4/7/2015 & 0.0301 & 30000000 & 9000000 & 26551757 & 1.2e+13 & 24464854 & 3.1e+13 + 4/10/2015 & 0.0384 & 39000000 & 8000000 & 30941404 & 6.5e+13 & 27956572 & 1.2e+14 + 4/14/2015 & 0.0493 & 47000000 & 8000000 & 37851574 & 8.4e+13 & 33399424 & 1.8e+14 + 4/17/2015 & 0.0575 & 55000000 & 10000000 & 43935991 & 1.2e+14 & 38166318 & 2.8e+14 + 4/21/2015 & 0.0685 & 65000000 & 9000000 & 53417755 & 1.3e+14 & 45596900 & 3.8e+14 + 4/24/2015 & 0.0767 & 74000000 & 11000000 & 61670446 & 1.5e+14 & 52104664 & 4.8e+14 + 4/28/2015 & 0.0877 & 85000000 & 11000000 & 74355473 & 1.1e+14 & 62248896 & 5.2e+14 + 5/1/2015 & 0.0959 & 96000000 & 14000000 & 85225587 & 1.2e+14 & 71133297 & 6.2e+14 + 5/5/2015 & 0.1068 & 110000000 & 16000000 & 101632292 & 7.0e+13 & 84982204 & 6.3e+14 + 5/8/2015 & 0.1151 & 126000000 & 14000000 & 115408834 & 1.1e+14 & 97111190 & 8.3e+14 + 5/12/2015 & 0.1260 & 140000000 & 19000000 & 135724730 & 1.8e+13 & 116017720 & 5.8e+14 + 5/15/2015 & 0.1342 & 159000000 & 14000000 & 152355179 & 4.4e+13 & 132576214 & 7.0e+14 + 5/19/2015 & 0.1452 & 173000000 & 21000000 & 176195256 & 1.0e+13 & 158387412 & 2.1e+14 + 5/22/2015 & 0.1534 & 194000000 & 20000000 & 195128304 & 1.3e+12 & 180993072 & 1.7e+14 + 5/26/2015 & 0.1644 & 214000000 & 19000000 & 221398922 & 5.5e+13 & 216230524 & 5.0e+12 + 5/29/2015 & 0.1726 & 233000000 & 27000000 & 241565629 & 7.3e+13 & 247091774 & 2.0e+14 + 6/2/2015 & 0.1836 & 260000000 & ? & 268580467 & 7.4e+13 & 295197950 & 1.2e+15 + the choice vs. should be made on 5/29/2015 .the jackpot has grown to 260,000,000 .we assume that 27,000,000 is one third of the number of purchased tickets 81,000,000 .since quoting is done by annuity , the number of tickets is less .based on section `` mega millions '' , we estimate the number of purchased tickets as half 40,000,000 .mega millions http://www.megamillions.com/where-to-play is played in 44 states ar 2,966,369 , az 6,731,484 , ca 38,802,500 , co 5,355,866 , ct 3,596,677 , de 935,614 , fl 19,893,297 , ga 10,097,343 , ia 3,107,124 , i d 1,634,464 , il 12,880,580 , in 6,596,855 , ks 2,904,021 , ky 4,413,457 , la 4,649,676 , ma 6,745,408 , md 5,976,407 , me 1,330,089 , mi 9,909,877 , mn 5,458,333 , mo 6,063,589 , mt 1,023,579 , nc 9,943,964 , nd 739,482 , ne 1,881,503 , nh 87,137 2013 , nj 8,938,175 , nm 2,085,572 , ny 19,746,227 , oh 11,594,163 , ok 3,878,051 , or 3,970,239 , pa 12,787,209 , ri 1,055,173 , sc 4,832,482 , sd 853,175 , tn 6,549,352 , tx 27,695,284 , va 8,326,289 , vt 626,562 , wa 7,061,530 , wi 5,757,564 , wv 1,850,326 , wy 584,153 , the district of columbia 658,893 , and u.s .virgin islands 106,405 2010 census .the population estimates are from wikipedia for the year 2014 unless the year is cited .the sum is 302,681,519. one must be 18 years or older to purchase lottery tickets . 
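for readers who want to try the fitting options suggested above , here is a minimal least - squares sketch of our own using scipy ; the starting values , the bounds and the subset of table rows are illustrative assumptions .

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, J_inf, k, t0):
    # Verhulst S-curve: J(t) = J_inf / (1 + exp(-k (t - t0)))
    return J_inf / (1.0 + np.exp(-k * (t - t0)))

# a few (t in years, jackpot) pairs transcribed from the table above
t = np.array([0.0, 0.0384, 0.0767, 0.1151, 0.1534, 0.1836])
J = np.array([15e6, 39e6, 74e6, 126e6, 194e6, 260e6])

popt, _ = curve_fit(logistic, t, J, p0=(5e8, 30.0, 0.2),
                    bounds=([1e7, 1.0, 0.0], [5e9, 200.0, 1.0]))
print(popt)  # fitted (J_inf, k, t0)

as the text notes , minimizing the sum of squared deviations is not a well justified criterion here , and the saturation level J_inf is poorly constrained because no data near saturation are available .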
in accordance with the united states census bureauhttp://www.census.gov/population/age/ , in 2010the population under 18 years was 24 percent .we estimate the number of potential buyers as .if the numbers of people from to buy the numbers of tickets from to , then the number of buyers is , the number of bought tickets is , the mean number of purchased tickets per buyer is , the number of not playing people is , and the number of potential tickets , which they did not buy , is .the fraction of purchased tickets to the total of purchased and potentially not purchased ones is .the latter is the fraction of those , who decided to play . with , . _we conclude that lotteries can be used to estimate fractions of respondents selecting between random and constant variables or prospects_. for this problem , , , , , , , =1.15e-7 .the big positive skewness overweights the certain loss prediction following from the tiny entropy . as long as is not won , the , , , and increase with .in contrast , in this hypothetical lottery , probabilities and dependent only on them , , and remain intact . , figure [ figmegamillionsjackpot ] , grows with acceleration before it reaches the second part of s - shaped curve , where the capital of _ dreamers _ exhausts .this acceleration is the increasing number of tickets from game to game and , therefore , increasing and . to estimate , multiply the values from column of table [ tbljackpot ] by .the previous paragraph evaluates . replacing with yields this formula implies that , if the jackpot is not won , then has a maximum : _ growing interest during the first phase is suppressed by a lack of the capital later_. setting , we get . substituting into equation [ eqlotteryf1 ]leads to , figure [ figtlotteryf1 t ] .the estimate 500,000,000 is inaccurate because of a lack of data at the saturation of .twice jackpot was greater , table [ tbllottery ] .fixing and , equation [ eqthef1 ] draws a surface above the -plane .a curve on the -plane defines a dependence between and .this dependence is linear , equation [ eqdependenceab ] , if is also fixed .it is easy to prove that intersection of the surface and the plane , containing this straight line , and orthogonal to the -plane is also a _ horizontal _ straight line .figure [ figf1abproblem3 ] illustrates these geometric properties for problem 3 , certainty effect .the surface looking curvy is a set of horizontal straight line segments , each located at its own characteristic height , with slope and intersect given by equation [ eqdependenceab ] .figure [ figf1abproblem3l ] presents a similar picture for problem 3 , reflection effect .dependence for fixed and does not have to be linear. if it is not linear or linear but has different slope and intercept than those , implied by equation [ eqdependenceab ] , then is not constant. let , and for , random two - point variable vs. constant .then , from equations [ eqmeandiff ] - [ eqexcesskurtosisdiff ] with equation [ eqpconsted ] we can design , change , , and keep , constant . from equations [ eqgamma12 ]this affects skewness and excess kurtosis ._ symmetry of the distribution can influence on respondents preferences .this leaves degrees of freedom for with constant and . 
_ from equations [ eqgamma12 ] , for , for , and for , see also figures [ figa1diffproblem3 ] , [ figa1diffproblem3b ] .let us rewrite equation for as and solve the latter .selection of sign and evaluation of root should correspond to the above inequalities and probability intervals .hence , from equation [ eqpconsted ] and from equations [ eqvariancediff ] and [ eqpconsted ] under the specified conditions two - point variable vs. constant problems matching , , of problem 3 are vs. , where , , are given by equations [ eqpgamma1 ] - [ eqap1gamma ] . for problem3 the problems with , , are vs. , figure [ figedg ] ._ in both cases skewness can be arbitrary .it adds a degree of freedom to influence on and . keeping , , intact , variation of skewness changes , , equations [ eqap1gamma ] , [ eqaq12gamma ] , , , , , equations [ eqconstraintsab ] , [ eqbinaryconstraintsab ] .under the specified conditions these equations yield a random difference can be depicted as a horizontal line segment $ ] with an inner point . on figure [ figmmmgamma ] such segments are plotted relative each to other for different values of skewness for problems 3 and 3. we see that overlaps a greater positive area for in problem 3 than for in problem 3 ._ these diagrams support the reflection effect . _let in a hypothetical case and .then , from equations [ eqamaxbmax ] is a hyperbola .this also creates a parametric dependence and and allows to express as a function of for constant and .figure [ figf1gamma ] illustrates fitting in problem 3 with and and fitting in problem 3 with and .more experimental data on fractions of respondents selecting between two - point random variables is needed to clarify further dependencies discussed so far .i would like to thank timur misirpashaev for the discussion on , the st .petersburg paradox and its credit component and pavel grosul for the discussion on relationships between fractions of respondents and wealth . 9allais , maurice .le comportement de lhomme rationnel devant le risque , crtitique des postulats et axiomes de lecole americaine ._ econometrica _ , volume 21 , no . 4 , 1953 , pp .503 - 546 .antonov , i. , a. , saleev , v. , m. an economic method of computing -sequences . _ zhurnal vychsilitelnoi matematiki i matematicheskoi fiziki ._ volume 19 , no . 1 , 1979 , pp .243 - 245 ( in russian ) ; _ ussr computational mathematics and mathematical physics _ , 1979 ,volume 19 , no .252 - 256 .arrow , kenneth , j. _ essays in the theory of risk - bearing_. chicago : markham , 1971 . cited by .bernoulli , daniel .exposition of a new theory on the measurement of risk ._ econometrica _ , volume 22 , no .1 , january 1954 , pp . 23 - 36 .specimen theoriae novae de mensura sortis ._ commentarii academiae sceintiarum imperialis petropolitanae _ , tomus v , 1738 , pp .175 - 192 is translated by louise sommer and includes footnotes made by karl menger editor , and translator .black , fischer , scholes , myron .the pricing of options and corporate liabilities . _ the journal of political economy _ ,volume 81 , may - june 1973 , pp . 637 - 659 .buffon , georges louis leclerc comte de ._ essais darithmtique morale . in : histoire naturelle .supplment tome quatrime_. paris : de limprimerie royale , 1777 , pp .gallica bibliothque numrique http://gallica.bnf.fr/ark:/12148/bpt6k97517m .debreu , gerard .the coefficient of resource utilization , _ econometrica _ , volume 19 , n. 3 , 1951 , pp .273 - 292 .doob , joseph , l. _ stochastic processes _ , copyright , 1953 by john wiley & sons , inc . 
,new york : john wiley & sons , 1990 .fabozzi , frank , j. _ bond markets , analysis and strategies ._ third edition , upper saddle river , new jersey : prentice - hall international inc . , 1996feller , william ._ an introduction to probability theory and its applications ( vvedenie v teoriyu veroyatnostei i ee prilozheniya ). _ volume 1 , moscow : mir , 1964 ( in russian ) .gnedenko , boris , v. , kolmogorov , andrey , n. _ limit distributions for sums of independent random variables_. moscow , leningrad : technico - theoretical literature governmental press , 1949 ( in russian ) .the book is translated to english by k.l .chung , cambridge , mass .: addison - wesley , 1954 .gnedenko , boris , v. _ the probability theory .[ kurs teorii veroyatnostei ] . _moscow : nauka , 1988 ( in russian ) .hill , archibald , v. the possible effects of the aggregation of the molecules of hemoglobin on its dissociation curves ._ proceedings of the physiological society _ , january , 1910 , pp .joe , stephen , kuo , frances , y. constructing sobol sequences with better two - dimensional projections ._ siam j. sci ._ , volume 30 , no .2635 - 2654 .kahneman , daniel , tversky , amos .prospect theory : an analysis of decision under risk ._ econometrica _ , volume 47 , no .2 , march , 1979 , pp .263 - 291 .kahneman , daniel .maps of bounded rationality : a perspective on institute judgment and choice . _ nobel lecture _ , december 8 , 2002 , pp .449 - 489 , http://www.nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf .kelly , john , l. , jr .a new interpretation of information rate . _ bell system technical journal _ ,volume 35 , no .4 , july 1956 , pp .917 - 926 .khinchin , aleksandr , ya . on petersburg game ._ matematicheskii sbornik _ ,volume 32 , no . 2 , 1925 , pp .330 - 341 ( in russian ) .khinchin , aleksandr , ya ._ the main laws of the probability theory .the laplace theorem , the law of big numbers , the law of iterated logarithm_. moscow , leningrad : state technical - theoretical publishing , 1932 ( in russian ) .khinchin , alexander , ya .the notion of entropy in probability theory ._ uspehi matematicheskih nauk _ ,volume 8 , no .3 , may - june 1953 , pp . 3 - 20 ( in russian ) .knight , frank , h. _ risk , uncertainty and profit_. reprints of economic classics , augustus m. kelly , bookseller , new york , n. y. 10019 : sentry press , 1964 , kolmogorov , andrey , n. ber das gesetz des iterierten logarithmus , _ mathematische annalen _ ,volume 101 , 1929 , pp .126 - 135 .kolmogorov , andrey , n. _ foundations of the theory of probability .osnovnyie ponyatiya teorii veroyatnostei ._ 2nd edition , moscow : nauka , 1974 ( in russian ) .kolmogorov , andrey , n. , zhurbenko , igor , g. , prokhorov , alexander , v. _ introduction to probability theory ._ moscow : nauka , 1982 ( in russian ) .kolmogorov , andrey , n. , uspenskii , vladimir , a. algorithms and randomness .theory of probab .volume 32 , no . 3 , 1987 , pp .389 - 412 ( translated from russian journal by bernard seckler ) .korn , g. , korn t. , _ mathematical handbook for scientists and engineers .definitions , theorems , and formulas for reference and review _ , 2nd ed . ,new york : mcgraw - hill book company , 1968 .lefvre , edwin . _reminiscences of a stock operator_. new york : john wiley & sons , inc . , 1993 .copyright 1993 , 1994 by expert trading , ltd . originally published in 1923 by george h.doran and company .livermore , jesse , l. 
_ how to trade in stocks .the livermore formula for combining time element and price_. new york : duell , sloan & pearce , 1940 .malkiel , burton .g. _ a random walk down wall street .the time - tested strategy for successful investing_. 9th ed .new york : w.w.norton & company , 2007 .markowitz , harry , m. the utility of wealth ._ journal of political economy _ ,volume 60 , no .2 , april 1952 , pp . 151 - 158 .menger , karl .das unsicherheitsmoment in der wertlehre ._ nationaloeken , journal of economics _ ,volume 5 , no .4 , september 19 , 1934 , pp .459 - 485 .murphy , john , j. _ technical analysis of the financial markets .a comprehensive guide to trading methods and applications_. new york : new york institute of finance , 1999 .neftci , salih n. naive trading rules in financial markets and wiener - kolmogorov prediction theory : a study of `` technical analysis '' . _journal of business _ ,volume 64 , no .4 , october 1991 , pp .549 - 571 .peters , ole .the time resolution of the st petersburg paradox , _ philosophical transactions of the royal society a _ , volume 369 , october 2011 , pp . 4913 - 4931 .packwood , daniel , m. moments of sums of independent and identically distributed random variables ._ arxiv , mathematics , statistics theory _ , version 2 , january 14 , 2012 , pp . 1 - 13 , http://arxiv.org/abs/1105.6283 . peters , ole .menger 1934 revisited , _ arxiv , quantitative finance , risk management _ , march 23 , 2011 , pp . 1 - 16 , http://arxiv.org/abs/1110.1578 .prokhorov , yurii , v. the law of a large numbers and the law of the iterated logarithm , _ uspekhi matematicheskih naul _ , volume 38 , no 4 , 1983 , pp . 281 - 286( in russian ) or _ russian mathematical surveys _ , volume 38 , no .319 - 326 .richards , f. , j. a flexible growth function for empirical use , _ journal of experimental botany _ , volume 10 , no . 2 , 1959 , pp .290 - 300 .salov , valerii ._ modeling maximum trading profits with c++ : new trading and money management concepts . _hoboken , new jersey : john wiley and sons , 2007 .salov , valerii , v. trading system analysis : learning from perfection . _ futures magazine _ , vol .11 , november , 2011 , pp .34 - 39 , 43 .salov , valerii , v. high - frequency trading in live cattle futures . _ futures magazine _ , vol .6 , may 2012 , pp . 26 - 27 , 31 .salov , valerii , v. optimal trading strategies as measures of market disequilibrium , _ arxiv , quantitative finance , general finance _ , december 6 , 2013 , pp. 1 - 222 , http://arxiv.org/abs/1312.2004 .salov , valerii .`` the gibbon of math history '' . who invented the st .petersburg paradox ?khinchin s resolution ._ arxiv , mathematics , history and overview _ , march 11 , 2014 , pp. 1 - 17 , http://arxiv-web3.library.cornell.edu/abs/1403.3001 .samuelson , paul .petersburg paradox as a divergent double limit ._ international economic review ( blackwell publishing ) _ , volume 1 , no . 1 , 1960 , pp .samuelson , paul .petersburg paradoxes : defanged , dissected , and historically described ._ journal of economic literature ( american economic association ) _ , volume 15 , no . 1 , 1977 , pp .sharpe , william , f. _ investors and markets .portfolio choices , asset prices , and investment advice _ , princeton and oxford : princeton university press , 2007 .smitten , richard ._ how to trade in stocks .jesse livermore . with updates & commentary by richard smitten ._ new york : mcgraw - hill , 2001 .thomson , jesse , h. 
the livermore system ._ stock & commodities _ , volume v , no .1:4 , may - june 1983 , pp . 82 - 85 .sobol , ilya , m. uniformly distributed sequences with an additional uniform property ._ zhurnal vychsilitelnoi matematiki i matematicheskoi fiziki . _volume 16 , no . 5 , 1976 , pp .1332 - 1337 ( in russian ) ; _ ussr computational mathematics and mathematical physics _ ,volume 16 , no.5 , 1976 , pp .236 - 242 .sobol , ilya , m. _ points that uniformly fill a multidimensional cube . _ moscow : znanie , 1985 ( in russian ) .tversky , amos , kahneman , daniel . ,advances in prospect theory : cumulative representation of uncertainty , _ journal of risk and uncertainty , vol . 5 _ , 1992 , pp .297 - 323 .varma , jayanth , r. time resolution of the st .petersburg paradox : a rebuttal ._ the working paper series of indian institute of management , ahmedabad - 380015 , india _ , w.p . no. 2013 - 05 - 09 , may 2013 , pp . 1 - 5 , http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2270980 .vince , ralph . _ the mathematics of money management .risk analysis techniques for traders_. new york : john wiley & sons , inc . , 1992 .vince , ralph . _ the new money management .a framework for asset allocation_. new york : john wiley & sons , inc . , 1995 .williams , a. , c. attitudes toward speculative risks as an indicator of attitudes toward pure risks ._ journal of risk and insurance _ ,volume 33 , 1966 , pp .577 - 586 .* valerii salov * received his m.s . from the moscow state university , department of chemistry in 1982 and his ph.d . from the academy of sciences of the ussr , vernadski institute of geochemistry and analytical chemistry in 1987 .he is the author of the articles on analytical , computational , and physical chemistry , the book modeling maximum trading profits with c++ , _ john wiley and sons , inc . , hoboken , new jersey _ , 2007 , and papers in _ futures magazine _ and _
the prospects of kahneman and tversky , the mega millions and powerball lotteries , the st . petersburg paradox , and the premature profits and growing losses criticized by livermore are reviewed from the standpoint of comparing mathematical expectations with the awards actually received . the original prospects were formulated as a one - time opportunity . an award value depends on the number of times the game is played . the random sample mean is discussed as a universal award . the role of time in making a risky decision is important as long as the frequency of games and the playing time affect their number . a function of choice mapping properties of two - point random variables to fractions of respondents choosing them is proposed .
coalescing binaries are the most promising sources of gravitational waves for laser interferometric gravitational wave detectors . the basic reason for the importance of this type of source is their broadband nature , which makes them ideally suited for detection by the interferometers . the binary systems of relevance here are those consisting of compact objects , i.e. black holes and neutron stars . it has been estimated that three such coalescences occur per year out to a distance of 200 mpc . a lot of attention has recently been focussed on the issues of detecting the presence of the signal and the extraction of astrophysical information from the estimated parameters of the signal . there are plans to construct such laser interferometers around the globe , and by the end of this century the american ligo and french / italian virgo will be in operation . the emphasis in their construction is on the reduction of noise , which may be thermal , seismic , quantum , or photon shot noise . in laser interferometric detectors the lower cutoff is decided by the seismic noise , which is very dominant at low frequencies . it is expected that the ligo will be able to go down to 40 hz in its initial stage and to 10 hz in its final stage . this means that we can start observing the binary when its orbital frequency is 20 hz in the case of the initial detectors and 5 hz in the case of the advanced ones . this leads to sufficiently large integration times , which enhances the signal to noise ratio . it was suggested by thorne that matched filtering would be an ideal filtering technique for this purpose . matched filtering is a standard technique used in signal analysis when the waveform is known . it determines for us an optimal linear filter which can decide on the presence or absence of the signal waveform in a given data train . this requires accurate modelling of the waveforms , which is possible for the coalescing binary systems . they are clean systems and their inspiral waveform depends on a few parameters such as the individual masses and spins . tidal interactions do not matter until the very end of the inspiral . a lot of research activity has gone in the direction of obtaining accurate templates under the various approximation schemes such as the quadrupole and the post - newtonian . recently it has been shown that post - newtonian ( pn ) corrections , spin - orbit ( s.o . ) and spin - spin ( s.s . ) couplings , produce in the waveform an accumulating phase error as compared to the newtonian expression . therefore , a template constructed from the newtonian waveform would go out of phase with the signal , and the so - called `` matched filtering '' technique for detection would woefully fail . in this paper we show that as long as we are only _ searching _ for signals , a newtonian filter performs remarkably well even though the signal contains pn corrections . the key idea here is that we allow the parameters of the newtonian filter to vary and adjust so as to produce the maximum possible correlation with the signal . we have found that this flexibility allows for fairly high values of the correlation . in many cases of interest the correlation obtained is 70% of its maximum possible value , which would have been obtained had the template been perfectly matched to the signal .
on the other hand, a template with the same parameters as those of the signal produces correlations of about 10 to 20% .we have carried out the analysis for the two noise curves assuming a ligo type detector .the two noise curves are the power spectral densities of the noise for the ligo in its initial and advanced stages , as given in . in the case of the initial ligo detectorthe analysis is also carried out for the case of white noise for the sake of comparision . also a correspondence between the parameters of the filter and the signalcould be set up , it might be possible to estimate the parameters of the signal from those of the filter .in other words the filter parameters may be `` renormalized '' .the paper is divided as follows . in section [ match ]we elaborate on the chirp waveform , and the conventional detection strategy .we discuss the technique of matched filtering and define a quantity which shall be a measure of how well a newtonian waveform can match with a post - newtonian signal.we also make some comments about the signal power spectrum . in section [ numeric ]we discuss the numerical results of the simulations carried out . and finally in section [ end ] we summarise our results and indicate future directions .the waveform of the signal from the coalescing binary system henceforth called the ` chirp ' has been modelled under various approximations . in the quadrupole approximationthe chirp has three parameters other than the amplitude .these are the initial phase , the time of arrival ( i.e the time at which the instantaneous frequency of the equals the lower cuttoff of the detector ) and the coalescence time which form a convenient set of parameters for our purpose . the newtonian waveform is given by , + \phi_0],\ ] ] where and here the lower cuttoff frequency is denoted by , and is called the chirp mass where is the total mass and the reduced mass of the binary system . is the solar mass and it is convenient unit for our purpose . given the form of the signal and the statistical description of the noise one has to design an adequate set of filters to detect the signal .the noise is assumed to be stationary and is further specified by its power spectral density which is defined by the relation , where is the fourier transform of a particular realization of noise and the overbar indicates an ensemble average .the defined above is the two sided power spectral density .we are primarily in search of a filter with an impulse response which correlates best with the signal i.e. when the correlation as defined below takes its maximum value for a particular value of time shift . this implies that the fourier transform of the matched filter to detect the signal is given by the relation , for the numerical computations that follow we use the fast fourier transform algorithm as given in _ numerical recipes _ .the definition of the fourier transform is the same as given there i.e. the impulse response of the filter depends on the parameters , , .it also depends the time shift .it now becomes important to judiciously space out the filters in the parameter space keeping in mind the constraints of computing power .such an analysis has been carried out in great detail for both white and coloured noise by sathyaprakash and dhurandhar ( see ) .we discuss briefly their major results : 1 .it was found that the coalescence time is a convenient parameter to use since the filters are equally spaced in this parameter , where the spacing is decided by a fixed drop in the correlation .2 . 
for the phase require just two filters for every value of the parameter , one with and the other with . due to their orthogonalitythe correlation is maximised over the phase by simply taking the square root of the sum of squares of the individual correlations , i.e. where and are the correlations corresponding to filters with phases and respectively .we assume in the design of the filter and therefore the value of for which the maximum of the correlation occurs is equal to the time of arrival of the signal .such a procedure of maximising the correlation over the phase and the time is carried out for each value of .the final maximization of the correlation is then carried over the parameter .the set of parameters for which the correlation is maximum are then presumed to be the most likely values of the parameters of the gravitational wave signal .the post - newtonian corrections lead to corrections to the phase and the amplitude of the newtonian signal and also lead to additive terms which are qualitatively different from the quadrupole term . in the case of a general binary systemit is tedious and difficult to get the various corrections to the evolution of the orbits of the binary . if one of the bodies is large compared to the other as in the case of a black hole neutron star binary system one can apply the regge wheeler perturbation formalism to get the evolution of the orbit .this provides us with the evolution of the orbital frequency as a function of time .this has been worked out and is given by where , here represents the first time derivative of frequency and the pn expansion parameter .the newtonian waveform is obtained from the above equation by setting .the phase is obtained by integrating equation ( [ orbev ] ) . for the amplitude we use the newtonian dependence on the frequency _ const .this waveform shall be called ` restricted post - newtonian ' henceforth .although this is not exact , we do not expect the errors in the amplitude to affect the correlation significantly .we assume the initial phase and the arrival time of the signal to be 0 . as the matched filtering processcan also be viewed as a correlation between the incoming signal and the filter it is evident that any secular growth of phase difference will reduce the correlation drastically .thus to have a matched filter one must add one or more parameters .this would increase the number of filters enormously with corresponding increase in computational time .it is worthwhile to explore whether we can substantially increase the correlation by allowing for a shift in the parameters of the newtonian filter i.e. whether the signal is able to achieve better correlation with a newtonian filter whose parameters are different from those of the signal .obtaining large correlations depends on the function space spanned by the signal and filter waveforms and to what extent they overlap .we obtain reasonably large correlations . hereeffects due to s.o . 
and s.s .coupling are not taken into account .the addition of such terms will not alter the thrust of the argument in that , some other newtonian filter would perform best .a detailed account of the formalism and notation used here and the theory of hypothesis testing using maximum likelihood methods as applied to detection of gravitational waves from coalescing binaries is given in .we define a scalar product and its corresponding norm in the function space between two functions and for future use ; and if an exact matched filter were present then the signal to noise ratio would be simply equal to where is the signal .note that our definition of snr is different by a factor of two from the one given in as they work with the one sided power spectral density .the quantity we are interested in computing is where is the chirp corresponding to the filter .we shall term as the normalized correlation .henceforth when we use the word correlation we shall mean the quantity unless specified otherwise . as mentioned above the initial phase and the time of arrival of the signalis taken to be 0 .the aim is to maximise over the range of parameters of the filter .the quantity takes the value between and and tells us how well a newtonian filter can substitute for a post - newtonian one .geometrically one can visualize as the cosine of the angle between the signal vector and the chirp vector . in figure [ fig1 ]we show the impulse response of a filter . as the noise is very high at lower frequencies the amplitude of the impulse response is very small at earlier times andbecomes appreciable only at the end . due to the same reason the increase of amplitude with time is different from that of the newtonian chirp .figure [ fig2 ] justifies the high correlations obtained .it shows how well the filter matches with the restricted post - newtonian signal .it is to be noted that the filter matches with the signal very well during the late stages where the amplitude is largest .figure [ fig3 ] shows the power spectrum of the signal which is the square of the magnitude of the fourier transform of the signal divided by the power spectral density of noise as a function of frequency , this quantity peaks near 200 hz and it is in this frequency range therefore that the filter must match the signal very well i.e. 
it should try to keep in phase with the signal to yield a high correlation .this is borne out in figure [ fig2 ] .the stationary phase approximation used in the fourier transform of the chirp waveform predicts the power spectrum to be a smooth power law .this however is not true , and numerical results and further investigations into the stationary phase approximation bears this out .the stationary phase analysis leads to a fresnel integral and the smooth power law fall off ( ) in the power spectrum is obtained if the limits of integration extend from to .however since the chirp waveform is taken to be of finite duration we actually get an incomplete fresnel integral .this leads to oscillations in the power spectrum of the signal .the oscillations are pronounced at the two ends of the bandwidth of the power spectrum of the signal as the limits of the integration are curtailed from the ideal to .as the noise is very high at low frequencies the amplitude of the oscillations in is very less at low frequencies .it is easy to explain these oscillations using the cornu s spiral .the cornu s spiral does not get wound up before the limits of the integration are reached .the thickness of the line indicates the presence of sub structure in the power spectrum .the signal waveform was obtained by numerically integrating equation ( [ cut ] ) .we get time as a function of frequency which we then invert to get frequency as a function of time .this is now used to generate the phase and the amplitude of the signal .we take the initial phase and the time of arrival of the signal to be zero .we now present the results of the numerical simulations .we have considered black hole masses in the range 5 to 10 .the masses taken for the smaller mass are 0.5 , 1.0 , and 1.4 .the analysis has been carried out for the ligo detector both in the initial and the advanced stages .we retain as a parameter for the restricted post - newtonian waveform also defined by the equation ( [ eqxi ] ) though this quantity does not represent the coalescence time of the signal anymore .in general the amount of time the signal lasts is less for the post - newtonian signal as compared with the newtonian one which follows from the fact that the quantity in equation ( [ cut ] ) is less than one in the frequency range considered . in table[ tab1 ] we list the normalised correlations obtained for the ligo detector in the initial stage .the mass of the larger component of the binary ( ) increases from left to right along each row .the mass of the other component ( ) increases from top to bottom in each column .the values of the masses are listed accordingly in the table .the correlations show a very regular behavior in the table .there are two factors controlling the drop of the correlation : 1 .increase of the magnitude of phase corrections with increase of total mass of the binary system , and 2 .decrease of the integration time due to the increase of total mass of the system .these two factors work against each other in producing the total amount of phase error between the newtonian filter and the restricted post - newtonian signal .thus when we increase , the increase in the magnitude of the phase corrections dominates over the loss in integration time and we get lower correlations when we go from left to right .exactly the opposite happens when we increase i.e. the effect of the decrease of integration time dominates and the correlations increase . 
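the signal-generation procedure just described (integrate the frequency evolution, invert to obtain frequency as a function of time, accumulate the phase) can be sketched in python as below. the rate constant and the post-newtonian correction factor are placeholders of ours standing in for equation ( [ cut ] ), whose exact coefficients are not reproduced in this text; only the numerical procedure follows the paper.

```python
import numpy as np

# sketch of the signal-generation procedure described in the text, assuming
# a newtonian-like frequency evolution dF/dt = k * F**(11/3) * (1 - eps(F));
# the constant k and the correction eps(F) are illustrative placeholders,
# not the paper's exact post-newtonian coefficients.
def restricted_pn_chirp(f_a=40.0, f_max=400.0, k=1e-7, sampling_rate=1000.0):
    freqs = np.linspace(f_a, f_max, 4000)
    eps = 0.1 * (freqs / f_max) ** (2.0 / 3.0)   # hypothetical pn correction
    dFdt = k * freqs ** (11.0 / 3.0) * (1.0 - eps)
    # integrate dt = dF / (dF/dt) to obtain time as a function of frequency ...
    t_of_f = np.concatenate(([0.0], np.cumsum(np.diff(freqs) / dFdt[:-1])))
    # ... then invert numerically to obtain frequency as a function of time
    t = np.arange(0.0, t_of_f[-1], 1.0 / sampling_rate)
    f_of_t = np.interp(t, t_of_f, freqs)
    # phase is the integral of 2*pi*f(t); the amplitude keeps the newtonian
    # dependence a(t) ~ f(t)**(2/3); initial phase and time of arrival are
    # set to zero, as in the text
    phase = 2.0 * np.pi * np.cumsum(f_of_t) / sampling_rate
    return t, f_of_t ** (2.0 / 3.0) * np.cos(phase)
```

interpolating on the monotonic t(f) table avoids having to solve the ode a second time for f(t).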
in table[ tab3 ] we list the correlations for the advanced ligo .the same pattern is observed in this table too .we observe that the correlations for the larger frequency range are smaller as may be expected since the filter is more likely to go out of phase in a broader bandwidth .however , it should be emphasised that these correlations are _normalised_. the correlation would be unity if the filter were exactly matched to the signal . in absolute terms , if we consider a signal with given parameters having the same amplitude then the correlation for the advanced ligo will be much larger than the initial ligo since the noise is less ; first , we get a larger integration time and second , the power spectral density is an order of magnitude less in the common bandwidth .we find that for the parameters considered , the absolute values of the correlations are larger by a factor of twenty for the advanced ligo .we next take up the issue of the shift in the parameters of the filters which produce maximum correlations . in table[ tab2 ] we list the shift in the parameters and for the case of the initial ligo detector .the phase parameter is an extremely sensitive parameter and its shifts are not regular .the value of is always negative .this is because as mentioned earlier a post - newtonian signal will last for a smaller length of time as compared to a newtonian signal with the same values of and .also it can be seen from equation ( [ orbev ] ) and the definition of that the first derivative of the frequency is approximately proportional to . therefore in order to obtain a higher value of the value of is reduced . hereagain the factors , the integration time and magnitude of phase corrections , compete against each other in determining how varies with an increase in either of the masses .the value of decreases with increase in and increases with increase in .also is always negative .this parameter tries to compensate for the reduction in the coalescence time by pushing the filter forward in time . as the table showsthere is apparently a very strong covariance between these two parameters .the value of also decreases with increase in and increases with increase in .typically for , and we get shifts of secs and secs .this should be compared with the coalescence time of the waveform which is about 9 secs . for the case of the advanced ligo detectorthe magnitude of the shifts is much more , but the time for which the signal spends in the frequency range 10 to 400 hz is about a factor of fifty more than that for the initial ligo for similar masses . 
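a sketch of how normalised correlations such as the table entries above can be computed. the psd-weighted scalar product follows the definition given earlier in the section, and the maximisation over the initial phase uses the two quadrature filters; the discretisation conventions and the requirement that the psd array match the rfft frequency grid are our assumptions.

```python
import numpy as np

def scalar_product(a, b, Sn, dt):
    # discretised <a, b>: sum over frequencies of a~(f) b~*(f) / Sn(f);
    # Sn is the noise power spectral density sampled on the rfft grid
    # (any positive array of matching length works for this sketch)
    af, bf = np.fft.rfft(a), np.fft.rfft(b)
    return np.real(np.sum(af * np.conj(bf) / Sn)) * dt / len(a)

def normalized_correlation(signal, template_cos, template_sin, Sn, dt):
    # maximise over the initial phase with the two quadrature filters,
    # e = sqrt(c0**2 + c90**2) / (|s| |h|): the cosine of the angle
    # between the signal vector and the chirp vector.  we assume the two
    # quadratures have (very nearly) equal norms, as in the text.
    norm_s = np.sqrt(scalar_product(signal, signal, Sn, dt))
    norm_h = np.sqrt(scalar_product(template_cos, template_cos, Sn, dt))
    c0 = scalar_product(signal, template_cos, Sn, dt)
    c90 = scalar_product(signal, template_sin, Sn, dt)
    return np.sqrt(c0 ** 2 + c90 ** 2) / (norm_s * norm_h)
```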
here the same pattern is observed in the variation of and as in the initial ligo .the typical values of the shifts observed are secs and secs for , and .simulations were also done for the band limited white noise with the power spectral density having a constant value between 40 to 400 hz .the results were compared with those of the initial stage of ligo .the effect of coloured noise of the type considered here is to narrow band the signal .thus the newtonian filter has to match with the signal over a smaller range of frequencies .however if the narrow banding occurs at higher frequencies , for the chirp , the magnitude of the phase corrections is more .thus in addition to changing the first derivative of the frequency through the parameter the values of the higher derivatives of the frequencies would also have to be changed to get a good match .had the narrowbanding been at lower frequencies where the time derivatives of the frequency are relatively less , the correlation would have been much higher as the shifts in the newtonian parmeters would have been sufficient for the purpose . in table[ tab5 ] we show the correlations obtained for band limited white noise and table [ tab6 ] shows the corresponding parameter shifts .we observe that in the case of the initial ligo the correlations obtained are less than those for the white noise case for higher values of the total mass and vice - versa .this can also be seen as an effect of narrow banding .the values of the shifts in the parameters is also much smaller in the case of white noise as a most of the contribution to the correlation comes from the lower frequencies where a small shift in is sufficient for the filter to match well with the signal .till now we have considered our filter bank to have an infinite number of filters i.e. we have allowed for a continuous variation of .however one is limited by the computing power available and one must confine oneself to a finite number of filters . thus in general the signal will be unable to achieve its maximum correlation .our aim is to estimate the drop in the correlation for a given computing speed .the maximum drop in the correlation because of the finiteness of the filter bank will have to be kept small .we consider a discrete set of newtonian filters corresponding to distinct values of the parameter .the filter spacing in the parameter is taken to be constant ( ) across the entire range of values can take ( see ) .we first consider the initial ligo and assume a 1 gigaflop machine on which we intend to do on - line search . the maximum time the signal lastsis found to be 25 secs for the mass range considered .however the data train needs to be padded with zeroes to four times the original length which is optimal for computational purposes ( see ) .this will increase the length of the data train to 100 secs .we allow for an overlap of 25 secs between consecutive data trains.thus we have 75 secs in which to calculate correlations where is the number of filters .we sample the waveform at 1000 hz .thus we get approximately data points per data train .we have to perform one fast fourier transform ( fft ) operation per filter .the fourier transforms will have already been calculated once and for all for the filters in the bank and one inverse fourier transform will have to be performed to obtain the correlation as a function of the time lag .the computation time will be mostly taken up by the ffts as each fft involves operations where is the number of points in the data train . 
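the step whose cost dominates the search, a single inverse transform yielding the correlation for every time lag at once, might look as follows; the conjugated, psd-weighted template spectrum is assumed to be precomputed and stored for each filter in the bank, as described above.

```python
import numpy as np

def correlation_vs_lag(data, template_freq, Sn):
    # one fft of the padded data train, one product in the frequency
    # domain with the stored (conjugated, psd-weighted) template, and one
    # inverse fft give c(tau) for all lags tau simultaneously
    df = np.fft.fft(data)
    return np.real(np.fft.ifft(df * np.conj(template_freq) / Sn))
```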
in this particular casetherefore each fourier transform will require about 6.4 million floating point operations ( mfo ) .this has to be compared with the number of floating point operations which can be carried out over the period of 75 secs which is mfos .thus the number of filters that can be accommodated is about 11700 filters .we require two filters for the phase for each value of . asthe maximum value of the coalescence time is 25 secs for the range of masses considered , we get a filter spacing in the parameter to be around 4.3 msecs . in the case of the advanced ligothis number is about 172 msecs .let where is the value of corresponding to the maximum correlation .figure [ fig4 ] shows how the correlation for a given signal normalized to its maximum value varies with along a line of curvature i.e. along the curve parameterised by along which the drop in the correlation is the slowest . in other wordsthe figure shows the correlation maximized over and as a function of .the curve has been plotted for the initial ligo and for , and .however the shape of this curve and the magnitudes in the drop of the correlation is insensitive to the values of and .we observe that even for shifts of 100msecs the correlation does not drop by more than . for the case of the advanced ligo this drop in the correlationis even lower .thus the filter spacing which we have calculated is sufficient for our purpose .we have demonstrated here the possibility of using newtonian filters for detecting the presence of a restricted post - newtonian signal . such a strategy would be very useful in providing a preliminary on line analysis of the data train .the analysis which we have carried out here is valid only for the point mass case where where is the reduced mass and the total mass . for the initial ligothe correlation is 0.65 on an average and for the advanced ligo it is around 0.45 .these are only the normalised correlations as we have already stressed before .the absolute values of the signal to noise will be much higher ( by a factor of about 20 ) for the advanced ligo .it must be noted that the drop of the correlation will translate into a loss in the event rate .the distance upto which we can detect the binary will come down by a factor equal to the normalised correlation .this means that for the initial ligo the distance to which we can detect the binary will be brought down by and for the advanced ligo it will be brought down by from their respective maximum ranges . in absolute termsthe advanced ligo will still be able to look further than the initial ligo .the effect of the discreteness of the filter bank in producing a further drop in the correlations was investigated .it was found that for a one gigaflop machine the drop in correlation due to the discreteness was very small . with better and faster machines we can make the bank of filters still more efficient .if we consider higher derivatives of frequency say _ etc ._ as parameters we should get a better match , but the computation is very likely to increase. 
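the sizing argument above reduces to a few lines of arithmetic, so it is easy to rerun for other machine speeds or sampling rates; the fft cost constant below is our assumption, chosen so that one transform costs roughly the 6.4 million operations quoted in the text.

```python
import numpy as np

# back-of-the-envelope filter-bank sizing for an on-line search, following
# the argument in the text; the fft cost model 3 n log2(n) is our assumption
flops_per_sec = 1e9                      # assumed 1 gigaflop machine
available_ops = flops_per_sec * 75.0     # 100 s train with 25 s overlap
n_points = 2 ** 17                       # ~100 s sampled at ~1000 hz, padded
ops_per_fft = 3 * n_points * np.log2(n_points)   # ~6.7e6 ops per transform
n_filters = available_ops / ops_per_fft          # ~11200 filters
xi_max = 25.0                            # max coalescence time, seconds
spacing = xi_max / (n_filters / 2.0)     # two phase filters per xi value
print(f"{n_filters:.0f} filters, xi spacing {spacing * 1e3:.1f} ms")
```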
it should be possible to construct filters which not only enable us to save on computation time but also span the set of signal waveforms adequately. a deeper analysis of the signal waveforms is in order so that efficient techniques can be developed. this work is now in progress.

this table displays the correlations for the initial ligo detector for a wide range of masses in units of solar masses. the black hole mass varies from 5 to 10 and the other mass takes the values 0.5, 1.0 and 1.4. [ tab1 ]
as coalescing binary systems are one of the most promising sources of gravitational waves, it becomes necessary to devise efficient detection strategies. the detection strategy should be efficient enough not to miss any detectable signal while at the same time minimizing the false alarm probability. the technique of matched filtering used in the detection of gravitational waves from coalescing binaries relies on the construction of accurate templates. until recently, filters modelled on the quadrupole or newtonian approximation were deemed sufficient. such filters or templates have, in addition to the amplitude, three parameters: the chirp mass, the time of arrival and the initial phase. recently it was shown that post-newtonian effects contribute a secular growth to the phase difference between the actual signal and its corresponding newtonian template. this affects the very foundation of the technique of matched filtering, which relies on the correlation of the signal with the filter and hence is extremely sensitive to errors in phase. in this paper we investigate the possibility of compensating for the phase difference caused by the post-newtonian terms by allowing for a shift in the newtonian filter parameters. the analysis is carried out for cases where one of the components is a black hole and the other a neutron star or a small black hole. the alternative strategy would be to increase the number of parameters of the lattice of filters, which might prove prohibitive in terms of computing power. we find that newtonian filters perform adequately for the purpose of detecting the presence of the signal for both the initial and advanced ligo detectors.
despite the rapid progress in designing fully autonomous systems , many systems still require human s expertise to handle tasks which autonomous controllers can not handle or which they have poor performance .therefore , shared autonomy systems have been developed to bridge the gap between fully autonomous and fully human operated systems . in this paper, we examine a class of shared autonomy systems , featured by switching control between a human operator and an autonomous controller to collectively achieve a given control objective .examples of such shared autonomy systems include robotic mobile manipulation , remote tele - operated mobile robots , human - in - the - loop autonomous driving vehicle .in particular , we consider control under temporal logic specifications .one major challenge for designing shared autonomy policies under temporal logic specifications is making trade - offs between two possibly competing objectives : achieving the optimal performance for satisfying temporal logic constraints and minimizing human s effort .moreover , human s cognition is an inseparable factor in synthesizing shared autonomy systems since it directly influences human s performance , for example , a human may have limited time span of attention and possible delays in response to a request . although finding an accurate model of human cognition is an ongoing challenging topic within cognitive science , markov models have been proposed to model and predict human behaviors in various decision making tasks . adopting this modeling paradigm for human s cognition, we propose a formalism for shared autonomy systems capturing three important components : the operator , the autonomous controller and the cognitive model of the human operator , into a stochastic _ shared - autonomy system_. precisely , the three components includes a markov model representing the fully - autonomous system , a markov model for the fully human - operated system , and a markov model representing the evolution of human s cognitive states under requests from autonomous controller to human , or other external events .the uncertainty in the composed system comes from the stochastic nature of the underlying dynamical system and its environment as well as the inherent uncertainty in the operator s cognition .switching from the autonomous controller to the operator can occur only at a particular set of human s cognitive states , influenced by requests from the autonomous controller to the operator , such as , pay more attention , be prepared for a possible future control action . under this mathematical formulation ,we transform the problem of synthesizing a shared autonomy policy that coordinates the operator and the autonomous controller into solving a with temporal logic constraints : one objective is to optimize the probability of satisfying the given temporal logic formula , and another objective is to minimize the human s effort over an infinite horizon , measured by a given cost function . the trade - off between multiple objectives is then made through computing the pareto optimal set .given a policy in this set , there is no other policy that can make it better for one objective than this policy without making it worse for another objective . 
in literature ,pareto optimal policies for s have been studied for the cases of long - run discounted and average rewards .the authors in proposed the weighted - sum method for s with multiple temporal logic constraints by solving pareto optimal policies for undiscounted time - bounded reachability or accumulated rewards .these aforementioned methods are not directly applicable in our problem due to the time unboundness in both satisfying these temporal logic constraints and the accumulated cost / reward . to this end, we develop a novel two - stage optimization method to handle the multiple objectives and adopt the so - called _ tchebychev scalarization method _ for finding a uniform coverage of all pareto optimal points in the policy space , which can not be computed via weighted - sum ( linear scalarization ) methods as the latter only allows pareto optimal solutions to be found amongst the convex area of the pareto front .finally , we conclude the paper with an algorithm that generates a pareto - optimal policy achieving the desired trade - off from user - defined weights for coordinating the switching control between an operator and an autonomous controller for a stochastic system with temporal logic constraints .we provide necessary background for presenting the results in this paper .a vector in is denoted where are the components of .we denote the set of probability distributions on a set by . given a probability distribution ] is defined such that given a state and an action , gives the probability of reaching the next state . is a finite set of atomic propositions and is a labeling function which assigns to each state a set of atomic propositions that are valid at the state . is a reward function giving the immediate reward for reaching the state after taking action at the state and is the reward discount factor . in this context , gives a probability distribution over the set of states . and both express the transition probability from state to state under action in .a _ path _ is an infinite sequence of states such that for all , there exists , .we denote to be a set of actions enabled at the state .that is , for each , .a _ randomized policy _ in is a function that maps a finite path into a probability distribution over actions .a deterministic policy is a special case of randomized policies that maps a path into a single action .given a policy , for a measurable function that maps paths into reals , we write ] ) for the expected value of when the starts in state ( resp .an initial distribution of states ) and the policy is used .a policy induces a probability distribution over paths in .the state reached at step is a random variable and the action being taken at state is also a random variable , denoted .we use to specify a set of desired system properties such as safety , liveness , persistence and stability . 
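as a sketch of the tchebychev scalarization mentioned above: it minimises a weighted chebyshev distance to the utopia (ideal) point, with each criterion normalised by its utopia-nadir range, and, unlike a weighted sum, it can reach pareto optimal points lying on non-convex parts of the front. the candidate value vectors and weights below are placeholders of ours.

```python
import numpy as np

def tchebychev_scalarize(values, utopia, nadir, weights):
    # weighted chebyshev distance of a policy's value vector to the utopia
    # point, each criterion normalised by its utopia-nadir range; minimising
    # this over policies can reach pareto optimal points on non-convex parts
    # of the front, which linear (weighted-sum) scalarization cannot
    w = np.asarray(weights, dtype=float)
    scale = np.abs(np.asarray(utopia) - np.asarray(nadir))
    return np.max(w * np.abs(np.asarray(values) - np.asarray(utopia)) / scale)

# toy usage: two criteria (probability of satisfying the formula, negated
# human-effort cost) and three candidate policy value vectors
candidates = [(0.9, -3.0), (0.7, -1.0), (0.5, -0.5)]
utopia, nadir = (0.9, -0.5), (0.5, -3.0)
best = min(candidates,
           key=lambda v: tchebychev_scalarize(v, utopia, nadir, (0.5, 0.5)))
# -> (0.7, -1.0), the balanced trade-off under equal weights
```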
in the following , we present some basic preliminaries for specifications and introduce a product operation for synthesizing policies in s under constraints .a formula in is built from a finite set of atomic propositions , , and the boolean and temporal connectives and ( always ) , ( until ) , ( eventually ) , ( next ) .given an formula as the system specification , one can always represent it by a where is a finite state set , is the alphabet , is the initial state , and is the transition function .the acceptance condition is a set of tuples .the run for an infinite word [1]\ldots \in ( 2^{{\mathcal{ap}}})^\omega ] .a run is accepted in if there exists at least one pair such that and where is the set of states that appear infinitely often in .we define a product operation between a labeled and a .[ def : product ] given a labeled and the , the _ product _ is , with components defined as follows : is the set of states . is the set of actions . ] is the transition probability function .given , , and , let .the reward function is defined as where given , , , for .the acceptance condition is . problem of maximizing the probability of satisfying the formula in is transformed into a problem of maximizing the probability of reaching a particular set in the product , which is defined next . the _ end component _ for the product is a pair where is a non - empty set of states and is a randomized policy .moreover , the policy is defined such that for any , for any , ; and the induced directed graph is strongly connected . here, is an edge in the directed graph if for some .an is an end component such that and for some . let the set of s in be denoted and the set of _ accepting end states _ be denoted by .note that , by definition , for each , by exercising the associated policy , the probability of reaching any state in is 1 . due to this property , once we enter some state , we can find at least one accepting end component such that , and initiate the policy such that for some , all states in will be visited only a finite number of times and some state in will be visited infinitely often .the set can be computed by algorithms in polynomial time in the size of .we aim to synthesize a shared autonomy policy that switches control between an operator and an autonomous controller .the stochastic system controlled by the human operator and the autonomous controller , gives rise to two different s with the same set of states , the same set of atomic propositions and the same labeling function , but possibly different sets of actions and transition probability functions .* autonomous controller : where ] is the transition probability function under human operator .let ] is the initial distribution . ] is the transition probability function , defined as follows . given a state and action , , which expresses that the controller acts and triggers an event that affects the operator s cognitive state . given a state for , and action , , which expresses that the operator controls the system and an event happens and may affect the cognitive state . ] is the initial distribution of states when entering the set . because for single objective optimization the optimal state value does not depend on the initial distribution , can be chosen arbitrarily from the set of distributions over .the physical meaning of is the discounted frequency of visiting the state , which is strictly smaller than the frequency of visiting the state as long as . 
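the discounted visitation frequencies just discussed can be written as an occupation-measure linear program; below is a minimal sketch on a toy 2-state, 2-action mdp of our own devising, standing in for the product mdp of the paper, with the randomized policy recovered by normalising the occupation variables.

```python
import numpy as np
from scipy.optimize import linprog

# occupation-measure lp sketch: the variables x[s, a] are discounted
# visitation frequencies, the balance constraints couple them through the
# transition probabilities, and the optimal randomized policy is recovered
# by normalisation.  all transition and reward data here are placeholders.
gamma = 0.9
S, A = 2, 2
P = np.zeros((S, A, S))
P[0, 0] = [0.8, 0.2]; P[0, 1] = [0.1, 0.9]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.0, 1.0]
r = np.array([[0.0, 1.0], [0.0, 2.0]])      # placeholder rewards
mu0 = np.array([1.0, 0.0])                  # initial distribution

# balance constraints, for every state s':
#   sum_a x[s', a] - gamma * sum_{s, a} P[s, a, s'] * x[s, a] = mu0[s']
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]

res = linprog(c=-r.flatten(), A_eq=A_eq, b_eq=mu0, bounds=(0, None))
x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)   # pi(a|s) = x[s,a] / sum_a x[s,a]
```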
by enforcing the constraints ,we ensure that the frequency of visiting every state in is non - zero , i.e. , all states in will be visited infinitely often .the solution to produces a memoryless policy that chooses action at a state with probability . using policy evaluation , the state value for each under the optimal policy can be computed .then , the terminal cost is defined as follows . andthe policy after hitting the state is such that .we now present algorithm [ alg : twostage ] to conclude the two - state optimization procedure .[ [ remark ] ] remark + + + + + + although in this paper we only considered two objectives , the methods can be easily extended to more than two objectives for handling specifications and different reward / cost structures in synthesis for stochastic systems , for example , the objective of balancing between the probability of satisfying an formula , the discounted total cost of human effort , and the discounted total cost of energy consumption .we apply algorithm [ alg : twostage ] to a robotic motion planning problem in a stochastic environment .the implementations are in python and matlab on a desktop with intel(r ) core(tm ) processor and 16 gb of memory .figure 4a shows a gridworld environment of four different terrains : pavement , grass , gravel and sand . in each terrain, the mobile robot can move in four directions ( heading north `n ' , south ` s ' , east ` e ' , and west ` w ' ) .there is onboard feedback controller that implements these four maneuver , which are motion primitives . using the onboard controller ,the probability of arriving at the correct cell is for pavement , for grass , for gravel and for sand . alternatively ,if the robot is operated a human , it can implement the four actions with a better performance for terrains grass , sand and gravel .the probability of arriving at the correct cell under human s operation is for pavement , for grass , for gravel and for sand .the objective is that either the robot has to visit region and then , in this order , or it needs to visit region infinitely often , while avoiding all the obstacles .formally , the specification is expressed with an formula .figure 4b is the cognitive model of the operator , including three states : , and represent that human pays low , moderate , and high attention to the system respectively .the costs of paying low , moderate and high attention to the system are , , and , respectively .action ` ' ( resp . ) means a request to increase ( resp .decrease ) the attention and action means a request to maintain the current attention .the operator takes over control at state .gridworld , where the disk represents the robot , the cells , , and are the interested regions , the crossed cells are obstacles .we assume that if the robot hits the wall ( edges ) , it will be bounced back to the previous cell .different grey scales represents different terrains : from the darkest to the lightest , these are `` sand , '' `` grass , '' `` pavement '' and `` gravel . ''( b ) the of the human operator.,scaledwidth=45.0% ] during control execution , we aim to design a policy that coordinates the switching of control between the operator and the autonomous controller , i.e. , onboard software controller. 
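for concreteness, the operator's cognitive model of figure 4b can be encoded as a small labelled mdp; the transition probabilities and attention costs below are illustrative guesses of ours, since the exact values did not survive in this text.

```python
# the operator's cognitive model (figure 4b) as a small mdp sketch; all
# probabilities and costs are illustrative placeholders, not the paper's
attention_cost = {"low": 0.0, "moderate": 0.5, "high": 1.0}
cognitive_mdp = {
    # (state, action) -> distribution over next cognitive states, where the
    # actions are requests to increase, decrease or keep the attention level
    ("low",      "up"):   {"moderate": 0.8, "low": 0.2},
    ("low",      "keep"): {"low": 1.0},
    ("moderate", "up"):   {"high": 0.8, "moderate": 0.2},
    ("moderate", "down"): {"low": 0.9, "moderate": 0.1},
    ("moderate", "keep"): {"moderate": 1.0},
    ("high",     "keep"): {"high": 0.7, "moderate": 0.3},
    ("high",     "down"): {"moderate": 0.9, "high": 0.1},
}
TAKEOVER = {"high"}   # the operator takes over control only at this state
```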
the policy should be pareto optimal in order to balance between maximizing the expected discounted probability of satisfying the formula , and minimizing the expected discounted total cost of human efforts .figure [ fig : gridworldpareto ] shows the state value for the initial state with respect to reward functions for the formula and for the cost of human effort , under the single objective optimal policy and , and a subset of pareto optimal policies , one for each weight vectors in the set .for the specification , all policies are randomized ., under policies , and a set of pareto optimal policies , for each .the -axis represents the values of the initial state for discounted probability of satisfying the specification .the -axis represents the values of the initial state with respect to the cost of human effort.,scaledwidth=40.0% ]we developed a synthesis method for a class of shared autonomy systems featured by switching control between a human operator and an autonomous controller . in the presence of inherent uncertainties in the systems dynamics and the evolution of humans cognitive states , we proposed a two - stage optimization method to trade - off the human effort for the system s performance in satisfying a given temporal logic specification .moreover , the solution method can also be extended for solving multi - objective s with temporal logic constraints . in the following ,we discuss some of the limitations in both modeling and solution approach in this paper and possible directions for future work .we employed two s for modeling the system operated by the human and for representing the evolution of cognitive states triggered by external events such as workload , fatigue and requests for attention .we assumed that these models are given .however , in practice , we might need to learn such models through experiments and then design adaptive shared autonomy policies based on the knowledge accumulated over the learning phase . in this respect ,a possible solution is to incorporate joint learning and control policy synthesis , for instance , pac - mdp methods , into multi - objective s with temporal logic constraints .another limitation in modeling is that the current cognitive model can not capture all possible influences of human s cognition on his performance .consider , for instance , when the operator is bored or tired , his performance in some tasks can be degraded , and therefore the transition probabilities in are dependent on the operator s cognitive states . in this case , we will need to develop a different product operation for combining the three factors : , a set of s for different cognitive states , and , into the shared autonomy system . despite the change in modeling the shared autonomy system ,the method for solving pareto optimal policies developed in this paper can be easily extended .consider a multiobjective where is a vector of reward functions and is the discount factor , let be the vectorial value function optimal for the -th criterion , specified with the reward function .an approximation of the nadir point for the -th criterion is computed as follows , where is a vector value function obtained by evaluating the optimal policy for the -th criterion with respect to the -th reward function .the weight vector after normalization is defined as b. pitzer , m. styer , c. bersch , c. duhadway , and j. 
becker, "towards perceptual shared autonomy for robotic mobile manipulation," in ieee international conference on robotics and automation, may 2011, pp. 6245-6251.
k. kinugawa and h. noborio, "a shared autonomy of multiple mobile robots in teleoperation," in proceedings of the ieee international workshop on robot and human interactive communication, 2001, pp. 319-325.
s. gnatzig, f. schuller, and m. lienkamp, "human-machine interaction as key technology for driverless driving - a trajectory-based shared autonomy control approach," in ieee international symposium on robot and human interactive communication, sept 2012, pp. 913-918.
w. li, d. sadigh, s. sastry, and s. seshia, "synthesis for human-in-the-loop control systems," in tools and algorithms for the construction and analysis of systems, ser. lecture notes in computer science, e. ábrahám and k. havelund, eds. springer berlin heidelberg, 2014, vol. 8413, pp. 470-484.
k. chatterjee, r. majumdar, and t. a. henzinger, "markov decision processes with multiple objectives," in symposium on theoretical aspects of computer science. springer, 2006, pp. 325-336.
v. forejt, m. kwiatkowska, and d. parker, "pareto curves for probabilistic model checking," in proceedings of the 10th international symposium on automated technology for verification and analysis, ser. lncs, s. chakraborty and m. mukund, eds., vol. 7561. springer, 2012, pp. 317-332.
p. perny and p. weng, "on finding compromise solutions in multiobjective markov decision processes," in proceedings of the 19th european conference on artificial intelligence. ios press, 2010, pp. 969-970.
i. das and j. e. dennis, "a closer look at drawbacks of minimizing weighted sums of objectives for pareto set generation in multicriteria optimization problems," structural optimization, vol. 14, no. 1, pp. 63-69, 1997.
k. chatterjee, m. henzinger, m. joglekar, and n. shah, "symbolic algorithms for qualitative analysis of markov decision processes with büchi objectives," formal methods in system design, vol. 42, no. 3, pp. 301-327, 2013.
d. henriques, j. g. martins, p. zuliani, a. platzer, and e. m. clarke, "statistical model checking for markov decision processes," in 9th international conference on quantitative evaluation of systems, 2012, pp. 84-93.
mouaddib, s. zilberstein, a. beynier, l. jeanpierre, et al., "a decision-theoretic approach to cooperative control and adjustable autonomy," in european conference on artificial intelligence, 2010, pp.
in systems in which control authority is shared by an autonomous controller and a human operator, it is important to find solutions that achieve a desirable system performance with a reasonable workload for the human operator. we formulate a shared autonomy system capable of capturing the interaction and switching of control between an autonomous controller and a human operator, as well as the evolution of the operator's cognitive state during control execution. to trade off the human's effort against the performance level, e.g., as measured by the probability of satisfying the underlying temporal logic specification, a two-stage policy synthesis algorithm is proposed for generating pareto efficient coordination and control policies with respect to user-specified weights. we integrate the tchebychev scalarization method for multi-objective optimization to obtain a better coverage of the set of pareto efficient solutions than linear scalarization methods provide.
the causal character of a singularity has a well defined meaning within the theory of conformal boundaries .the knowledge of this causal character is fundamental since whenever the spacetime possesses timelike or past null singularities there are always null geodesics which are past incomplete .if such a singularity could develop from a generic gravitational collapse in the framework of general relativity theory , this would mean that the theory would lose its predictability .the question on whether general relativity contains a built - in safety feature that precludes this possibility was put forward by penrose in 1969 and gave rise to what is known as the _ cosmic censorship conjecture _many counterexamples to the ccc have been proposed as well as many arguments in its favour ( see , for example , and references therein ) so that the question of the ccc remains open . in this article we will deal with probably the most interesting type of singularities in spherically symmetric spacetimes : the _ zero - areal - radius singularities _ , i.e. , given the _areal radius _ defined such that the area of a 2-sphere is , we will be interested in singularities _ at _ . in the current literaturethe study of the causal character of the singularities has been carried out for important particular solutions . in a few simple casesthe singular conformal boundary has been obtained by using a conformal compactification ( see , for instance , ) , while in most cases there is not an analytical compactification and , as an alternative method , the causal character of the singularities has been studied through the analysis of radial null geodesics around them ( see , for example , ) .in addition to the analysis of particular cases , this last technique allows some _ general _ approaches for studying zero - areal - radius singularities .in particular , it has led to show that the causal character of the singularity is related to the _ mass function _ .more general studies on the formation of _ naked _ singularities in spherically symmetric spacetimes along these lines can be found in , and , by using ad hoc devised procedure , in .furthermore , in , by using the techniques of the qualitative behaviour of dynamic systems on the differential equations satisfied by the radial null geodesics we were able to present the most comprehensive scheme so far to try to find out their causal characterization taking into account , and analyzing , the possible limitations of the approach . however , this work was carried out in specific coordinates ( the so called _ radiative coordinates _ ) , so that its results were restricted to models described in these coordinates .our aim in this article is to show that the causal character of the zero - areal - radius ( ) singularity in spherically symmetric models is related with some specific invariants .apart from being an interesting result from a theoretical point of view , this coordinate independent approach means that , if some assumptions are satisfied , one could find out the causal character of a model s singularity algorithmically through the computation of these invariants in arbitrary coordinates .in order to try to reach our goal we will base our approach in an analysis of the results in our previous article .we will show that our previous results admit an interpretation and rewriting in terms of some invariants and we will analyze and explicitly state the limits for the applicability of our results coming from our specific approach . 
on the other hand , throughout the article we will use a geometrical approach requiring only the existence of a spacetime , but not the fulfillment of einstein s equations .thus , we just try to discover the possibilities allowed by this geometrical approach which includes the classical as well as the semiclassical framework .the paper has been divided as follows : in section [ basis ] we revise well - known properties of spherically symmetric spacetimes and of the radial null geodesics , but emphasizing the corresponding degrees of differentiability for each defined object ( what will be an important aspect for the later development of the work ) . in section [ coordchange ]we analyze the relationship between general coordinate systems and the coordinate system used in .the different cases that appear when treating the causal character of singularities are treated from section [ m_neq0 ] on . in particular , section [ m_neq0 ]is devoted to the analysis of _ singularities with non - null mass function_. section [ sec_m=0 ] deals with the preliminaries required to the study of _ singularities with null mass function_. finally , sections [ sechyper ] , [ secnonhyper ] and [ secnoniso ] analyze every _ null mass function _ subcase in detail .let us consider a simply connected open set in a spherically symmetric spacetime and such that a part of its boundary consists of a interval .the metric line element of an oriented spherically symmetric spacetime can be ( and for practical purposes it is usually ) written in the local chart endowed with coordinates in the form where is an oriented two - dimensional lorentzian metric ( = ) , and . in the lorentzian two - surface orthogonal to the 2-spheres two nonvanishing null vector fieldsmay be defined such that they are linearly independent at each point . if some differentiability requirements are satisfied in the integral curves of the two null vector fields provide us with two families ( and ) of affinely parametrized null geodesics called the _ radial null geodesics_. take , for example , the geodesics belonging to the family satisfying where is their affine parameter .the theory of ordinary differential equations together with the definition of the christoffel symbols guarantees the existence and uniqueness of the affinelly parametrized geodesics provided that is at least and . from now on we will guarantee the existence and uniqueness of affinely parametrized null geodesics by assuming that is , where means and that the - order derivatives are locally lipshitz .on the other hand , with means . ] , and that and . under these assumptionsthe theory of odes also guarantees that the solution of the geodesic equation will be .since the radial null tangent vector field to the family has an associated covector satisfying }=0 $ ] , then it can be written as the differential of a function : .the curves define the trajectories of the family of null geodesics in . taking into account that the scalar invariant is a function we can define which is a scalar invariant function . this invariant is related to the invariant _ mass function _ through in order to investigate the causal characterization of a interval we will consider the radial null geodesics around a point in this interval . as we shown in procedure requires that the radial null geodesics from at least one family , say , _ reach _ ( or _ leave _ ) every point in the interval , what will be assumed in the next sections of this article . 
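the explicit relation between the invariant defined in this section and the mass function did not survive extraction here; for reference, we record the standard misner-sharp form, which we take to be the intended one (an assumption on our part):

```latex
% hedged reconstruction of the stripped relation: the misner-sharp mass
% function in terms of the areal radius r and the gradient of r
m \;=\; \frac{r}{2}\,\Bigl(1 - g^{\mu\nu}\,\partial_\mu r\,\partial_\nu r\Bigr)
```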
on the other hand ,let us comment that provided that a interval is not _ reached _ ( or _ left _ ) by radial null geodesics of any family then the causal characterization of the interval is straightforward since this interval can not be translated into a piecewise interval in the conformal boundary of the spacetime .it can only be translated into a _ point _ where the boundary is not a curve and , thus , where there is not tangent vector properly defining its causal character .as we mentioned in the introduction , the main goal of this article is to extend the coordinate dependent results presented in so that they can be used independently of the coordinate system chosen to work with , i.e. , to provide the causal characterization of -singularities in a invariant manner . in order to do this , in this section we will deal with the connection between a general coordinate system of the type used for ( [ mi ] ) and the coordinate system used in . [ coocha ] under the assumptions that the metric ( [ mi ] ) is , with , and ( or , equivalently , ) there exists a coordinate change , where is the areal coordinate and is a null coordinate , such that the metric ( [ mi ] ) can be locally written as where and , are functions .this lemma is based in the fact that , since is and is , there is a class map . the inverse function theorem ( see , for example , ) guarantees the existence of functions and such that and provided that the jacobian determinant is not null in .along this work we will denote the open set by .it follows that we can write the function as , where the chain rule guarantees that will also be at least a function in the variables . on the other hand ,the condition or , equivalently , implies that the vectors associated with these one - forms can not be parallel : is . in this way , taking into account that will be ( ) in we can define the invariant non - null constant note that it is invariant under future directed reparametrizations of . if ( or ) , the expansion of the null geodesics with tangent vector is positive ( negative , respectively ) in every point of .if we perform the coordinate change then , due to the light - like character of the coordinate , the metric of the spacetime ( [ mi ] ) will take the form : where and .the general future directed and affinely parametrized null vector and the future directed null vector tangent to the family satisfying can be written as where depends on the affine parameter chosen for . clearly , sign , so that we can rewrite as with .if we state explicitly the relationship between the component of the metric tensor in these new coordinates with regard to the old ones where , as usual , should be understood as , and taking into account that both and are functions in their respective variables , then the chain rule theorem implies that is at least a function . on the other hand ,if one evaluates the function with this form of the metric and uses ( [ bbeta ] ) one finds thus , taking into account ( [ bbeta ] ) and ( [ achi]),we obtain the required form for the metric ( [ metrad ] ) with the degree of differentiability stated in the lemma .[ corm ] under the assumptions in lemma [ coocha ] the mass function is a function .let us remark here that we do not need to find the explicit form ( [ metrad ] ) since we want to work in the original coordinates .in particular , the function can be written with the help of ( [ beta ] ) , and the appropriate labeling" . 
] , as a function of : this section we will discuss the causal character of in a point such that , where .this condition implies that there is a scalar curvature singularity at , so that does not belong to the spacetime , but to its singular boundary .if one radial null geodesic of , say , the family _ reaches _ then we have the following [ t_m_neq0 ] in case the spacetime metric ( [ mi ] ) is and there is a radial null geodesic reaching ( either toward its past or its future ) a -singularity at with a value of its affine parameter then : * if , there is a spacelike singularity at , * if , there is a timelike singularity at .this result can be found in our previous article where it was shown in specific coordinates .however , taking into account that is a scalar invariant , the proposition is true for other coordinate systems and it is , thus , an invariant result .let us reiterate that the requirement of , at least , a metric is a minimum assumption for the existence and uniqueness of radial null geodesics in the spacetime .the theorem only requires the existence of the _ directional _ limit along the radial null geodesic . as a corollary we have the following more applicable result : [ c_m_neq0 ] in case the spacetime metric is and , where is such that , then the causal character of the -singularity at is defined by the sign of the invariant mass function as the function approaches : * if then there is a spacelike singularity at , * if then there is a timelike singularity at is the most involved case .it is known that if the causal character of at admits any possibility : spacelike , lightlike or timelike .therefore , the question is : are in this case other invariants which define the causal character of independently of the coordinate system used ? in order to answer this , let us introduce in this section some new definitions and lemmas .for instance , in spherical symmetry there is an invariantly defined vector , the kodama vector , which is also known to possess very interesting properties ( see , for instance , and references therein ) : where denotes the volume form associated with the two - metric .it satisfies , so that if the orientation of can be chosen in such a way that is a future directed timelike vector .kodama s vector characterizes the spherically symmetric directions tangent to the hypersurfaces and provides and invariantly defined direction in which the area of the two - spheres remains constant . forthe metric ( [ metrad ] ) takes the form : which satisfies in the local chart .we have seen that , if the assumptions in lemma [ coocha ] are satisfied , the functions and are functions defined in an open set .we will now see that these functions can be extended beyond the open set provided that some conditions are fulfilled .in particular , we are mainly interested in an extension of the functions around that , while it has not any physical meaning , will allow us to apply the theory of the qualitative behaviour of dynamic systems in a open set centered in a point of .[ extens ] provided that there is a natural number such that the limits of the functions and and of their ith - order derivatives , for all , as every point in the boundary of the open set is approached , exist ( and are finite ) then the functions and admit a extension and . in order to show this it suffices to define the extended function and its derivatives for all points in the boundary of the open set as for all integers , such that . 
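as a quick symbolic check of corollary [ c_m_neq0 ], an example of ours rather than one taken from the text: for schwarzschild in ingoing eddington-finkelstein coordinates the misner-sharp mass function is constant, so a positive mass gives m -> M > 0 at r = 0 and hence a spacelike singularity, while flipping the sign of M gives a timelike (naked) one, both in agreement with the known conformal diagrams.

```python
import sympy as sp

v, r, M = sp.symbols('v r M', real=True)
# ingoing eddington-finkelstein form of schwarzschild:
# ds^2 = -(1 - 2M/r) dv^2 + 2 dv dr + r^2 dOmega^2  (only the (v, r) block
# is needed for the mass function)
g = sp.Matrix([[-(1 - 2 * M / r), 1], [1, 0]])
ginv = g.inv()
# misner-sharp mass m = (r/2)(1 - g^{ab} d_a r d_b r); here dr has
# components (0, 1) in the (v, r) chart, so only ginv[1, 1] contributes
grad_r_sq = ginv[1, 1]
mass = sp.simplify(sp.Rational(1, 2) * r * (1 - grad_r_sq))
print(mass)   # -> M, so lim_{r -> 0} m = M and its sign decides the
              #    causal character of the r = 0 singularity
```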
in this way the lemma can be considered as a simple case of whitney's extension theorem , and the existence of extensions and that coincide with and in is guaranteed . we will denote the extended domain of definition by . on the other hand , we want to work with the original coordinates and infer from here the extendibility of the functions and . we can do this by using the chain rule applied to the derivatives of and as the _ boundary points _ are approached . for example , if one is looking for a extension , according to lemma [ extens ] one needs , among others , the limit of in the boundary , but taking into account that can be written as a function of ( ) and ( ) , the existence of the limit is guaranteed if we require that the limits of , , and as every point in the boundary of the open set is approached exist ( and are finite ) and we also require that the limit of when approaching the same points is not zero . this can be formalized and generalized for extensions similarly : [ extenscn ] if the limits as every point in the boundary of the open set is approached exist ( and are finite ) for all integers , such that , and the limit of as the same points are approached exists and is not zero , then the functions and admit a extension and . in order to analyze the singular conformal boundary of a spherically symmetric spacetime it is possible to perform just the conformal compactification of the two - dimensional surface orthogonal to the _ 2-spheres _ , retaining all the important information . this is so because , by means of a coordinate change , the induced lorentzian metric or _ first fundamental form _ of the two - dimensional surface can be brought into a conformally flat form or , equivalently , where are lightlike coordinates ( , ) . in this way , it can be naturally embedded in an _ unphysical _ two - dimensional minkowskian spacetime ( see , for instance , ) . although an _ explicit _ conformal compactification can only be found for certain simple particular cases , some information can be extracted without fully following the procedure . for instance , with regard to the differentiability of the singular boundary we have the following [ teoc3 ] if and are functions in , and the assumptions in lemma [ extenscn ] ( for the existence of extensions ) are satisfied and , in particular , they are satisfied in a connected open interval of the singular boundary in which , then will be in the unphysical spacetime . in order to show this we will study the differentiability of in the unphysical spacetime by analyzing the coordinate change which takes the metric to the form ( [ dobnul ] ) . to begin with , note that , under the assumptions in the theorem , we can perform a first coordinate change from coordinates to such that and will be functions . that this is satisfied follows from the requirement of a metric in the theorem and lemma [ coocha ] together with corollary [ corm ] . furthermore , requiring the fulfillment of the assumptions in lemma [ extenscn ] implies that there will be extensions and . we know from ( [ metrad ] ) that the metric of the lorentzian surface can be written in coordinates as where .
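for orientation , the conformally flat double - null target form ( [ dobnul ] ) referred to above reads , in standard notation ( the conformal factor $\Omega$ and the coordinate names are our reconstruction , since the original symbols were lost ) :

\[ ds_2^2 \;=\; \Omega^2\!\left(-dT^2+dX^2\right) \;=\; -\,\Omega^2\,du\,dv , \qquad u = T - X , \quad v = T + X . \]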
in order to rewrite this in double - null coordinates we will look for an integrating factor such that the integrability condition for ( [ dv ] ) takes the form of the following first - order linear inhomogeneous pde in : where the functions , and are defined as . our assumptions imply that and are functions , while is . we now choose the curve ( which can be described as , where is a parameter defined in a connected open real interval ) as the curve of initial conditions for the pde and , in addition , we choose the initial condition on to be , where is an arbitrary function . then the tangent vector to ( ) and the characteristic direction have distinct projections on the ( , )-plane ( in ) . this , together with the fact that the pde is nondegenerate ( in an open neighborhood of , since ) , implies that the initial value problem has one and only one solution , which will be a function . in this way ( [ dv ] ) can now be used to evaluate the slope of the singular boundary in the unphysical spacetime : since there are only functions in the right - hand side of this differential equation , its solution will be a curve in the ( , )-plane . qed as a corollary , under these assumptions , admits a tangent vector whose causal character determines the causal character of . in other words , if the slope of the curve is negative , then the curve in the unphysical spacetime ( therefore , the singularity ) will be spacelike . likewise , if it is positive the singularity will be timelike . this just reiterates the results in theorem [ t_m_neq0 ] , which were then shown under less restrictive assumptions . the so far untreated case , in which the singularity can be timelike , lightlike or spacelike and in which the degree of differentiability of the singular boundary can be just , under the differentiability assumptions of this subsection will be the subject of the rest of the article . [ m0_mu ] in case and , where is a point in the singular boundary reached or left by a single null geodesic of the family , the causal character of around can be obtained through the invariants , provided that and are functions in and that the assumptions in lemma [ extenscn ] ( for the existence of extensions ) are satisfied . then the causal characterization around is inferred from , according to figure [ hyper ] . _ sketched _ characterization of the singularity when and , where the at is chosen to be reached by the radial null geodesic . as explained in the text , , and . in this case we have for . in this way , if the requirements in theorem [ teoc3 ] are satisfied the singular boundary should be for . ( in these sketches we just draw straight lines for instead of curves ) . however , in , where the singular boundary can be just . consider , for example , the first sketch in the column where a -spacelike singularity for must be abruptly followed by a lightlike singularity for . ] note that the invariant is independent of the parametrization of . in , the causal characterization in this case was obtained in radiative coordinates provided that and were functions .
that this requirement is satisfied follows from the requirement of a metric in the theorem and lemma [ coocha ] together with corollary [ corm ] . another requirement was that there should be extensions and , hence our requirement that the assumptions in lemma [ extenscn ] for the existence of extensions should be satisfied . on the other hand , it was shown that , in radiative coordinates , the causal characterization depends on , where we have chosen to analyze the -singularity around which describes the null geodesic reaching or leaving ( which is always allowed through a redefinition of ) . on the other hand , using the expression ( [ lk ] ) for and the kodama vector ( [ kodrad ] ) , it is easy to verify that , in radiative coordinates , the invariant can be written as , while the invariant is simply . therefore , taking the limit , we can rewrite the quantities , and in an _ explicit invariant form _ as . finally , in it was shown that the causal character can be read from a table ( figure [ hyper ] ) depending on these quantities . in this way , one first computes , and in arbitrary coordinates in order to get from them the invariants , and . then , if the assumptions in the theorem are satisfied , these invariants provide us with the causal characterization of the singularity for this case . [ m0_mu_mr ] in case , and , where is a point in the singular boundary reached or left by a single null geodesic of the family , the causal character of is given by the invariants , provided that and are functions in ( ) and that the assumptions in lemma [ extenscn ] ( for the existence of extensions ) are satisfied . then , where . ] whereas for ( i.e. , we assume that there is a finite such that it is the lowest value satisfying ) , provide us with the causal characterization of the singular boundary around according to figure [ semihyper ] . characterization of the singularity when , and . ] in ( sec . 6 ) the causal characterization in this case was obtained in radiative coordinates provided that and were ( ) functions admitting extensions . that this is satisfied follows , as in the previous theorem [ m0_mu ] , first , from the requirement of a metric in this theorem and lemma [ coocha ] together with corollary [ corm ] and , second , from the requirement that the assumptions in lemma [ extenscn ] for the existence of extensions and are satisfied . on the other hand , in it was shown that , in radiative coordinates , the causal characterization in this case depends on provides just an extra factor exp( ) which does not affect ( [ deln ] ) . ] , where for ( ) . but again these quantities are invariant . the second quantity ( [ mr ] ) has already been shown to correspond to . with regard to the first one , let us consider the simplest case .
in radiative coordinates the invariant is . ] since , in the limit as we will have . likewise , in case for and , we would have for . therefore we can write the two quantities ( [ deln ] ) and ( [ mr ] ) in an explicit invariant form . since these quantities define the causal character of in this case according to figure [ semihyper ] , the causal character has now been determined invariantly . [ m0_mu_mr0 ] in case , and , where is a point in the singular boundary reached or left by a single null geodesic of the family , the causal character of is given by the invariants , provided that and are functions and that the assumptions in lemma [ extenscn ] ( for the existence of extensions ) are satisfied . then the causal characterization can be found according to figures [ nilpotentodd ] and [ nilpotenteven ] through the computation of , where whereas for , whereas for , and is supposed to exist and to be finite . characterization of the singularity when , and and is odd . in the _ extra - conditions _ we use and from ( [ lambda ] ) . ] characterization of the singularity when , and and is even . in the _ extra - conditions _ we use . ] in ( sec . 6 ) the causal characterization in this case was obtained in radiative coordinates provided that and were ( , , where , are defined in ( [ sak ] ) and ( [ sbn ] ) , respectively ) functions admitting extensions . that this is satisfied follows , first , from the requirement of a metric in this theorem and lemma [ coocha ] together with corollary [ corm ] and , second , from the requirement that the assumptions in lemma [ extenscn ] for the existence of extensions and are satisfied . according to , the causal characterization in this case and for radiative coordinates depends on sign( ) , sign( ) and , where appear since in this work we do not choose . it has to be taken into account that if one now wishes to obtain and as in , there is a corresponding slight modification in the change of variables used there for this _ nilpotent case _ . ] with for and for . but again these quantities are invariant . the first quantity sign( ) has already been treated in the previous theorem , where we showed that it corresponds to sign . with regard to the second one , sign( ) , let us consider the simplest case . in radiative coordinates the invariant can be written as , which in the limit as provides us with , which is clearly related to ( [ sbn ] ) . if we should consider , ] which in the limit as provides us with . likewise , for we will have , to be compared with ( [ sbn ] ) . in this way we can write the two quantities ( [ sak ] ) and ( [ sbn ] ) and , thus , their signs in an explicit invariant form . finally , taking into account its definition , can also be written in an explicit invariant form . since these quantities define the causal character of in this case according to figures [ nilpotentodd ] and [ nilpotenteven ] , the causal character has now been determined invariantly for this case . [ noniso ] if in a connected open interval of then the causal character of is determined by the invariant , provided that and are functions in , that and are such that they admit extensions according to lemma [ extens ] and that the limit exists ( and is finite ) for all . then the causal characterization around is inferred from : * if then is timelike in this interval . ( this case includes both the possibility of a singular and of a _ regular _ interval ) . * if then is spacelike in this interval . * if then is lightlike in this interval . in ( sec .
7 ) the causal characterization in this case was obtained in radiative coordinates provided that and were functions admitting extensions and . that these requirements are satisfied thanks to the assumptions in the theorem has already been shown for theorem [ m0_mu ] . on the other hand , in it was shown that , in radiative coordinates , the causal characterization depends on , which we have shown can be written in an explicit invariant form as . in this way , the relationship between its value and the causal characterization follows directly from the results in ( sec . in this article we have shown that , provided some assumptions are satisfied , the causal character of the in spherically symmetric spacetimes depends on some specific invariants . this allows us to deduce the causal character of the singularity algorithmically . basically , one starts with the knowledge of the areal radius , the mass function , a tangent vector field to the radial null geodesics and the kodama vector field . from here one should compute the invariants , where , and the exact last value to be computed is determined by the values of the lowest order invariants as the singularity is approached ( and , respectively ) in the following manner : * if , this value suffices to characterize the singularity according to theorem [ t_m_neq0 ] provided the spacetime is at least ( i.e. , if the existence and uniqueness of radial null geodesics is guaranteed ) . * if at an isolated point _ in _ then different cases appear : * * if then we will also need . the causal characterization around the singular point can be inferred from figure [ hyper ] , if the assumptions in theorem [ m0_mu ] are satisfied . * * if and then one needs to compute , where we demand the existence of a finite such that it is the lowest value satisfying . the causal characterization around the singular point can then be inferred from figure [ semihyper ] , if the assumptions in theorem [ m0_mu_mr ] are satisfied . * * if and then one needs to compute , where we demand the existence of a finite such that it is the lowest value satisfying , and to compute , where we demand the existence of a finite such that it is the lowest value satisfying . the causal characterization around the singular point can be inferred from figures [ nilpotentodd ] and [ nilpotenteven ] , if the assumptions in theorem [ m0_mu_mr0 ] are satisfied . * if in an open interval of then the causal characterization of the singular interval can be deduced from sign( ) according to theorem [ noniso ] , if the assumptions in the theorem are satisfied . note that for every case some assumptions must be satisfied . these assumptions come mainly from the fact that our results are based on the application , in , of the qualitative theory of dynamical systems to the differential equations satisfied by the radial null geodesics . the application of the appropriate theorems to the analysis of these differential equations requires some degree of differentiability for the functions and . ( more details on this issue can be found in ) . likewise , the reader can consult for some applications of this technique to the study of the generation of naked singularities or black hole evaporation . we would like to thank j. m. m. senovilla and conan wu for helpful discussions . we would also like to acknowledge the _ generalitat de catalunya _ ( grant 2009sgr-00417 ) for financial support . penrose r 1963 _ phys . rev . lett . _ * 10 * 66 ; garcía - parrado a and senovilla j m m 2005 _ class . quantum grav .
_ * 22 * r1 ; fayos f and torres r 2011 _ class . quantum grav . _ * 28 * 215023 ; penrose r 1969 _ riv . nuovo cimento _ * 1 * 252 ; penrose r 1999 _ j. astrophys . astr . _ * 20 * 233 ; hawking s w and ellis g f r 1973 _ the large scale structure of space - time _ cambridge : cambridge university press ; volovich i v , zagrebnov v a and frolov v p 1976 _ teoret . mat . fiz . _ * 29 * 191 ; hiscock w a , williams l g and eardley d m 1982 _ phys . rev . d _ * 26 * 751 ; kuroda y 1984 _ prog . theor . phys . _ * 72 * 63 ; eardley d m and smarr l 1979 _ phys . rev . d _ * 19 * 2239 ; christodoulou d 1984 _ commun . math . phys . _ * 93 * 171 ; ori a and piran t 1990 _ phys . rev . d _ * 42 * 1068 ; hayward s a 1996 _ phys . rev . d _ * 53 * 1938 ; misner c w and sharp d h 1964 _ phys . rev . _ * 136 * b571 ; lake k 1992 _ phys . rev . lett . _ * 68 * 3129 ; singh t p 1999 _ class . quantum grav . _ * 16 * 3307 ; giambò r , giannoni f , magli g and piccione p 2003 _ class . quantum grav . _ * 20 * l75 ; plebański j and krasiński a 2006 _ an introduction to general relativity and cosmology _ cambridge : cambridge university press ; arnold v i 1992 _ ordinary differential equations _ berlin heidelberg : springer - verlag ; hernández w c and misner c w 1966 _ astrophys . j. _ * 143 * 452 ; cahill m e and mcvittie g c 1970 _ j. math . phys . _ * 11 * 1382 ; zannias t 1990 _ phys . rev . d _ * 41 * 3252 ; boothby w m 1986 _ an introduction to differentiable manifolds and riemannian geometry _ florida : academic press inc . ; kodama h 1980 _ prog . theor . phys . _ * 63 * 1217 ; abreu g and visser m 2010 _ phys . rev . d _ * 82 * 044027 ; bengtsson i and senovilla j m m 2011 _ phys . rev . d _ * 83 * 044012 ; whitney h 1934 _ trans . amer . math . soc . _ * 36 * 63 ; courant r and hilbert d 1989 _ methods of mathematical physics _ new york : john wiley & sons
the causal character of singularities is often studied in relation to the existence of naked singularities and the subsequent possible violation of the cosmic censorship conjecture . generally , one constructs a model in the framework of general relativity described in some specific coordinates and finds an _ ad hoc _ procedure to analyze the character of the singularity . in this article we show that the causal character of the zero - areal - radius ( ) singularity in spherically symmetric models is related to some specific invariants . in this way , if some assumptions are satisfied , one can ascertain the causal character of the singularity algorithmically through the computation of these invariants and , therefore , independently of the coordinates used in the model .
recently , stochastic partial differential equations ( spdes ) have provided a quantitative description for a lot of mathematical models in areas such as physics , engineering , biology , geography and finance . many specialists take a strong interest in spdes and develop their mathematical theories and analytical techniques . however , it is difficult to obtain the analytical solutions of spdes . thus , how to estimate the numerical solutions of spdes has become a fast growing research area . a lot of modern numerical tools have been devised to solve spdes , such as the stochastic collocation method , the itô – taylor expansion method and the finite element method combined with monte carlo and quasi - monte carlo methods . but there are still a great many unsolved problems concerning the numerical solutions of spdes . the books show how to use the kernel - based approximation method to solve deterministic high - dimensional pdes , and moreover , this method can also be applied to the stochastic models mentioned in . it offers us a new idea to apply the kernel - based approximation method ( kernel - based collocation method , meshfree approximation method ) to obtain numerical solutions of high - dimensional spdes given in our recent paper and doctoral thesis . its approximate basis induced by the positive definite kernel ( reproducing kernel ) is different from the polynomial basis of the finite element method , which means that the construction of the kernel basis does not need an underlying triangular mesh . furthermore , the data points can be flexibly chosen for the use of either deterministic or random designs , e.g. , halton points or sobol points . after discussing this fresh numerical method with many mathematicians and engineers , we decided to renew and improve the analytical results and numerical algorithms of our previous papers following their helpful suggestions . we want to make this numerical tool for spdes accessible to the interdisciplinary fields of both computational mathematics and statistics . in section [ sec : gauss - rk - pb ] , we extend [ theorem 3.1 , ] and [ theorem 7.2 , ] into theorem [ t : gauss - pdk - pb ] such that we can replace integral - type kernels by more general positive definite kernels to construct gaussian fields in the classical -based sobolev spaces instead of the reproducing kernel hilbert spaces .
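as a concrete illustration of the gaussian fields constructed below , the following sketch samples one path of a mean - zero gaussian field whose covariance kernel is a matérn ( sobolev - spline ) kernel ; the kernel , the shape parameter and the grid are illustrative choices , not the paper's :

```python
import numpy as np
from scipy.spatial.distance import cdist

def matern32(X, Y, sigma=5.0):
    # matern kernel of smoothness 3/2; its reproducing kernel hilbert space
    # is norm-equivalent to a sobolev space (cf. example [ex:sobolev-spline])
    r = cdist(X, Y)
    return (1.0 + sigma * r) * np.exp(-sigma * r)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)       # evaluation grid in (0,1)
K = matern32(x, x)
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))  # tiny jitter for stability
path = L @ rng.standard_normal(len(x))              # one sample path
```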
we will employ these gaussian fields to introduce the kernel - based approximate solutions of spdes similarly to the techniques of . their approximate kernel bases are set up by the positive definite kernels with the related differential and boundary operators defined as in the equation , and the covariance matrices of the gaussian fields at the collocation points given in the formula are used to compute their expansion random coefficients . the blocks of the covariance matrices correspond to the pairs of collocation points . the covariance matrices can be seen as generalizations of the traditional kernel - based interpolation matrices discussed in . section [ sec : ell - spde - ker - sol ] complements the construction processes and the proofs of the kernel - based approximation method , and we even give new kernel - based approximate results for solving a system of high - dimensional linear elliptic spdes driven by various kinds of right - hand - side random noises , which were left unsolved in the final remark section of . the kernel - based approximate solution is obtained to fit the observation values simulated by the elliptic spdes . the kernel - based approximate solution of the elliptic spdes is a linear combination of the kernel basis , and moreover , its expansion random coefficients are solved by a random linear system whose random parts are simulated by the elliptic spdes ( see the equations ( [ eq : ker - based - sol - linear - ell]-[eq : ker - based - sol - coef - linear - ell ] ) ) . proposition [ p : prob - convergence - linear - ell - spde ] shows that the errors of the kernel - based estimators can be bounded by the fill distances in the probability sense . the fill distance denotes the radius of the largest ball which is completely contained in the space domain and which does not contain any of the chosen collocation points . this means that the kernel - based approximate solutions converge to the exact solutions of the elliptic spdes in probability and in distribution . we present more details of the error analysis for the elliptic spdes than in , and we discuss the convergence analysis with tools from statistical learning , replacing the maximum error bound by a confidence interval , in a way that differs from the classical kernel - based approximation method for deterministic pdes . in section [ sec : par - spde - ker - sol ] , we use the implicit euler scheme to discretize the parabolic spde driven by time and space lévy noises at time in order to transform it into several elliptic spdes , one at each discretization time step . next we solve these elliptic spdes by the kernel - based approximation method . we also briefly discuss the convergence of the kernel - based approximate solutions of the parabolic spdes , which was left open in . we will consider many other time - stepping schemes and their convergence rates in our future research . the numerical examples for the sobolev - spline kernels and the compact support kernels show that the approximate probability distributions are well - behaved for the second - order parabolic spdes driven by the time and space poisson noises ( see figure [ fig : par - spde - num - exa ] ) . since the covariance matrices of the compact support kernels are sparse , we can solve the related linear systems as fast as the finite element method . more numerical examples will be posted on the author's personal webpage .
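the sparsity claim above is easy to check numerically ; here is a small sketch with the wendland $c^2$ function as a stand - in for the paper's compact support kernel ( the support radius and the point set are illustrative assumptions ) :

```python
import numpy as np
from scipy import sparse

def wendland_c2(r):
    # wendland function (1 - r)_+^4 (4 r + 1): positive definite for
    # dimensions d <= 3 and supported on [0, 1]
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(1)
X = rng.random((2000, 2))                       # random points in (0,1)^2
rho = 0.05                                      # support radius (illustrative)
r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K = sparse.csr_matrix(wendland_c2(r / rho))     # sparse covariance matrix
print(f"nonzero entries: {K.nnz / K.shape[0] ** 2:.2%}")
```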
in our next paper , we will use the extended theoretical results given in this article to set up the kernel - based estimators for the nonlinear spdes driven by lévy noises . in our previous papers and doctoral thesis on spdes , the kernel - based approximation method is also called the kernel - based collocation method . but some people may confuse the original name with a different method known as stochastic collocation . in the same way as in the book , we call this numerical method by its general name , and moreover , its estimator is called the kernel - based approximate solution or the kernel - based solution in this article . the kernel - based approximation method , the kernel - based collocation method and the meshfree approximation method are the same in all our spdes papers . we want to make it convenient for engineers and computer scientists to understand the kernel - based approximation method and avoid its technical details and proofs . in the beginning we give a traditional example of the parabolic spde to explain the kernel - based approximation processes and algorithms in a simple way . let be a regular bounded open domain of and be a -based sobolev space of degree . suppose that is a poisson noise in with the form where is an orthonormal subset of , is a positive sequence and are the independent scalar poisson processes with parameter for all . we consider the second - order parabolic spde driven by where is a laplace differential operator and . suppose that this parabolic spde is well - posed and its solution almost surely such that . the proposed numerical method for solving the spde can be described as follows : 1 . discretize the spde in time by the implicit euler scheme at equally spaced time points , i.e. , where , and . 2 . let . we choose a finite collection of predetermined pairwise distinct collocation points and a symmetric positive definite kernel to construct the basis and the covariance matrix ( interpolating matrix ) here and mean that we differentiate the kernel function with respect to its first and second arguments , respectively , i.e. , . we can also compute that and . 3 . because the white noise increment at each time instance is independent of the solution , the noise term is well - defined and we can simulate at , i.e. , let for . combining the equation with the dirichlet boundary condition we obtain the elliptic spde where is seen as an unknown part , and and are viewed as given parts . the kernel - based solution of the spde can be written as and its random expansion coefficients and are computed by the random linear equation where . + this means that is approximated by for all . 4 . repeat ( s3 ) for all . we can also create an algorithm to obtain the sample paths of the spde ( a code sketch follows below ) : + initialize * and are given in the equations ( [ eq : ker - basis - exa ] ) and ( [ eq : cov - matrix - exa ] ) for . * . * . * . repeat * simulate for all . * for all . * .
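the loop above can be sketched in code . the following is a minimal one - dimensional illustration , not the paper's implementation : we take the gaussian kernel ( so the required derivatives are available in closed form ) , zero dirichlet data , a truncated poisson noise with modes $e_j(x)=\sqrt{2}\sin(j\pi x)$ and weights $\lambda_j=j^{-2}$ , and the initial condition $\sin(\pi x)$ ; all of these choices are hypothetical stand - ins for the stripped formulas :

```python
import numpy as np

theta, dt, T, lam, J = 20.0, 0.01, 0.2, 1.0, 10
rng = np.random.default_rng(2)

x_int = np.linspace(0.0, 1.0, 32)[1:-1]    # interior collocation points
x_bdy = np.array([0.0, 1.0])               # boundary collocation points
x_all = np.concatenate([x_int, x_bdy])
ni, nb = len(x_int), len(x_bdy)

# gaussian kernel K(x,y) = exp(-theta^2 (x-y)^2) and its even-order
# derivatives in s = x - y (note d^2/dx^2 K = d^2/dy^2 K for this kernel)
f0 = lambda s: np.exp(-theta**2 * s**2)
f2 = lambda s: (-2*theta**2 + 4*theta**4 * s**2) * f0(s)
f4 = lambda s: (12*theta**4 - 48*theta**6 * s**2 + 16*theta**8 * s**4) * f0(s)

S = x_all[:, None] - x_all[None, :]
# implicit euler operator P = I - dt * d^2/dx^2 applied to one/both arguments
PK = f0(S) - dt * f2(S)
PPK = f0(S) - 2.0 * dt * f2(S) + dt**2 * f4(S)

A = np.empty((ni + nb, ni + nb))           # symmetric collocation matrix
A[:ni, :ni] = PPK[:ni, :ni]                # P applied to both arguments
A[:ni, ni:] = PK[:ni, ni:]                 # P on one argument, boundary columns
A[ni:, :ni] = PK[ni:, :ni]                 # P on one argument, boundary rows
A[ni:, ni:] = f0(S)[ni:, ni:]              # plain kernel at boundary pairs

def basis(y):                              # kernel basis k(y) evaluating u
    s = y[:, None] - x_all[None, :]
    B = f0(s)
    B[:, :ni] -= dt * f2(s[:, :ni])        # P applied to the second argument
    return B

jj = np.arange(1, J + 1)
E = np.sqrt(2.0) * np.sin(np.pi * np.outer(x_int, jj))   # modes e_j at x_int

u = np.sin(np.pi * x_int)                  # hypothetical initial condition
for _ in range(int(T / dt)):
    dN = rng.poisson(lam * dt, size=J) - lam * dt  # centred poisson increments
    xi = E @ (dN / jj)                             # truncated noise at x_int
    coef = np.linalg.solve(A, np.concatenate([u + xi, np.zeros(nb)]))
    u = basis(x_int) @ coef                        # next kernel-based iterate
```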
in this section we want to extend the theoretical results on the integral - type kernels and the related reproducing kernel hilbert spaces given in [ lemma 2.2 and theorem 3.1 , ] and [ lemma 7.1 and theorem 7.2 , ] so that we can apply many kinds of positive definite kernels and the related sobolev spaces to create gaussian fields . these gaussian fields and positive definite kernels are used to introduce the kernel - based solutions of spdes in the following sections . [ d : pdk ] a continuous symmetric kernel is called _ positive definite _ on if , for all and all sets of pairwise distinct centers , the quadratic form , or equivalently the matrix , is positive definite . let be the -based sobolev space of degree defined as in section [ sec : diffbound ] and be the borel -algebra on . suppose that is a _ regular _ bounded open domain of for , and the symmetric kernel function is a positive definite kernel on . further suppose that . according to the discussions in section [ sec : rk - rkhs ] , the kernel function has the positive eigenvalues and the continuous eigenfunctions such that , and it possesses the absolutely and uniformly convergent representation . to simplify the notation , we write for the linear differential and boundary operators and defined in the equations ( [ eq : diff]-[eq : bound ] ) . here , and , mean that we differentiate the kernel function with respect to its first and second arguments , respectively . kolmogorov's extension theorem guarantees the existence of countably many independent standard normal random variables on some probability space , i.e. , . since for all , the stochastic process is well - defined on . according to [ theorem a.19 , ] , the random variable is normal for all . we can also compute the mean and the covariance for all . therefore is a gaussian field with mean and covariance kernel ( see definition [ d : gaussian ] ) . [ l : gauss - sobolev ] almost all sample paths of the gaussian field defined in the formula belong to , i.e. , for almost all . denote that . since , we have in the -norm . we fix any . according to lemma [ l : rk - diff - bound ] , the eigenfunctions and the convergent representation is absolute and uniform on . since is bounded , the map and the representation is also convergent in the -norm . thus , the sequence is a cauchy sequence in the hilbert space because when . this ensures that there exists a such that in the -norm . combining the above results with lemma [ l : sobolev - diff - bound ] , we can conclude that , which indicates that for almost all . let be the reproducing kernel hilbert space of a sobolev spline kernel with degree and shape parameter defined in example [ ex : sobolev - spline ] . since is equivalent to the sobolev space , the measurable spaces , where and are the borel -algebras on and . thus belongs to for almost all . [ l : gauss - pdk ] suppose that is a regular bounded open domain of and the symmetric positive definite kernel for . then , for any fixed , there exists a probability measure defined on the measurable space such that the stochastic process is a gaussian field with mean and covariance kernel on the probability space . according to lemma [ l : gauss - sobolev ] , almost all the sample paths of the gaussian field defined in the formula belong to . lemma [ l : gauss - rkhs ] provides that the probability measure induced by is well - defined on , and has the same probability distribution as . this shows that is a gaussian field with mean and covariance kernel on the probability space .
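returning to definition [ d : pdk ] , positive definiteness is easy to probe numerically : the gram matrix at any set of pairwise distinct centers must have strictly positive eigenvalues . a minimal sketch ( the matérn kernel and the random centers are illustrative choices ) :

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((50, 2))                            # pairwise distinct centers
r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K = (1.0 + 10.0 * r) * np.exp(-10.0 * r)           # matern(3/2) gram matrix
print(np.linalg.eigvalsh(K).min())                 # strictly positive => p.d.
```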
because of the fact that , we have and for all . we can set up another probability measure by shifting for , i.e. , . since and on for , the stochastic process is a gaussian field with mean and covariance kernel on the probability space . using lemma [ l : gauss - pdk ] we can set up many other kinds of gaussian fields with respect to differential and boundary operators on the above probability space . [ t : gauss - pdk - pb ] suppose that is a regular bounded open domain of and the symmetric positive definite kernel for . let and be the linear differential and boundary operators of orders and , respectively ( see the equations ( [ eq : diff]-[eq : bound ] ) ) . then , for any fixed , there exists a probability measure ( independent of and ) defined on the measurable space such that the stochastic processes are the gaussian fields with means , and covariance kernels , on the probability space , respectively . denote that , where is the gaussian field given in lemma [ l : gauss - pdk ] , because and . if we can verify that and are the gaussian fields with means and covariance kernels and , then the proof is completed . since is a gaussian field with mean and covariance kernel defined on the probability space , the karhunen representation theorem provides that , where the random variables on . according to lemma [ l : sobolev - diff - bound ] , and have the representations and . combining the expansions of and with lemma [ l : rk - diff - bound ] , we can solve the means and covariance kernels of and , i.e. , and . since , we have and . according to the above deductions , we can conclude that and are gaussian with means and covariance kernels and , respectively . the construction of gaussian fields in theorem [ t : gauss - pdk - pb ] is analogous to the form of wiener measure defined on the measurable space , called the canonical space , such that the coordinate mapping process is a brownian motion . why do we need these kinds of gaussian fields ? because they help us to produce the normal random variables associated with the differential and boundary operators of spdes and the given collocation points located in the space domain and on its boundary . using their joint probability density functions and conditional probability density functions , we can obtain the kernel - based solutions to fit the observation values which are simulated by the spdes . we choose any linear differential and boundary operators , and , of orders , and , , respectively . in the same manner as in the proof of theorem [ t : gauss - pdk - pb ] , we can compute their covariances : , and . next we consider the vector linear differential and boundary operators composed of finitely many linear differential and boundary operators of the orders and , respectively , where their orders are denoted by . given the pairwise distinct collocation points , we can use the gaussian fields and to create the related multi - normal vector on by theorem [ t : gauss - pdk - pb ] , where the mean and covariance matrix of can be computed by the same method as above . [ c : mean - cov - spbx ] the multi - normal vector given in the equation defined on the probability space has the mean and the covariance matrix , where we let the linear operator be a differential operator of order or a boundary operator of order , i.e. , or . we fix any data point or corresponding to the operator , i.e.
, when then , or when then . since theorem [ t : gauss - pdk - pb ] shows that the random variable and the random vector are both normal on , the conditional probability density function of given has the explicit form , where and are the joint probability density functions of and . [ c : conpdf - lsx - spbx ] the conditional probability density function of the random variable given the random vector defined on the probability space ( discussed as in the above paragraph ) has the form for and , with the mean and the variance , where and . ( here and are the mean and covariance matrix of solved in corollary [ c : mean - cov - spbx ] . ) in particular , for the given observation values , the probability density function of given is equal to . since the covariance matrix is always semi - positive definite , its pseudo - inverse is well - behaved . we observe that is the kernel basis of the kernel - based solutions , and the variance function , which is equal to the power function , is used to estimate the error bound . according to theorem [ t : gauss - pdk - pb ] and fubini's theorem we can use the gaussian field defined on to obtain that for all , which indicates that . markov's inequality provides that . according to the construction of the gaussian field defined on , we can introduce the following corollary . [ c : pdk - gauss - norm ] suppose that the probability space is defined as in lemma [ l : gauss - pdk ] . for any , the subsets have the probabilities . in this section , we employ similar techniques as in to introduce the kernel - based solutions of the elliptic spdes and their convergence rates . let be a regular bounded open domain of , and let the vector linear differential and boundary operators and of the orders and , whose elements are defined as in the equations ( [ eq : diff]-[eq : bound ] ) with the coefficients and , be linearly independent . denote that such that is bijective for any fixed and . suppose that the vector noise consists of finitely many independent stochastic processes defined on the probability space for . we consider a system of elliptic spdes driven by as follows : where and are the deterministic functions for and . suppose that the spde is well - posed for its differential and boundary operators . thus , when and converge to the left - hand sides of the spde , the estimator converges to the exact solution of the spde at the same rate , e.g. , the maximum principle for the laplace differential operator gives for the heat spdes with dirichlet boundary conditions . ( here the notation means that if there is a positive constant such that . ) further suppose that the solution belongs to the sobolev space almost surely . we choose the sets of pairwise distinct collocation points from the domain and its boundary , i.e. , . the _ fill distance _ of for and is denoted by , where . let be the joint probability density functions of the random vectors for all . using the same techniques as in section [ sec : simulation - random ] , can be simulated by their joint probability density functions . since are independent and is a bijective function for any fixed and , we can use to obtain the joint probability density function of . next we find an appropriate positive definite kernel to introduce the kernel - based solutions of the spdes . the covariance matrix given in the equation ( see corollary [ c : mean - cov - spbx ] ) will be used to evaluate the coefficients of the kernel - based solutions later .
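the fill distance just defined can be approximated numerically by maximizing the distance to the nearest collocation point over a dense probe of the domain ; in the following sketch the unit square , the random points and the helper name `fill_distance` are illustrative assumptions :

```python
import numpy as np
from scipy.spatial import cKDTree

def fill_distance(colloc, probe):
    # h = sup_{x in D} min_j |x - x_j|, maximized over a dense probe set
    d, _ = cKDTree(colloc).query(probe)
    return d.max()

rng = np.random.default_rng(4)
X = rng.random((100, 2))                 # collocation points in (0,1)^2
G = rng.random((200_000, 2))             # dense probe of the domain
print(fill_distance(X, G))
```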
since we need to be nonsingular , we suppose that the symmetric positive definite kernel satisfies the condition , where is the point evaluation functional at . according to [ theorem 16.8 , ] , the condition ensures that all covariance kernels and are positive definite on and , respectively , for and . the linearly independent condition even indicates that the covariance matrix is positive definite . so the inverse of exists and . show that can contain sufficiently many polynomials . one technique to verify the condition is to find a polynomial such that if and only if , for any finite pairwise distinct collocation points . if , as a special case , the kernel function has the form for a positive definite function and all coefficients of the differential and boundary operators of the spde are scalars , then [ corollary 16.12 , ] provides that the condition is always true . the main reason for the condition is to let the covariance matrix be always nonsingular so that the system of linear equations is uniquely solvable for any choice of collocation points . this means that the condition can be replaced by choosing well - distributed collocation points , dependent on the differential and boundary operators of the spde , such that is nonsingular . actually , we may not need the condition to obtain the kernel - based solutions because we could solve the linear system by the least squares method for the semi - positive definite matrix ; however , we want to employ the theorems on power functions given in to introduce the convergence rate directly . in this paper we always assume that the kernel function satisfies the condition in order to avoid technical discussions and reproofs of similar theorems in . but we can still introduce a similar convergence rate by the techniques of the proofs of [ theorems 14.4 and 14.5 , ] without the condition . since , [ theorem 10.45 , ] shows that the reproducing kernel hilbert space . using theorem [ t : gauss - pdk - pb ] , for any fixed , we can create a probability measure on the measurable space such that the stochastic processes are the gaussian fields with means , and covariance kernels , defined on the probability space , for and . since we have two different kinds of probability spaces and , it is necessary for us to combine them into a new product probability space . we define the tensor product probability space and let all the original random variables be extended in the natural way : if the random variables and are defined on and , respectively , then their extensions are . the extensions and preserve the original probability distributions , and they are also independent on the product probability space . this means that the gaussian fields induced by the chosen positive definite kernels and the noise terms of spdes can be extended to the product probability space while preserving the original probability distributional properties , and moreover , their extensions are independent , e.g. , the extensions of the gaussian field and the noise for are independent on . in addition , since the solution of spdes can be seen as a mapping from into , we have for all . for any , we let . we simulate the observation values of the right - hand side of the spde at the collocation points , i.e. , , and denote that . we let , where the random vector induced by the gaussian fields and at the collocation points is defined in the equation . the approximate probability measure is used to set up the kernel - based solution to approximate the exact solution of the spde . let for and .
using the same techniques as in , the _ kernel - based approximate solution _ is a global maximizer of the conditional probability , i.e. , , where is the conditional probability density function of the random variable given defined on ( see corollary [ c : conpdf - lsx - spbx ] ) , and the kernel basis is given in the equation . here we can think of and as fixed values used to find the approximate mean . the kernel - based solution can also be written as the linear combination of the kernel basis , and its random expansion coefficients are solved by the random linear equations . it is obvious that satisfies the interpolation conditions at the collocation points almost surely , i.e. , and , and moreover , for all , because the random part of is only associated with its random expansion coefficients , which are uniquely determined by the random vector and the deterministic vector . we can formally rewrite as . this means that can be seen as a random variable defined on the finite - dimensional probability space so that its probability distribution is the same as in the original version , where the probability measure and is the joint probability density function of . we now describe the convergence of the kernel - based solutions . let belong to or , corresponding to the linear operator , which is equal to the differential operator for or the boundary operator for , i.e. , when then , or when then . we define , where the kernel - based solution and the observation values induced by the linear spde are the same as in section [ sec : lin - ell - spde - ker - sol ] . we fix any . according to section [ sec : powerfun ] , the conditional mean of the conditional probability density function stated in corollary [ c : conpdf - lsx - spbx ] has the uniform representation , independent of , i.e. , . when the fill distance is small enough , chebyshev's inequality provides , where is the dirac delta function at and is the joint probability density function of . since the variance function is equal to the power function stated in section [ sec : powerfun ] , we have , because if and only if for all . we can find a and a such that for all and . this indicates that . since the spde is well - posed for its differential and boundary operators , we conclude that : [ p : prob - convergence - linear - ell - spde ] the kernel - based approximate solution given in the equations ( [ eq : ker - based - sol - linear - ell]-[eq : ker - based - sol - coef - linear - ell ] ) is convergent to the exact solution of the spde in probability for all when the fill distance tends to , i.e. , for any and any , where . [ r : error - linear - spde ] more precisely , the convergence rate can be represented as , which indicates that one choice of the optimal designs of and has the form . actually , the error bounds of the kernel - based estimators can also be described in terms of the number of the collocation points and the dimension of the domain space , and moreover , the worst - case errors for some special kernel functions can even be dimension - independent and decay as a polynomial in terms of , as done in . for the deterministic problem , the maximum error of the kernel - based estimator is bounded by the power function .
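before turning to the stochastic case , note that the power function just mentioned can be computed directly from the kernel : $p(y)^2 = k(y,y) - k(y)^{\mathsf{t}} \mathrm{a}^{-1} k(y)$ , which vanishes at the collocation points . a sketch for plain interpolation with a gaussian kernel ( all parameter values are illustrative ) :

```python
import numpy as np

def gauss(X, Y, theta=20.0):
    return np.exp(-theta**2 * (X[:, None] - Y[None, :])**2)

x = np.linspace(0.0, 1.0, 20)             # collocation points
g = np.linspace(0.0, 1.0, 500)            # evaluation grid
A = gauss(x, x) + 1e-12 * np.eye(len(x))  # tiny jitter for conditioning
kg = gauss(g, x)                          # kernel basis k(y) on the grid
P2 = 1.0 - np.einsum('ij,ij->i', kg, np.linalg.solve(A, kg.T).T)
print(P2.max())                           # worst-case squared error indicator
```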
for the stochastic problem , we discuss the convergence of the kernel - based estimators by using probability measures , which means that the deterministic error bound is replaced by the confidence interval . the confidence interval can be computed by the variance function and it is employed to predict the error in the probability sense . the power function and the variance function have the same forms but they carry different mathematical meanings . we can also obtain the analogous stable error estimate for both the deterministic and stochastic problems . because convergence in probability implies convergence in distribution , i.e. , is pointwise convergent to when , where and are the cumulative distribution functions of and , the distributions of can be estimated by the distributions of according to proposition [ p : prob - convergence - linear - ell - spde ] . [ c : prob - convergence - linear - ell - spde ] let and be the same as discussed in proposition [ p : prob - convergence - linear - ell - spde ] . if is a continuous and bounded function defined on , then . in particular , if , then . let be a regular bounded open domain of , and let be a lévy process in the sobolev space for defined on the probability space with the form , where is an orthonormal subset of , is a positive sequence and are the independent scalar lévy processes with triples for all such that ( see ) . when is a wiener process , then can be constructed from countably infinite orthonormal bases and independent standard scalar brownian motions , e.g. , is a wiener process in with mean zero and spatial covariance kernel function given by . suppose that the linear differential operator of order and the linear boundary operators of orders for all , defined as in the equations ( [ eq : diff]-[eq : bound ] ) with the coefficients and , are linearly independent . given an initial condition , we consider a parabolic spde driven by , where and ( the latter mapping $[0,t]\times\partial{\mathcal{d}}\to{\mathbb{r}}$ ) are the deterministic functions for . suppose that the spde is well - posed and that its exact solution belongs to for all such that . we transform the parabolic spde into several elliptic spdes by the implicit euler scheme and solve these elliptic spdes using the kernel - based approximation method as in section [ sec : ell - spde - ker - sol ] . 1 .
repeat ( s2 ) for all .we briefly discuss the convergence of the above algorithm for the spde similar as done in .suppose that the distances of all discretization time steps are equal to , and that the collocation points and the positive definite kernel are chosen to be the same at each time step .according to theorem [ t : gauss - pdk - pb ] , we firstly set up a probability measure on the measurable space for . in the same ways of section [ sec : ell - spde - ker - sol ] we define a new product probability space such that the extensions of the white noise and the solution of the spde preserve the original probability distributions in the product probability space , i.e. , now we consider the global error bound of the estimator defined in the equation in the probability .because of , the it formula provides the local truncation error bound of the implicit euler scheme in the probability , i.e. , for all , where is dependent of the scalar lvy processes .moreover , proposition [ p : prob - convergence - linear - ell - spde ] provides the errors for each local elliptic spde where .denote the global errors combining the both local errors given in the equations and , we have where by induction we can deduce that if then the spectral radius of the matrix satisfies , where is dependent of and ( see ) .thus this indicates that is convergent to in the probability when both and tend to , for all . in this paper, we focus mainly on the step ( s2 ) to solve the elliptic spde same as in .the numerical analysis of deterministic parabolic pdes for the kernel - based approximation method is a delicate and nontrivial question , only recently solved in . we will address this question in the case of many other parabolic spdes in our future research .now we do a numerical test for the two - dimensional parabolic spde driven by the poisson noises with dirichlet boundary conditions a typical case of the spde .let the domain , and the poisson noise have the form where are the independent scalar poisson process with parameter for all .we use this poisson noise to create the parabolic spde where and . according to the same algorithm given in section [ sec : alg - tradition - par - spde ] we can compute the kernel - based solutions of the spde by two kinds of positive definite kernels : compact support kernel induced by -compact support radial basis function and sobolev - spline kernel induced by -matrn function where and the cutoff function when otherwise . as collocation pointswe select halton points in and evenly space points on . using the kernel - based approximation method , we can obtain thousand numerical sample paths for by the algorithm given in section [ sec : alg - tradition - par - spde ] .moreover , we can compute its sample means and sample standard deviations by these estimate sample paths , i.e. , observing figure [ fig : par - spde - num - exa ] , we find that the approximate means and the approximate standard deviations for both kernels are symmetric with the line because the time and space poisson noises are symmetric with and in space . 
compact support kernel , sobolev - spline kernel , in our current numerical experiments , the distribution of collocation points and the shape parameter are chosen empirically and based on the authors experiences .actually , different choices of the shape parameters will affect the convergent rate and the stability of the algorithms .the convergent rate will decrease when the shape parameter becomes large , but the algorithm will be unstable when the shape parameter becomes small .how to select the best shape parameter is still an open problem .we will try to solve this problem using the probability measures of the kernel functions in our future research .in this paper we present how to employ the kernel - based approximation method to estimate the numerical solutions of spdes driven by the lvy noises .we transform the parabolic spde into the elliptic spde by the implicit euler time scheme at each time step .the kernel - based solution of the elliptic spde is a linear combination of the kernel basis with the related differential and boundary operators centered at the chosen collocation points .here we only consider the elliptic spdes with right - hand - side random noises .actually , the kernel - based approximation method can be even applied into solving the spde driven by the random differential and boundary operators as done in .the main idea of this paper is the same as in our recent paper but we give the new contents and extensions to improve the previous theoretical results as follow : * we extend [ theorem 3.1 , ] into theorem [ t : gauss - pdk - pb ] in order that we can apply more general positive definite kernels to set up the kernel - based approximate solutions of spdes instead of the integral - type kernels , and we only need to assume that the exact solutions of spdes belong to the classical sobolev spaces rather than the reproducing kernel hilbert spaces . *we obtain the kernel - based approximate solutions of a system of linear elliptic spdes driven by various kinds of random noises undone before . *we complete the discussions of the error analysis of the kernel - based approximation method for spdes and provide its precise convergent rate in terms of the fill distances ( or possible the time distances ) . comparing the kernel - based approximation method for solving the deterministic and stochastic elliptic pdes similar as in : *the kernel - based solution for the deterministic case is to minimize the reproducing kernel norm for interpolating the data values induced by the pdes , and the kernel - based solution for the stochastic case is to maximize the probability conditioned on the observation values simulated by the spdes . *the estimate error is bounded by the power function for the deterministic case , and the confident interval is computed by the variance function for the stochastic case , and moreover , the formulas of the power function and the variance function are equal when the pde and the spde have the same differential and boundary operators .we also discuss the side - by - side differences of the kernel - based approximation method and the finite element method for the elliptic spde stated as in : * for the kernel - based approximation method we transfer the original spde probability space to the tensor product probability space such that the extension of the noise preserves the same probability distributions . 
for the finite element method ,we approximate the noise by its truncated noise which means that we truncate the original spde probability space to the finite dimensional probability space . *the bases of the kernel - based solution are set up by the positive definite kernels with the differential and boundary operators of the spde and the collocation points , while the the bases of the finite element solution are the finite element polynomials induced by the triangular meshes .* we can simulate the noise at the collocation points by its probability structure to compute the random coefficients of the kernel - based solution , but we can simulate the random part of the finite element solution on the truncated probability space . *the convergent rate of the kernel - based solution is only dependent of the fill distances .however , the convergent rate of the finite element solution depends on the maximum mesh spacing parameter of the triangulation and the truncation dimension of the original probability space . in our future work we will try to solve the open problems of the kernel - based approximation method for spdes :* we will solve the kernel - based estimators of the nonlinear elliptic spdes based on the theoretical results given in this paper .* we will try many other time - stepping schemes to create the kernel - based solutions of the parabolic spdes , and introduce the precise rates of their convergence .* we will design the best choice of the collocation points and the optimal kernel function by maximizing the conditional probability measure dependent of the observation values simulated by the spdes analogous as the maximum likelihood estimation method .in this section , we review some classical materials of linear differential and boundary operators mentioned in .let be a regular bounded open domain of , e.g. , it satisfies the uniform -regularity condition which implies the strong local lipschitz condition and the uniform cone condition .this means that has a regular boundary .we call the the partial derivative of order and denote its degree . the _ test function space _ is chosen to be composed of all functions with compact support in . for any fixed , if there exist a function such that then is said the _ weak derivative _ of .the -based sobolev space of degree is defined by equipped with the inner product because the weak derivative can be seen as a linear bounded operator from when or a linear bounded operator from when according to the boundary trace embedding theorem . in this paperall linear differential and boundary operators are the linear combinations of weak derivatives with uniformly continuous coefficients .we define that the linear differential operator for all has the order and the linear boundary operator for all has the order it is easy to check that and are linear and bounded .[ l : sobolev - diff - bound ] suppose that a sequence and a function in the -norm . if there exist a function such that in the -norm for any fixed , then .this indicates that in the -norm and in the -norm for any differential and boundary operators and of orders and , respectively . fixing , we have which shows that is equal to the weak derivative of . therefore and in the -norm .most of the detail presented in this section can be found in the monograph . 
for the reader's convenience we repeat here what is essential to the kernel-based approximation method.

[d:rkhs] a hilbert space consisting of functions is called a _reproducing kernel hilbert space_ and a kernel function is called a _reproducing kernel_ for if for all and, where is used to denote the inner product of. according to [theorem 10.4,], the reproducing kernel is always _semi-positive definite_. moreover, [theorem 10.10,] guarantees the existence of the reproducing kernel hilbert space with the positive definite kernel. suppose that the symmetric positive definite kernel and is a regular bounded open domain of. since is compact, mercer's theorem shows that there exists a countable set of positive _eigenvalues_ and continuous _eigenfunctions_ such that, and the kernel has the absolutely and uniformly convergent representation. furthermore, is an orthonormal basis of and.

[l:rk-diff-bound] if the symmetric positive definite kernel, then its eigenfunctions. moreover, the convergent representations and are absolute and uniform on and for any differential and boundary operators and of orders and defined as in equations ([eq:diff]-[eq:bound]), respectively. because of, for any. this indicates that the convergent representation is absolute and uniform on for any.

now we show a special class of reproducing kernels whose reproducing kernel hilbert spaces are equivalent to the sobolev spaces. [ex:sobolev-spline] we consider the matérn function of degree and shape parameter, where is the modified bessel function of the second kind of order. according to the theoretical results given in, we can check that the sobolev-spline kernel induced by the matérn function is a positive definite kernel on and its reproducing kernel hilbert space is equivalent to the sobolev space, i.e.,. let be a regular domain of. according to [theorem 6,], the reproducing kernel hilbert space of the sobolev-spline kernel restricted on is endowed with the reproduction norm and it is also equivalent to the sobolev space, i.e.,. the papers also show that many other general kinds of reproducing kernels and their related reproducing kernel hilbert spaces can be introduced via green functions and generalized sobolev spaces.

let be linear differential operators of orders no more than, whose coefficients, and let be linear boundary operators of orders no more than, whose coefficients. denote and. we choose the set of pairwise distinct collocation points from a regular bounded open domain and its boundary, i.e., and. suppose that the symmetric positive definite kernel satisfies the condition related to and denoted as in section [sec:ell-spde-ker-sol]. according to [theorem 16.8,], belongs to the dual space of the reproducing kernel hilbert space, where is a differential or boundary operator of order and is the point evaluation functional at or. the _power function_ induced by the positive definite kernel with the differential and boundary operators at the collocation points is defined by, where the matrix and the vector are denoted in the equations and, respectively. we can observe that the power function is equal to the formula of the variance defined in the equation. [theorem 16.11,] provides that. following the discussion of [section 16.3,], we have, where and. combining [theorems 11.3 and 16.9,], the power functions can also be bounded by the fill distances of the collocation points and, respectively, i.e.
, where and are positive constants independent of. the precise form of the fill distance of is equal to, using the distance defined on the manifold surface. since the boundary is regular, we have. for convenience we do not consider the manifold distance in this article.

in this section we discuss the basic relationship between gaussian fields and reproducing kernel hilbert spaces given in. [d:gaussian] let. a stochastic process is said to be _gaussian_ with mean and covariance kernel on a probability space if, for any pairwise distinct points, the random vector is a multi-normal random variable on with mean and covariance matrix, i.e., where and. suppose that is a reproducing kernel hilbert space and is the borel -algebra on. let be a gaussian field defined on. if the sample paths belong to for almost all, then can be seen as a mapping from into.

[l:gauss-rkhs] suppose that is a gaussian field on a probability space with almost all sample paths in a reproducing kernel hilbert space. then the probability measure given by is well-defined on the measurable space, such that the stochastic process defined by is a gaussian field on the probability space and has the same probability distribution as, i.e., both gaussian fields and have the same mean and covariance kernel. (here the reproducing kernel may be different from the covariance kernel.) lemma [l:gauss-rkhs] shows that we can transfer the original probability space into the new probability space so that the original gaussian field has an invariant element defined on the new probability space.

for the kernel-based collocation methods, we need to simulate the spde noise term, defined on the spde probability space, at the collocation points. for example, if the noise is equal to the product of a poisson random variable with parameter and a deterministic function, i.e.,, then for all, which can be simulated by monte carlo methods (see). furthermore, we can simulate many other kinds of noises by monte carlo methods if the joint probability density function of the random vector is known. if there is a one-to-one function such that, where is the jacobian matrix of the inverse of evaluated at, then we can simulate by independent standard uniform random variables, i.e., for all. the vector function can be computed by, where is the cumulative distribution function of and is the cumulative distribution function of given, etc.

the author would like to express his gratitude to prof. igor cialenco (chicago) and to his advisor, prof. gregory fasshauer (chicago), for their guidance and assistance with this research topic at the illinois institute of technology, chicago. the author would also like to thank the following people for their helpful suggestions and discussions: prof. uday banerjee (syracuse), prof. michael griebel (bonn), prof. klaus ritter (kaiserslautern), prof. ian sloan (sydney) and prof. xu sun (wuhan).

g. e. fasshauer, f. j. hickernell and h. woźniakowski, average case approximation: convergence and tractability of gaussian kernels, _monte carlo and quasi-monte carlo methods 2010_, eds. l. plaskota and h. woźniakowski (springer-verlag, 2012), pp. 329-344.
g. e. fasshauer and q. ye, reproducing kernels of sobolev spaces via a green kernel approach with differential operators and boundary operators, _adv. comput. math._ *38* (2013) 891-921.
g. e. fasshauer and q.
ye, kernel-based collocation methods versus galerkin finite element methods for approximating elliptic stochastic partial differential equations, in _meshfree methods for partial differential equations vi_, eds. m. griebel and m. a. schweitzer (springer-verlag, 2013), pp. 155-170.
g. e. fasshauer and q. ye, a kernel-based collocation method for elliptic partial differential equations with random coefficients, in _monte carlo and quasi-monte carlo methods 2012_, eds. j. dick, f. y. kuo, g. w. peters and i. h. sloan (springer-verlag, 2013), to appear.
f. y. kuo, c. schwab and i. h. sloan, quasi-monte carlo finite element methods for a class of elliptic partial differential equations with random coefficients, _siam j. numer. anal._ *50* (2012) 3351-3374.
in this paper, we improve and complete the theoretical results on the kernel-based approximation (collocation) method for solving high-dimensional stochastic partial differential equations (spdes) given in our previous papers. according to the extended theorems, we can use more general positive definite kernels to construct the kernel-based estimators that approximate the numerical solutions of the spdes. because a parabolic spde driven by lévy noises can be discretized into several elliptic spdes by the implicit euler scheme in time, we mainly focus on how to solve a system of elliptic spdes driven by various kinds of right-hand-side random noises. the kernel-based approximate solution of the elliptic spdes is a linear combination of the positive definite kernel with the differential and boundary operators of the spdes centered at the chosen collocation points, and its random coefficients are obtained by solving a system of random linear equations, whose random parts are simulated by the elliptic spdes. moreover, we introduce the error bounds and confidence intervals of the kernel-based approximate solutions of the elliptic (parabolic) spdes in terms of fill distances (or possibly time distances) in the probability sense. we also give a well-structured algorithm to compute the kernel-based solutions of second-order parabolic spdes driven by time and space poisson noises. the two-dimensional numerical experiments show that the approximate probability distributions of the kernel-based solutions are well-behaved for the sobolev-spline kernels and the compact support kernels.

*keywords:* kernel-based approximation (collocation) method; meshfree approximation method; high dimension; stochastic partial differential equation; positive definite kernel; gaussian field; lévy noise; poisson noise. ams subject classification: 46e22, 65d05, 60g15, 60h15, 65n35
the idea of mobile ad-hoc networks (manets), i.e., spontaneous wireless networks operating without a backbone infrastructure, whose users/nodes relay packets for each other in order to enable multihop communications, continues to inspire practitioners and poses interesting theoretical questions regarding its performance capabilities. vehicular ad-hoc networks (vanets) may well be currently one of the most promising incarnations of manets. promoters of vanets believe that these networks will both increase safety on the road and provide value-added services. numerous challenging problems must however be solved in order to be able to propose these services, e.g. efficient and robust physical layers, reliable and flexible medium access protocols, routing schemes and optimized applications.

an almost ubiquitous stochastic assumption in the theoretical studies of these problems in manets is that the nodes of the network are distributed (at any given time) as points of a planar (2d) poisson point process. in conjunction with the aloha medium access (mac) scheme, simpler but less efficient than the csma usually considered in this context by practitioners, 2d poisson manet models allow for quite explicit evaluation of several performance metrics; cf. section [s.rw]. however, they mostly regard local (one-hop) transmissions. in fact, introducing routing even to the simplest 2d poisson-aloha model is very difficult, and it is hard to obtain rigorous theoretical results regarding the truly multihop performance of manets; cf. again section [s.rw]. one reason for this is that, while the source node can be considered as a typical node in the manet and the powerful palm theory of point processes can be used in the analysis of the first hop, further relay nodes on a given path (traced by the dijkstra algorithm or any reasonable local routing on a 2d poisson manet) cannot be seen as typical nodes of the manet. in fact, the route followed by a packet is a random subset of the manet's point pattern (depending on the routing algorithm), and the typical point ``seen'' by the packet on a long route is not the typical point of the whole manet in the sense of palm theory; cf. the _routing paradox_ in.

in order to overcome this impasse and to propose an analytically tractable multihop model, in this paper we propose to ``decouple'' a tagged route from the remaining (external to the route) part of the manet. more precisely, we consider two stochastically independent point patterns to model, respectively: some given route on which packets of a tagged flow are relayed, and nodes which are only sources of interference for packet transmissions on this route.
moreover, we assume that the tagged route is modeled by a linear (1d) stationary point process. these assumptions allow us to distinguish the typical node of the route from the typical point of the manet, and to appropriately use palm theory to manipulate the two objects. the advantage of the 1d modeling of the tagged route is that packets relayed on it through nearest-neighbor transmissions ``see'', at any relaying node, the typical (1d palm) distribution of the whole route. this is a well-known _point-shift invariance property of the palm distribution in 1d_, which does not have a natural extension in higher dimensions. another advantage of our decoupling of the two parts of the manet is that we can go beyond poisson assumptions in both of them. specifically, we will also consider a _poisson route appended with a lattice structure of relaying nodes_, which turns out to be crucial to improve the routing performance on long routes in the presence of external interference and/or noise. regarding external interference, we are able to study the impact of the clustering of nodes, considering e.g. a _poisson-line manet_ (to be explained in what follows).

before describing our main results, let us further justify our route-in-manet model. firstly, it is quite a natural scenario in vanets, where vehicles are randomly located on a road and subject to external sources of interference. in the context of a general 2d manet, the linearity (1d pattern) of routing is clearly a simplification. in this case we think of packets as being relayed in a given direction, e.g. in a strip as shown on the left sketch in figure [fig.lineproj]. we can approximate the ``real'' route by a ``virtual'' one by taking the orthogonal projections of the real nodes on the line joining the source and the destination. such a situation represents an approximation of geographic routing. last but not least, in this paper we consider a 2d _poisson-line manet_ model, in which all the nodes are randomly located on a (background) poisson process of lines, and form the so-called _doubly-stochastic poisson process_ (or cox process); see on the right in figure [fig.lineproj]. in this case our decoupled, tagged route is rigorously (in the sense of palm theory) the typical route in this poisson-line manet.

we consider slotted aloha mac to be used both by the nodes of the tagged route and by those of the interfering part of the manet, and the signal-to-interference-and-noise ratio (sinr) capture/outage condition, with the power-law path loss model and rayleigh fading. we assume that all the nodes of the manet are backlogged, i.e., they always have packets in their buffers to transmit. moreover, we are interested in the routing performance of a tagged packet relayed by successive nodes on the route with priority in all queues on the route (cf. a discussion of these assumptions in section [s.rw]).

the main results of this paper are the following:
* in the absence of external (to the route) noise and interference, we evaluate the _mean local delay_ on the poisson route, i.e.
the expected number of time slots required for the packet to be transmitted by the typical node of the route to its nearest neighbor in a given direction. the inverse of this delay is intrinsically related to the _speed of the packet progression on asymptotically infinite routes_, which can also be related to the _route transport capacity_ (number of bit-meters pumped per unit length of the route).
* the mean local delay is minimized (equivalently, the progression speed is maximized) at a unique value of the aloha medium access probability. moreover, the routing is unstable, i.e., the delay is infinite (the speed is null), for larger than some critical value. this observation is fully compliant with the phase-transition phenomenon of the mean local delay discovered in the 2d poisson manet model in.
* we evaluate the _mean end-to-end delay_ of the packet routing between two given nodes of the route. it shows a cut-off phenomenon, namely that _routing on distances shorter than some critical value is very inefficient in terms of exploiting the route transport capacity_.
* next, we study the impact of noise and external interference on our previous observations. confirming the theoretical findings of in 2d poisson manets, we observe that routing on a poisson route over long distances is unfeasible: _the speed of packet progression on such routes is null_ due to the existence of large hops in random (poisson) paths. evaluating the end-to-end delay on finite distances with noise, we identify another, double cut-off phenomenon: the existence of _critical values of end-to-end distance and noise power, beyond which the speed of packet progression is close to 0_, again making the routing inefficient in terms of exploiting its transport capacity.
* in order to allow efficient routing over long routes, one can complete the poisson route with a fixed lattice of (equidistant) relaying nodes. for this model, we evaluate the mean local delay and show how the route transport capacity can be maximized by an optimal choice of the inter-relay distance of the lattice structure.
* we explicitly evaluate the poisson-line manet model and compare it to the (basic) poisson-line-in-poisson-field model. we show that the poisson-line model exhibits a larger coverage probability but also a larger end-to-end delay. this confirms previous general observations regarding the impact of clustering in interference.
* regarding applications to vanets, we evaluate delays of some emergency and broadcast communications.

local (one-hop) characteristics in 2d poisson manets, such as the sinr outage probability, the related density of packet progress, the mean local delay and many others, have been extensively studied in the literature; cf. e.g. among others. poisson models also allow one to discover some intrinsic theoretical limitations of manets, regarding e.g. the scaling of the capacity of dense and/or extended networks, or the speed of packet progression on long routes. in, the authors study delay and throughput in a model with multihop transmissions with relays which are placed equidistantly on the source-destination line. the model is very similar to ours, but with the following differences. in the model presented in, the topology of relaying nodes is regular (they are equidistant) and the pattern of interferers is re-sampled at each slot. the delays include both the service times and the waiting times in the buffers on the given route.
a combined tdma/aloha mac protocol, with intra-route tdma and inter-route aloha, is employed. in contrast, we use simple aloha, ignore queueing, and focus on the performance issues caused by the _randomness of the topology of relaying nodes_. we are interested in intrinsic limitations of the performance of _long-distance routing_ in wireless networks with an irregular topology. we believe that the observed limitations remain valid for networks employing csma. this is because they primarily depend on the existence of (arbitrarily) long hops. csma, which copes better with interference, cannot improve upon this situation. in contrast, we show that using a regular structure of relay nodes superposed with irregular manet routes leads to better performance. this solution is also evidently necessary for the stability of queuing processes (not covered in our paper).

_the remaining part of this paper_ is organized as follows. in section [s.linearnearest], we present our nearest neighbor routing model with slotted aloha. in section [s.deterministic] we compute routing delays in deterministic networks. in section [s.endtoend] we study the end-to-end delays on a poisson route when the interference is limited to the interfering nodes on the poisson route. in section [s.noise] we study the impact of external noise and interference. section [s.conclusion] concludes the paper.

let us denote by, with, the locations of nodes participating in the routing of some _tagged flow of packets_ from to. this route is assumed not to change on the time scale considered in this paper. the following assumptions regarding will be considered. although it is not the main scenario of this paper, we begin by considering a deterministic, fixed, finite pattern of nodes. in this scenario we suppose that forms a poisson point process of intensity on the line. in this model the notational convention is such that, and packets are sent by any given node to its nearest-to-the-right neighbor. note that the poisson assumption means that the 1-hop distances are independent (across) exponential random variables with some given mean. we will call this scenario _nearest neighbor (nn) routing on the poisson line_. we will also consider a version of this model where the packet is transmitted to the _nearest (available) receiver (nr)_ to the right on. this is an opportunistic routing allowing for longer hops, as will be explained in section [ss.nnvsnr]. in this model, the tagged route consists of a superposition, where the poisson route is completed with equidistantly located ``fixed'' relay nodes; is some fixed parameter and is a uniform random variable on, independent of, making stationary. in this model we also consider nn routing, i.e., packets are always transmitted to the nearest neighbor to the right in.

we consider as some ``tagged'' route obtained in a manet by some routing mechanism. a simple way of extending to a 2d manet model consists in embedding in an external field of nodes on the plane. when doing so we will always consider that and are independent. the following assumptions regarding will be considered. one may consider a _fixed, deterministic_ pattern, although, once again, it is not the main scenario considered in this paper.
in this model, we assume that is a poisson point process of some given intensity on. the poisson linear route embedded in a poisson process of interferers will be our default _poisson-line-in-poisson-field_ model. we will also consider a case where the interferers are located on a _poisson process of lines_ (roads) on, of rate, representing the total line length per unit of surface. assuming that on each line of this process there is an independent poisson process of points of intensity nodes per unit of line length, we obtain a doubly stochastic poisson point process on with intensity nodes per unit of surface; see. in particular, assuming, one can rigorously consider the poisson linear route embedded in such a poisson line process of interferers as the typical route of the _poisson-line manet_; see figure [fig.lineproj] on the right-hand side.

we assume that _all_ nodes of and try to access the channel (transmit packets) according to the aloha scheme, in perfectly synchronized time slots. each node in each time slot independently tosses a coin with some bias for heads, which will be referred to as the _aloha medium access probability_ (aloha map). nodes whose outcome is heads transmit their packets; the others do not transmit. we denote by the map of nodes in, and by the map of nodes in. the above situation will be modeled by marking the points with random, bernoulli, medium access indicators equal to 1 for the nodes which are allowed to emit in the given slot and 0 for the nodes which are not. we have for all. when there is no ambiguity, we will skip the time index in the notation and also use the notation. similarly, we mark the interfering nodes by independent bernoulli medium access indicators with parameter. at each time slot, aloha splits into two point processes: of emitters (having a mac indicator equal to 1 at time) and of (potential) receivers. it is known that when is a poisson process of intensity, then and are independent poisson processes with intensities and, respectively.

each transmitting node uses the same transmission power, which without loss of generality is assumed to be equal to 1. the signal-power path loss is modeled by the power-law function, where are some constants and is the distance between the transmitter and the receiver. the signal power is also perturbed by a random fading, which is independently sampled for each transmitter-receiver pair at each time slot. thus, the actual signal power received at from at time is equal to. in this paper we will restrict ourselves to an important special case where is _exponentially distributed_, which corresponds to the situation of independent _rayleigh fading_. by renormalization of, if required, we can assume without loss of generality that has mean 1.

when a node located at transmits a signal to a node located at, then successful reception depends on the signal-to-interference-and-noise ratio (sinr), where is the shot-noise process of, representing the interference created by the nodes of the route, and represents an external (to) noise. this external noise can be a constant ambient noise or a random field. in particular, may comprise the interference created by the external field of interferers, which do not belong to the route. in this case:, where is the subset of nodes of transmitting at time.
throughout this paper we adopt the common assumption that the _noise process is independent of the route-interference process_ and that _the process is stationary in_. in this paper we assume a fixed bit-rate coding, i.e., successfully receives the signal from if, where is given by ([sinr]) and is the sinr threshold related to the bit-rate given some particular coding scheme.

recall that in our nn routing model each transmitter in transmits to the nearest node on its right in, without knowing whether it is authorized to transmit at this time. successful reception requires that this selected node is _not_ authorized by aloha to transmit (i.e., it is in), and that the sinr condition ([eq:sinr]) for this transmitter-receiver pair is satisfied. this corresponds to the usual separation of the routing and mac layers. in contrast, nr routing consists in transmitting to the nearest node (in the given direction) on _which, at the given time slot, is not authorized by aloha to transmit_. the nr model hence might be seen as an opportunistic routing, which requires some interplay between mac and the routing layer. as we shall see, both routing schemes allow for quite explicit performance analysis in our basic poisson-line-in-poisson-field model.

the default assumption in our numerical examples throughout the paper is nodes per meter in the poisson line model, i.e. the mean distance between two consecutive nodes on the line is 100 m. we will also use, and. the fading is rayleigh: is exponentially distributed with mean 1,.

unless otherwise specified, in this section we assume that the locations of the nodes in the network are known and fixed (deterministic). the only sources of randomness are the aloha mac and the independent rayleigh fading between any two given nodes. in what follows we present a simple computation which allows us to express coverage and routing delays in such networks. our observations will be useful in the remaining part of this paper, when we study random routes in random manets.

let us consider a transmitter located at communicating with a receiver located at in the presence of a field of interferers, with all nodes subject to aloha mac with map. we assume that. we denote by the _probability of successful transmission from to in one time slot_. this probability accounts for the transmitter being authorized by aloha to transmit, the receiver not being authorized to transmit, and, given these circumstances, the probability of achieving an sinr larger than at the receiver. we define two functions: for reasons which will become clear in what follows, we call the _interference factor_ and the _noise factor_.

[l.pi] we have

[r.pi-recur] let us note that satisfies the following recursion when adding an interferer to the field: in other words, the external noise reduces the probability of successful transmission over the distance by the factor, which in the absence of noise and interference is equal to (because of the aloha scheme). moreover, adding an interfering node to causes a decrease in the successful transmission probability by the factor, where is the distance of the new interferer to the receiver.

we have and
$$= (1-p) + p\,\mathbf{E}\Big[e^{-t F_{(z,y)}\, l(|x-y|)/l(|z-y|)}\Big] = 1-\frac{p}{\frac{1}{t}\,\frac{|z-y|^{\beta}}{|x-y|^{\beta}}+1}\,,$$
where we use the assumption that is an exponential variable. the proof follows by induction.
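lemma [l.pi] and remark [r.pi-recur] translate directly into code. in the sketch below (our own illustration; the constant ambient noise level, all parameter values, and the monte carlo cross-check are assumptions made for this example, not values from the paper), pi_success evaluates the closed-form product over interferers and pi_mc verifies it by simulating the fading and the aloha indicators:

```python
# numeric sketch of the one-slot success probability pi(x, y) in the
# presence of a fixed set of interferers, rayleigh fading and aloha,
# with path loss l(r) = (a r)^beta and a constant noise level w_mean.
import numpy as np

def pi_success(x, y, interferers, p, t, beta, a=1.0, w_mean=0.0):
    r = abs(x - y)
    noise_factor = np.exp(-t * (a * r) ** beta * w_mean)
    interference = np.prod(
        [1.0 - p / ((abs(z - y) / r) ** beta / t + 1.0) for z in interferers]
    )
    return p * (1.0 - p) * noise_factor * interference

def local_delay(x, y, interferers, **kw):
    """mean number of slots until success: geometric with parameter pi."""
    return 1.0 / pi_success(x, y, interferers, **kw)

def pi_mc(x, y, interferers, p, t, beta, a=1.0, w_mean=0.0, n=200_000):
    rng = np.random.default_rng(0)
    z = np.asarray(interferers, dtype=float)
    f_sig = rng.exponential(1.0, n)                  # rayleigh fading, mean 1
    tx = rng.random(n) < p                           # transmitter authorized
    rx_silent = rng.random(n) >= p                   # receiver not authorized
    e = rng.random((n, z.size)) < p                  # interferers' mac indicators
    f_int = rng.exponential(1.0, (n, z.size))
    interf = (e * f_int / (a * np.abs(z - y)) ** beta).sum(axis=1)
    sinr = (f_sig / (a * abs(x - y)) ** beta) / (w_mean + interf + 1e-300)
    return np.mean(tx & rx_silent & (sinr >= t))

pars = dict(p=0.3, t=1.0, beta=4.0)
print(pi_success(0.0, 1.0, [2.5, -1.0], **pars))     # closed form
print(pi_mc(0.0, 1.0, [2.5, -1.0], **pars))          # simulation, should agree
```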
the quantity is the probability that node can successfully send a tagged packet to node in a single transmission. however, a single transmission is not sufficient in aloha. hence, after an unsuccessful transmission, the transmitter will try to retransmit the packet with aloha, possibly several times, until the packet's reception. we denote by the expected number of time slots required to successfully transmit a packet from to (considering the previous scenario of transmitting to in the presence of a field of interferers). under our assumptions this number is a geometric random variable of parameter, and hence its expected value is equal to:. we call the _local (routing) delay_. by lemma [l.pi], a recurrence analogous to that of remark [r.pi-recur] holds for, with the reciprocals of the noise and interference factors.

let us now consider a route consisting of nodes in the field of interfering nodes (we assume that). the average delay to send a packet from to using the route is simply the sum of the local delays on successive hops. note that for the hop from to, the other nodes of the route act as interferers. we call the _(routing) delay_ on. using ([e.l]) we obtain. we also denote by the mean speed of packet progression on the route.

the above expressions allow for a relatively simple, explicit analysis of routing delays in fixed networks, i.e. in networks where the locations of the nodes are fixed and known. in the case when such information is not available, one adopts a stochastic-geometric approach, averaging over possible geometric scenarios regarding and/or. for example, assume that the route is given and fixed, but the number and precise locations of interferers are not given. assuming some statistical hypothesis regarding the distribution of (which becomes a random point process), one can calculate the average route delay. we then have:
$$\mathbf{E}_\Psi[\pi(x,y,\Psi)] = p(1-p)\, w(|x-y|)\, \mathcal{L}_\Psi(h_{x,y})\,,$$
$$\mathbf{E}_\Psi[L(x,y,\Psi)] = \frac{1}{p(1-p)}\, w^{-1}(|x-y|)\, \mathcal{L}_\Psi(-h_{x,y}) \qquad \text{(e.epsil)}$$
and
$$\mathbf{E}_\Psi[L(\mathcal{R},\Psi)] = \frac{1}{p(1-p)} \sum_{k=0}^{n-1} \mathcal{L}_\Psi(-h_{x_k,x_{k+1}}) \prod_{z \in \mathcal{R}\setminus\{x_k,x_{k+1}\}} h^{-1}(|z-x_{k+1}|,|x_k-x_{k+1}|)\, w^{-1}(|x_k-x_{k+1}|)\,.$$
in the next section we will consider a scenario where the route is also modeled as a point process.

in this section we assume a poisson route and no external interferers (). we also assume that the external noise process is negligible with respect to the interference created by the nodes participating in the routing. in this scenario we will consider the nn and nr routing schemes; cf. [ss.pl]. we begin with a simple calculation of the capture probability. we consider a typical node on the route, that is, of the poisson point process. by the slivnyak theorem, it can be seen as an ``extra'' node located at the origin, with the other nodes of the route distributed according to the original stationary poisson process. all marks (bernoulli mac indicators, fading, etc.) of this extra node are independent of the marks of the points of. we denote by and the probability of successful transmission of the typical node in a given time slot to the receiver prescribed by the nn and the nr routing schemes, respectively. in the capture probability and in the two routing schemes considered, we assume that the typical node is authorized to transmit. obviously and.
for arbitrary, denote and; moreover, for given and, let us denote:

[p.pnn] the probability of successful transmission by a typical node of the poisson route, authorized by aloha to transmit to its relay node, in the nn routing model without noise is equal to: and similarly in the nr receiver model. it is easy to see that, and hence, i.e., the opportunistic choice of the receiver pays off regarding the probability of successful transmission.

note directly from the form of the sinr in ([sinr]) that and do not depend on. hence in the remaining part of the proof we take. we consider first the nn model. the scenario with the typical user located at the origin corresponds to the palm distribution of the poisson route. by a known property of the poisson point process, the distance from to its nearest neighbor to the right,, has an exponential distribution with parameter. moreover, given, all other nodes of the poisson route form a poisson point process of intensity on. conditioning on, we can thus use ([e.epsipi]) with (recall that is a poisson process of intensity of nodes transmitting in a given time slot) to obtain: using the well-known formula for the laplace transform of the poisson p.p. (see e.g. (16.4) in) we obtain: inserting this expression in ([e.pnn-cond]) we obtain, which boils down to the right-hand side of ([e.pnnd]). in order to obtain expression ([e.pnrd]) for the nr model, we follow the same process, with the following modifications. the distance to the nearest _receiver_ to the right has the exponential distribution of parameter. moreover, the distribution of the point process of emitters is poisson with parameter and independent of the location of this receiver. note also that, by the very choice of the receiver, it is not authorized to emit in the given time slot, hence there is no factor in the numerator. this completes the proof.

let us denote by the _local delay_ at the tagged node in the nn routing scheme, i.e. the number of consecutive time slots needed for the tagged node to successfully transmit a given packet to the receiver designated by the given routing scheme. in what follows we will give the expressions for the expected local delay.

figure [fig.speed1] shows the mean long-distance speed for, and. we observe that there is an optimal value of which maximizes the long-distance speed of packet progression, and that this speed drops to 0 for larger than some critical value. [rem.critp] the existence of the critical value of can be attributed to hops, traversed by a tagged packet on the infinite route, being statistically too long. indeed, analysing the expression in ([e.pnn1]), one sees that the ``rate'' at which hops of length occur is equal to, while the rate at which the packet passes them is. thus, when, the packet delay becomes infinite.

some manet models (in particular the bipolar one) use the _mean density of progress_ to optimize the performance of the model with respect to (cf. e.g.). recall that in a 2d manet it is defined as the expected total progress of all the successful transmissions per unit of surface. in our 1d model, by campbell's formula, it can be expressed as:. the density of progress can also be seen as quantifying the number of bit- (or packet-) meters ``pumped'' per unit length of a route.

[p.dnn] under the assumptions of proposition [p.pnn], the density of progress in the nn receiver model is equal to: and is maximized for equal to: moreover,.
using campbell's formula, the density of progress in the nn model can be expressed as, with the notation as in the proof of proposition [p.pnn]. following the same arguments as in this latter proof, we obtain:, which is equal to the right-hand side of ([e.dnnd]). taking the derivative of the latter expression in, we find that its sign is equal to that of the polynomial. note that,, and. hence in the interval, has a unique root: which maximizes. the explicit expression for this root follows from the general formulas for the roots of cubic equations.

note that the density of progress is a ``static'' quantity, calculated with respect to one slot. it can also be easily evaluated for the nr model, in contrast to the mean local delay. however, it fails to discover the existence of the critical value of for the performance of the manet revealed by the analysis of the local delay and the packet progression speed. also, one may be tempted to approximate this speed by. let us note, however, in figure [fig.speed1], that the optimization of this quantity in cannot be directly related to the maximization of the packet progression speed. for this reason we will not consider the density of progress any further in this paper.

in this section we will study the issues of delays and the speed of packet progression on finite segments of routes. in this regard, we assume that in addition to the node located at the origin (also called), a second node, being the final destination of a given packet, is located at a fixed position on the line. mathematically, this corresponds to the palm distribution of the poisson pp given these two fixed points. we want to compute the mean end-to-end delay for a given packet to leave the node and reach node following nn routing. the situation proposed is shown in figure [fig.delays]. we denote this end-to-end delay by.

[p.delaysuv] under the assumptions of proposition [p.pnn], the mean end-to-end delay in the nn routing on the distance is equal to
$$\begin{aligned}
\mathbf{E}^{0m}[D] & = \frac{1}{p(1-p)} \bigg( e^{-\lambda m} E(m) \tag{A}\\
& \quad + \int_0^{m} \lambda e^{-\lambda r}\, E(r)\, G_m(0,r)\, \mathrm{d}r \tag{B}\\
& \quad + \lambda \int_0^{m} \int_0^{m-s} E(r)\, G_0(s,r)\, G_m(s,r)\, \lambda e^{-\lambda r}\, \mathrm{d}r\, \mathrm{d}s \tag{C}\\
& \quad + \lambda \int_0^{m} E(m-s)\, G_0(s,m-s)\, e^{-\lambda(m-s)}\, \mathrm{d}s \bigg) \tag{D}
\end{aligned}$$
with, and.

the mean sum of the delays from the source node 0 to the destination node under can be expressed using campbell's formula as
$$\mathbf{E}^{0m}[D] = \mathbf{E}^{0m}[d(0)] + \lambda \int_{0}^{m} \mathbf{E}^{0ms}[d(s)]\, \mathrm{d}s\,,$$
where the first term corresponds to the average exit time from node; it differs from the expression in ([e.el]) because there is a fixed node at which acts as an additional interferer but which also limits the hop length. moreover, for the transmission from the current node, the node at 0 also acts as an additional interferer. we will show how the terms ([a])-([d]) of ([e0m]) reflect these circumstances. let us first remark from ([e.el]) that the function is the expected exit time from a typical node, given its receiver is located at the distance and given no additional (non-poisson) interfering nodes. now, it is easy to conclude that the term ([a]) gives the expected end-to-end delay when it is necessary to make the direct hop from 0 to (when there is no poisson
relay node between them). the term ([b]) gives the expected exit time from 0 to its nearest receiver when it is located at some:
$$\frac{1}{p(1-p)} \int_0^{m} \lambda e^{-\lambda r}\, e^{\lambda p t\, \mathcal{D}_1(p)}\, h(m-r,r)^{-1}\, \mathrm{d}r\,,$$
where the factor (not present in ([e.el])) is due to the fact that acts as an additional interferer for the transmission from 0 to. similarly, the term ([d]) corresponds to the direct hop from the running node at to (when there is no poisson relay node between them), and the term ([c]) corresponds to the delay of going from to its nearest receiver. the node at 0 interferes with both these transmissions, which is reflected by the factors and in the terms ([d]) and ([c]), respectively. the node at also interferes with the transmission from to, whence in ([c]).

[p.speeduv] the average speed of the packet progression on the distance is equal to $m\,\bigl(\mathbf{E}^{0m}[D]\bigr)^{-1}$, where $\mathbf{E}^{0m}[D]$ is given by proposition [p.delaysuv].

[r.flight] in figure [fig.varm] we present the speed of the packet progression on the distance, assuming and, for varying from to. the calculation is performed for the value of that maximizes the asymptotic long-distance speed calculated in proposition [p.ldspeed], presented as the horizontal line in figure [fig.varm]. the existence of the destination node in a finite horizon has a double impact on the packet progression. on the one hand, it ``attracts'' the packet, reducing the negative impact of long hops (cf. remark [rem.critp]). indeed, no hop can be longer than the direct hop to. on the other hand, it ``repels'' the packet, because it creates an additional interference, which is more significant when the packet is close to the destination. figure [fig.varm] shows two phases related to these two ``actions''. one can interpret these observations by saying that _routing on distances shorter than some critical values () is very inefficient in terms of exploiting the route transport capacity_.

(figure: speed of packet progression with respect to in the nn model, and the long-distance speed.)

in the previous section we assumed that the external noise is negligible (). in this section we study the impact of a non-null external noise field. recall that we assume that the noise field is independent of the route process and that it is stationary in; i.e., is equal in distribution to. we denote by the laplace transform of, with. it is easy to extend the results of propositions [p.pnn] and [p.dnn] to the case of an arbitrary noise field.

[p.pnnw] the probability of successful transmission by a typical node of the poisson route, authorized by aloha to transmit to its relay node, in the nn routing model with noise field is equal to. the proof is straightforward and follows the same lines as the proof of proposition [p.pnn].

the following results show a very negative impact of an arbitrarily small noise on the performance of nn routing on long routes. [p.emergencyw] the mean local delay in the poisson nn routing model with a noise field satisfying for some is infinite.

when the typical node and its nearest node to the right both belong to (which occurs with probability), we find the first integral in the first line of ([el]). we still assume that the typical node is in, but now we also assume that the closest node (towards the right) is in.
computing the average delay in this case, we find the second double integral of the first line of ([el]); is actually the density of the right-hand neighbor's position. the contribution corresponds to the fact that the node in located at is not taken into account by, since the summation in is for. the second part of the contribution accounts for. there is no correction needed for, since the typical node is in and does not contribute to the delay, as it is in. the last term of the second line in ([el]) corresponds to the case where node and its right-hand neighbor are in. this occurs with probability. the node in located at does not contribute to the delay, since we analyze the delay from and its right-hand neighbor in (thus located at). thus must be corrected by.

in figure [fig.grw], we plot the mean long-distance speed with respect to, with fixed noise of levels and. comparing figure [fig.varmw] to figure [fig.grw], one can observe that the optimal inter-relay distance of the fixed structure corresponds to the smallest length of the route on which the mean speed of packet progression attains its maximum without the fixed infrastructure. in other words, the relay stations should ``break down'' long routes into segments of the length optimal for the end-to-end routing given the noise level.

we have studied the performance of end-to-end routing in manets, using a linear nearest-neighbor routing model embedded in an independent planar field of interfering nodes using aloha mac. we have developed numerically tractable expressions for several performance metrics, such as the end-to-end delay and the speed of packet progression. they show how the network performance can be optimized by tuning the aloha and routing parameters. in particular, we show a need for a well-tuned lattice structure of wireless relaying nodes (not back-boned), which helps to relay packets on long random routes in the presence of a non-negligible noise. our analysis does not take into account packet queuing at the relay nodes, which we believe can be introduced into our model in future work.
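the phase transition of the mean local delay discussed in this paper is easy to reproduce numerically. the following monte carlo sketch is our own illustration (the sinr threshold, window, and sample sizes are arbitrary choices, not values from the paper): it averages the conditional delay 1/pi, computed with the closed-form product of lemma [l.pi] (no noise), over poisson configurations of the route as seen from the typical node, and shows the blow-up of the average for large aloha p.

```python
# monte carlo estimate of the mean local delay on the poisson route (nn model):
# typical node at the origin, receiver = nearest poisson point to the right,
# all other route nodes act as aloha interferers.
import numpy as np

def mean_local_delay_nn(lam, p, t, beta, window=5000.0, n_samples=4000, seed=1):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        right = np.cumsum(rng.exponential(1.0 / lam, 200))   # points to the right
        left = -np.cumsum(rng.exponential(1.0 / lam, 200))   # points to the left
        y = right[0]                                         # nn receiver
        interferers = np.concatenate([right[1:], left])
        interferers = interferers[np.abs(interferers - y) < window]
        ratio = (np.abs(interferers - y) / y) ** beta
        pi = p * (1.0 - p) * np.prod(1.0 - p / (ratio / t + 1.0))
        total += 1.0 / pi                                    # conditional delay
    return total / n_samples

# lam = 0.01 nodes per meter matches the paper's default (mean gap 100 m);
# t and beta below are illustrative.
for p in (0.05, 0.15, 0.3, 0.5):
    print(p, mean_local_delay_nn(lam=0.01, p=p, t=10.0, beta=4.0))
```

for small p the estimate stabilizes quickly, while for p above the critical value the running average keeps growing with the number of samples, reflecting the infinite mean delay.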
in our basic model, we study a stationary poisson pattern of nodes on a line, embedded in an independent planar poisson field of interfering nodes. assuming slotted aloha and the signal-to-interference-and-noise ratio capture condition, with the usual power-law path loss model and rayleigh fading, we explicitly evaluate several local and end-to-end performance characteristics related to nearest-neighbor packet relaying on this line. we study how these metrics depend on the density of relaying nodes and interferers, on the tuning of aloha, and on the external noise level. we consider natural applications of these results in a _vehicular ad-hoc network_, where vehicles are randomly located on a straight road. we also propose to use this model to study a ``typical'' route traced in a (general) planar ad-hoc network by some routing mechanism. such a decoupling of a given route from the rest of the network in particular allows us to quantitatively evaluate the inefficiency of long-distance routing in ``pure ad-hoc'' networks, previously observed in, and the need for a well-tuned structure of ``fixed'' relaying nodes. we consider several extensions of our basic poisson-line-in-poisson-field model, notably a _poisson-line ad-hoc network_, in which all nodes (including the interfering ones) are randomly located on a poisson process of lines (routes). in this case our analysis rigorously (in the sense of palm theory) corresponds to the typical route of this network.

manet, vanet, aloha, sinr, mac, routing, end-to-end delay, packet speed, poisson, poisson-line process, doubly-stochastic poisson process, lattice.
k.f. is supported by kakenhi no. 16h02211, presto, jst, crest, jst and erato, jst. m.h. is partially supported by the fund for the promotion of joint international research (fostering joint international research) no.. the centre for quantum technologies is funded by the singapore ministry of education and the national research foundation as part of the research centres of excellence programme. he is also grateful to dr. michal hajdusek for helpful comments.

the error detection on the black vacuum qubits (edges of the primal cubic lattice) is executed as follows. if there is no error on the graph state, the outcomes of the -basis measurements satisfy the condition:, where indicates an addition modulo two over all black qubits adjacent to the vertex. depending on a given error () on the graph state, we can obtain the error syndrome at the vertices belonging to the defect region. from the error syndrome, the most likely location of the errors is estimated. here we employ minimum distance decoding, which can be done by finding a minimum path connecting pairs of vertices of with the minimum-weight-perfect-matching (mwpm) algorithm. let be the estimated error location, where indicates the number of 1s in a bit string. if a chain of edges specified by has a nontrivial cycle in the sense of the relative homology, the error correction fails. at the defect region far from the singular qubits, a nontrivial cycle has at least length, which is the characteristic length of the defect determined from the required accuracy of quantum computation. let be the size of the quantum computation that alice wants to do fault-tolerantly. to guarantee the accuracy of the output, it is enough to choose the distance. therefore, the number of qubits of the graph state is. now we can define the correctable set of errors as follows: an error location belongs to the correctable set of errors iff there exists a connected component of length in the chain of edges specified by. the error detection and the definition of the correctable error set on the white vacuum qubits are done in the same way, but on the dual lattice. from the test, we know the error location. since the mwpm algorithm works in polynomial time in the number of vertices with, we can decide whether or not belongs to the correctable error set. the same argument also holds for the error location on the white vacuum qubits tested by. therefore, we can efficiently check whether or not the errors on a given resource belong to.

here, for simplicity, we do not employ magic state distillation but encode each logical qubit into the reed-muller 15-qubit code. then we perform a fault-tolerant logical -basis measurement by transversal physical -basis measurements on the singular qubits, as done in ref.. thereby, alice can fix her strategy of quantum computation, which makes it easy to define the correctable set of errors for the test. let and be the number of concatenation levels and the number of logical -basis measurements, respectively. then we need physical -basis measurements on the singular qubits. note that is enough to reduce the logical error sufficiently. in the following, the error on the graph state is specified by, converting it into operators on the graph state. the logical -basis measurement is done by physical transversal -basis measurements, by encoding each qubit into the concatenated reed-muller 15-qubit code. this is also the case for all pauli-basis measurements.
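the decoding step just described can be prototyped in a few lines. the sketch below is a toy of ours: it uses manhattan distances on the lattice, ignores matching of syndrome vertices to lattice boundaries, and relies on networkx's generic matching routine rather than an optimized mwpm implementation.

```python
# toy minimum-distance decoder: pair up syndrome vertices with minimum
# total path length using a minimum-weight perfect matching.
import itertools
import networkx as nx

def decode_syndrome(syndrome_vertices):
    """syndrome_vertices: list of 3d integer coordinates with odd parity checks."""
    g = nx.Graph()
    for u, v in itertools.combinations(range(len(syndrome_vertices)), 2):
        a, b = syndrome_vertices[u], syndrome_vertices[v]
        dist = sum(abs(ai - bi) for ai, bi in zip(a, b))  # manhattan distance
        g.add_edge(u, v, weight=dist)
    matching = nx.min_weight_matching(g)                  # pairs of syndrome vertices
    return [(syndrome_vertices[u], syndrome_vertices[v]) for u, v in matching]

# two nearby syndrome pairs -> two short estimated error chains
print(decode_syndrome([(0, 0, 0), (1, 0, 0), (5, 5, 2), (5, 6, 2)]))
```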
in the vacuum region near the singular qubits, we have a logical error of length smaller than, as shown in fig. [fig2], since they are not topologically protected. a correctable error for the fault-tolerant logical -basis measurement is defined for a given error recursively as follows: at the physical level, which we call level-, if or becomes a logical error for a singular qubit, the level- (singular) qubit is labeled as faulty. at concatenation level, if the level- logical qubit, consisting of 15 level- logical qubits encoded in the reed-muller 15-qubit code, has two or more faulty level- logical qubits, the level- logical qubit is labeled as faulty. at the highest level, if no level- logical qubit is faulty, the given error belongs to the correctable set.

(figure: the actual error chain and the estimated one are denoted by solid and dotted lines, respectively; the vertices (error syndrome) of are denoted by red squares. the 3d lattice (two spatial dimensions and one time-like dimension) is depicted as if it were two-dimensional (one dimension for both the spatial and the time-like axes).)

let us first consider the pass probability of the test for topological protection. the error is rejected if contains a connected component of length at least. such a probability is calculated to be. therefore, if is sufficiently smaller than a constant value, the rejection probability is exponentially suppressed. next we consider the test for the logical -basis measurement. let be the probability that a level-0 (singular) qubit is faulty. it is evaluated in a similar way to the previous case of topological protection, but we have to count logical errors consisting of chains of length lower than:, where is the number of chains of length that contribute to the logical error of length. is counted rigorously in ref. up to, which indicates that we can reduce by decreasing sufficiently. the probability of obtaining a faulty level- qubit is given recursively by. then we obtain that the probability of obtaining no faulty level- logical qubit at the highest level is given by. since, it is sufficient to choose and, which are independent of, the number of samples of the graph state. therefore, in the large limit for a given, we can reduce the logical error probability polynomially, and hence amplify the acceptance probability arbitrarily close to 1.
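the recursive labeling above translates directly into a recursion for the failure probability. a short numeric sketch of ours (the level-0 failure rate q0 below is an illustrative value, not one from the paper): a level-(l+1) logical qubit of the reed-muller 15-qubit code is faulty when two or more of its fifteen level-l blocks are faulty, so the failure probability contracts quadratically once it is below threshold.

```python
# recursion for the concatenated reed-muller 15-qubit code failure probability
from math import comb

def level_up(q):
    # probability that 2 or more of the 15 sub-blocks are faulty
    return sum(comb(15, k) * q**k * (1 - q)**(15 - k) for k in range(2, 16))

q = 1e-3  # illustrative level-0 failure probability
for level in range(4):
    print(level, q)
    q = level_up(q)   # roughly 105 * q**2 for small q
```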
quantum systems, in general, output data that cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. this also implies, unfortunately, that verification of the output of quantum systems is not so trivial, since predicting the output is exponentially hard. as another problem, quantum systems are very sensitive to noise and thus need error correction. here we propose a framework for the verification of the output of fault-tolerant quantum computation in the measurement-based model. in contrast to existing analyses of fault-tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested, using only single-qubit measurements, to verify whether the output of measurement-based quantum computation on it is correct or not. the overhead for verification, including classical processing, is linear in the size of the quantum computation. since a full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way of practical verification of experimental quantum error correction. moreover, the proposed verification scheme is also compatible with measurement-only blind quantum computation, where a client can accept the delegated quantum computation even when a quantum server deviates, as long as the output is correct.

_introduction._ quantum computation provides a new paradigm of information processing, offering both fast and secure information processing, which could not be realized in classical computation. recently, a lot of experimental effort has been devoted to realizing quantum computation. there, fault-tolerant quantum computation with quantum error correction is inevitable to obtain a quantum advantage using noisy quantum devices. due to the recent rapid progress of experimental quantum error correction techniques, there is an increasing demand for an efficient way of analyzing the performance of fault-tolerant quantum computation. in particular, in the majority of performance analyses of fault-tolerant quantum computation, a specific noise model, such as independent and identical pauli error operations or some specific correlation models, is assumed a priori. however, in actual experiments more general noise occurs, including general trace preserving completely positive (tp-cp) maps with various correlations between qubits. further, to guarantee the correctness of the output of quantum computation, we need to care about all cases, including unexpected types of errors, i.e., we should not assume any specific error model. also, in our scenario, the full tomographic approach would not work efficiently for increasingly many qubits. unfortunately, existing fault-tolerant quantum computations have not been equipped with an efficient verification scheme yet. the aim of this paper is to develop a fault-tolerant quantum computation equipped with a verification scheme, without assuming the underlying noise model. as properties of verifiable fault-tolerance, we require the following two conditions. one is _detectability_, which means that if the error of a quantum computer is not correctable, such a faulty output of the quantum computation is detected with high probability. the other is _acceptability_, which means that an appropriately constructed quantum computer can pass the verification with high probability.
in other words, under a realistic noise model, the test accepts the quantum computation with high probability. both properties are important to characterize the performance of a test in statistical hypothesis testing. in this paper, we develop verifiable fault-tolerance in measurement-based quantum computation (mbqc), which satisfies both detectability and acceptability. we take a rather different approach to fault-tolerance than the conventional one. we do not assume any underlying noise model, but define a correctable set of errors on a resource state of mbqc and test whether the error on a given resource state belongs to such a set or not. to this end, we employ the stabilizer test proposed in ref., where an efficient verification of mbqc can be carried out by testing the graph state. however, this method is not fault-tolerant, lacking acceptability; any small amount of noise on the graph state causes rejection, regardless of whether or not it is correctable. although the paper extended the stabilizer test to self-testing of the measurement basis, it still has the same problem. therefore, we crucially extend the stabilizer test to a noisy situation, so that we can decide whether the given resource states belong to a set of fault-tolerant resource states or not. under the condition of a successful pass of the test, the accuracy of fault-tolerant mbqc is guaranteed to be arbitrarily high (i.e., the contraposition of detectability). the total resource required for the verification is linear in the size of the quantum computation. as a concrete example, we explicitly define a set of correctable errors on the resource state for topologically protected mbqc, where we can show acceptability by calculating the acceptance probability concretely under a realistic noise model.

note that, in contrast to detectability, the requirement of acceptability is unique to the verification of fault-tolerant quantum computation. indeed, when we can expect no error, as in the previous case, we do not need fault-tolerance. so, we could correctly judge the no-error case with probability, i.e., acceptability of the test is trivially satisfied, because the stabilizer test would be passed in the no-error case. on the other hand, in the verification of fault-tolerant quantum computation consisting of many elementary parts, each of which cannot be checked directly, we have to judge carefully whether the output of the computation is correct or not under an expected error model, which imposes the second requirement, acceptability. that is, to discuss acceptability, we first fix our verification method and an expected error model. then, we calculate the acceptance probability, which corresponds to the power of the test in statistical hypothesis testing. we also discuss an application of verifiable fault-tolerance to the verification of blind quantum computation under a quantum server's deviation or quantum channel noise.

_a general setup for fault-tolerant mbqc._ let us consider a generic scenario of fault-tolerant mbqc on a two-colorable graph state composed of the black system and the white system, which consist of and qubits, respectively. then, we have two kinds of operators,, on, where. when we restrict them to the black system (the white system), we denote and by and (and). by using the binary-valued adjacency matrix (i.e., element is iff vertices and are connected) corresponding to the graph, the graph state is characterized as for and.
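the stabilizer characterization above is easy to check numerically for a small graph. the sketch below is our own illustration on a 4-vertex path graph (black = {0, 2}, white = {1, 3}): it builds the graph state by applying controlled-Z along the edges of |+>^4 and verifies K_v |G> = X_v (prod over neighbors w of Z_w) |G> = |G> for every vertex v.

```python
# build a small graph state and verify its stabilizer relations
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3)]
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def op_on(qubit, single):
    """embed a single-qubit operator on the given qubit of the n-qubit space."""
    mats = [single if k == qubit else np.eye(2) for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# |+>^n, then controlled-Z on every edge (phase flip where both bits are 1)
state = np.ones(2 ** n) / 2 ** (n / 2)
for a, b in edges:
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[a] and bits[b]:
            state[idx] *= -1

for v in range(n):
    stab = op_on(v, X)
    for a, b in edges:
        if v in (a, b):
            w = b if v == a else a
            stab = stab @ op_on(w, Z)
    print(v, np.allclose(stab @ state, state))   # each line prints True
```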
This relation shows that any error in the -basis can be converted to an error in the -basis. The total space is then spanned by . Suppose we execute a fault-tolerant MBQC computation on the two-colorable graph state . A set of correctable errors on the two-colorable graph state is then defined such that an ideal state and an erroneous one result in the same computational outcome under error correction. Such a set of errors can be specified as a subset of . The projection onto the subspace is written as . We assume that the subset is written as using two subsets and . _Test for verification of fault-tolerance._ Similarly to Ref. , we employ the following sampling protocol to verify whether the error is correctable. Our protocol runs as follows: * Honest Bob generates , where is an -qubit graph state on a bipartite graph , whose vertices are divided into two disjoint sets and . (See Fig. [fig1](a).) Bob sends each qubit of it one by one to Alice. Evil Bob can generate any -qubit state instead of . * Alice divides the blocks of qubits into three groups by random choice. The first group consists of blocks of qubits each. The second group consists of blocks of qubits each. The third group consists of a single block of qubits. * Alice uses the third group for her computation. The other blocks are used for the test, which is explained below. * If Alice passes the test, she accepts the result of the computation performed on the third group. For each block of the first and second groups, Alice performs the following test: * For each block of the first group, Alice measures the qubits of in the basis and the qubits of in the basis. She then obtains and . If , the test is passed. * For each block of the second group, Alice measures the qubits of in the basis and the qubits of in the basis. She then obtains and . If , the test is passed. _Detectability and acceptability._ To show detectability, taking unexpected errors into account, we obtain the following theorem in the same way as : [l1] Assume that . If the test is passed, then, with significance level , we can guarantee that the resultant state of the third group satisfies . (Note that the significance level is the maximum passing probability when Bob erroneously generates incorrect states such that the resultant state does not satisfy . That is, expresses the minimum probability of detecting such incorrect states.) The previous study considers the case with , , and proves this special case by discussing the two kinds of binary events or and or . Replacing these two events by the two kinds of events or and or in the proof given in , we can show Theorem 1 in the present general form. From the theorem and the relation between the fidelity and the trace norm ((6.106) in ), we can conclude verifiability: if Alice passes the test, she can guarantee that for any POVM , with significance level . That is, the property of fault-tolerant quantum computation guarantees that the probability that the obtained computation outcome differs from the true computation outcome is less than . If we take , for example, this error probability is if , and therefore verifiability is satisfied. Note that the lower bound of the significance level is tight, since if Bob generates copies of the correct state and a single copy of a wrong state, he can fool Alice with probability , which corresponds to .
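To make the tightness claim concrete, the following sketch estimates by Monte Carlo the probability that a single corrupted block escapes testing. The block counts (k blocks per test group plus one computation block, 2k+1 in total) are assumptions for illustration, since the exact counts are elided above, and we assume a corrupted block fails its stabilizer test with certainty whenever it is tested, so Bob succeeds only if the bad block is routed to the computation slot.

```python
import random

def fool_probability_mc(k, trials=200_000):
    """Monte Carlo estimate of the probability that Bob fools Alice by
    corrupting a single block out of 2k+1 (hypothetical block counts:
    k per test group plus one computation block)."""
    fooled = 0
    for _ in range(trials):
        blocks = [0] * (2 * k) + [1]   # 1 marks the corrupted block
        random.shuffle(blocks)          # Alice's random grouping
        if blocks[-1] == 1:             # last slot plays the computation block
            fooled += 1
    return fooled / trials

for k in (5, 20, 50):
    print(k, fool_probability_mc(k), 1 / (2 * k + 1))  # estimate vs. exact
```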
Note that the above theorem on detectability holds without any assumption on the underlying noise. Note also that noise in the measurements can be treated as noise on the resource state, provided it does not depend on the measurement bases. Even if that is not the case, we can add noise such that the amounts of noise are the same for all measurement bases. Next, we consider acceptability. To address the success probability under a realistic noise model, we assume a specific application of a Pauli channel on as the expected error model. That is, the error is given as the distribution on the set of -basis errors and -basis errors . We then denote the marginal distribution with respect to the pair of -basis errors on and -basis errors on ( -basis errors on and -basis errors on ) by ( ). Hence, the probability that Alice passes the test ( ) in one round is ( ). Since we apply them over rounds, the probability of passing is . Hence, when the probabilities and are close to , Alice accepts the correct computation result on the third group with high probability. _Verifiable fault-tolerance for topologically protected MBQC._ To show acceptability, we explain below how to define a correctable set of errors on a graph state. Then, as a concrete example, we calculate the acceptance probability under a realistic noise model. In the theory of fault-tolerant quantum computation, it is conventional to translate fault-tolerance in the circuit model into fault-tolerance in the measurement-based model as follows. In the circuit model, we can define a set of correctable (sparse) fault paths such that the output of the quantum computation is not damaged even if an error occurs on such a fault path. Then, translating the correctable (sparse) fault paths of the circuit model into the measurement-based model, we can define a correctable set of errors on the graph state in general. For example, the schemes in Refs. and can be viewed as measurement-based versions of circuit-based fault-tolerant schemes using the concatenated Steane 7-qubit code and the surface code with the concatenated Reed-Muller 15-qubit code, respectively. [Figure 2 caption: singular qubits are located in between two defect regions and are measured in the -basis for a transversal logical -basis measurement; other regions are vacuum, where qubits are measured in the -basis to obtain the error syndrome.] Let us examine a concrete example using topologically protected MBQC, which has recently been employed as a standard framework for fault-tolerant MBQC. Here we focus on the original scheme proposed in Ref. , where the surface code and the concatenated Reed-Muller code are employed to perform two-qubit Clifford gates and single-qubit non-Pauli-basis measurements, respectively. In the following we briefly sketch how the correctable sets are defined; a detailed description is given in Appendix [app1]. Specifically, we characterize the correctable sets of errors and . The errors specified by the set , which correspond to the basis (the Pauli- operator) on black qubits and the basis (the Pauli- operator) on white qubits, are detected on the primal cubic lattice consisting of the edges on which the black qubits are located, as shown in Fig. [fig1](b). The error configuration can then be associated with a set of edges on the primal cubic lattice.
Similarly, the errors in the set are detected on the dual cubic lattice, and the error configuration is associated with a set of edges on the dual cubic lattice. In the following, all arguments apply equally to black (primal lattice) and white (dual lattice) qubits. Depending on the quantum computation that Alice wants to perform fault-tolerantly, a measurement pattern is determined. Specifically, in analogy with topological quantum computation, the sets of qubits measured in the , , and -bases are called defect, vacuum, and singular qubits, respectively. As shown in Fig. [fig2], the defect qubits form tubes, which represent logical degrees of freedom; at each time slice they correspond to the surface code with defects. By braiding the defects, a two-qubit Clifford gate can be performed. For the surface code, minimum-distance decoding can be done by finding a shortest path connecting the boundary of the error chain on the cubic lattice. If the minimum-distance decoding results in a logical operator of weight (distance) larger than the code distance, by wrapping around a defect or by connecting two different defects, then such an error is uncorrectable (see Appendix [app1] for details). Accordingly, we can define for as the complement of these. The code distance is chosen to be , with the size of the quantum computation that Alice wants to perform fault-tolerantly. Therefore, the number of qubits of the graph state is given by . Around the singular qubits, we may still have a logical error of weight lower than , as shown in Fig. [fig2]. Such a logical error is corrected by using another code, the concatenated Reed-Muller code. To this end, the fault-tolerant Clifford gates based on the surface code are further employed to encode the logical qubits into concatenated Reed-Muller codes, on which all Pauli-basis and -basis measurements can be implemented transversally. The corresponding physical -basis measurements, i.e., measurements on the singular qubits, are depicted by red circles in Fig. [fig2]. We can then define the correctable set of errors for the concatenated Reed-Muller code recursively for , as done in Ref. (see Appendix [app2] for details). Since we employ two types of error-correcting codes, as seen above, the correctable set of errors is defined as the intersection of the correctable sets and for the surface code and the concatenated Reed-Muller code, respectively, for both colors. Since both the minimum-distance decoding for the surface code and the recursive decoding for the concatenated code can be done efficiently, we can efficiently decide whether a given error pattern ( ) is in ( ) or not. _Acceptance probability under a typical error model._ To calculate the acceptance probability, we assume for simplicity that the errors ( ) are distributed independently and identically on each qubit with probability . It is straightforward to generalize the following argument to any local CPTP noise as long as the noise strength, measured in the diamond norm, is sufficiently smaller than a certain threshold value. The standard self-avoiding-walk counting argument for the surface code then tells us that for . Clearly, if is sufficiently smaller than a certain constant value, then converges to 1 as . By considering recursive decoding of the concatenated code, we obtain ^m for , where is the probability of a logical error of weight lower than , which occurs around the singular qubits.
Such a logical error probability is also calculated as a function of the physical error probability by counting the number of self-avoiding walks, as shown in Appendix [app3], Eq. ([saw]). The integers and are the number of logical -basis measurements and the number of concatenation levels, respectively. Again, by counting the number of self-avoiding walks we can evaluate . By choosing smaller than a certain constant value, becomes sufficiently small that converges to 1 as . Since for , the probability also converges to 1 exponentially in the large- limit, provided the physical error probability is smaller than a certain constant threshold value (see Appendix [app3] for the detailed calculation). Since can be chosen independently of , the acceptance probability converges to 1. _Verifiable blind quantum computation._ A promising application of the proposed framework is the verification of measurement-only blind quantum computation. Suppose a quantum server generates two-colorable graph states and sends them to a client, who executes universal quantum computation using only single-qubit measurements and employs the proposed verification. First, our protocol is a one-way quantum communication from Bob to Alice, and therefore the blindness is guaranteed by the no-signaling principle, as in the protocol of Ref. ; this contrasts with verifiable blind quantum computation of the BFK (Broadbent-Fitzsimons-Kashefi) type. According to Theorem [l1] (detectability), conditional on acceptance, the accuracy of the output is guaranteed. In contrast to earlier verifiable blind quantum computation, by virtue of acceptability the proposed verification scheme can accept the delegated quantum computation even under the quantum server's deviation or quantum channel noise, as long as these are correctable. In this way, we can verify that the quantum server is honest enough to yield a correct output, using only single-qubit measurements. It would be interesting to apply the proposed framework to quantum interactive proof systems.
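The self-avoiding-walk counting argument invoked above can be made numerically concrete. The sketch below evaluates a crude version of the bound, assuming that the number of self-avoiding walks of length l on the cubic lattice is at most 6 * 5**(l-1) per starting site and that a decoding failure requires a chain of length at least roughly d/2; the volume prefactor is ignored, so the numbers only illustrate the exponential decay in the code distance d, not the actual constants.

```python
import numpy as np

def logical_error_bound(p, d, lmax=200):
    """Crude counting bound (volume factor ignored):
    p_fail <= sum_{l >= ceil(d/2)} 6 * 5**(l-1) * p**l,
    using that each step of a self-avoiding walk on the cubic lattice
    has at most 5 continuations after the first (coordination number 6)."""
    ls = np.arange(int(np.ceil(d / 2)), lmax)
    return np.sum(6.0 * 5.0 ** (ls - 1) * p ** ls)

for d in (5, 9, 13, 17):
    print(d, logical_error_bound(0.01, d))  # decays exponentially in d once 5p < 1
```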
let denote the random matrix , where are independent copies of a given complex valued random variable with mean zero and unit variance : =0\quad\text{and}\quad{\mathbb{e}}\big[|{\mathbf x}|^2\big]=1.\ ] ] let denote the spectral radius of : the well known circular law states that , in probability , the empirical distribution of the eigenvalues of weakly converges to the uniform law on the unit disc of the complex plane .in particular , it follows that with high probability for any and large enough . here and belowwe say that a sequence of events holds with high probability if their probabilities converge to one .the corresponding upper bound on has been established by bai and yin under a finite fourth moment assumption : if <\infty ] then , in probability , , as .we refer to and references therein for related estimates and more background and applications concerning the spectral radius of a random matrix .surprisingly , there seems to be little or no discussion at all in the literature even in the recent works and about the necessity of the fourth moment assumption for the behavior .we propose the following conjecture , which is illustrated by figure [ fig1 ] .[ conju ] the convergence in probability holds under the sole assumptions .another way to put this is to say that there are no outliers in the circular law .this phenomenon reveals a striking contrast between eigenvalues and singular values of , the latter exhibiting poisson distributed outliers in absence of a fourth moment , see for instance .a tentative heuristic explanation of this phenomenon may proceed as follows .suppose has a heavy tail of index , that is , as . if , then with high probability in the matrix there are elements with , for any .any such element is sufficient to produce a singular value diverging as fast as . on the other hand , to create a large eigenvalue , a single large entry is not sufficient .roughly speaking one rather needs at least one sequence of indices with with a large product , i.e. one cycle with a large weight if we view the matrix as an adjacency matrix of an oriented and weighted graph .it is not difficult to see that the sparse matrix consisting of all entries with is acyclic with high probability , as long as .somewhat similar phenomena should be expected for heavy tails with index . as shown in , in that case the circular lawmust be replaced by a new limiting law in the complex plane .more precisely , the empirical distribution of the eigenvalues of tends weakly as to a rotationally invariant light tailed law , while the empirical distribution of the singular values of tends weakly as to a heavy tailed law . by the above reasoning, no significant outliers should appear in the spectrum .the precise analogue of in this case is however less obvious since the support of is unbounded . 
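Before turning to the heavier-tailed regime in more detail, the contrast between eigenvalues and singular values described above is easy to observe numerically. The sketch below uses Student-t entries with 3 degrees of freedom (symmetric, rescaled to unit variance, with a finite moment of order 2 + epsilon but an infinite fourth moment); the spectral radius of the rescaled matrix stays near 1, while the largest singular value is visibly inflated by the heavy tail. The distribution and size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# symmetric entries, unit variance, infinite fourth moment (t with 3 dof)
X = rng.standard_t(df=3, size=(n, n)) / np.sqrt(3.0)
X /= np.sqrt(n)

rho = np.abs(np.linalg.eigvals(X)).max()   # spectral radius
smax = np.linalg.norm(X, 2)                # largest singular value
print(rho, smax)   # typically rho is close to 1, while smax is noticeably larger
```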
from the tail of , one might expect that the spectral radius is of order while typical eigenvalues are of order .in this paper we prove that the conjectured behavior holds if is symmetric and has a finite moment of order for an arbitrary .we say that is symmetric if the law of coincides with the law of .[ main * ] suppose that is symmetric and that =1 ] for some .then , in probability , in view of , to prove the theorem one only needs to establish the upper bound with high probability , for every .we shall prove the following stronger non - asymptotic estimate , covering variables whose law may depend on .[ main ] for any and , there exists a constant such that for any , for any symmetric complex random variable with {\leqslant}1 ] , we have the rest of this note is concerned with the proof of theorem [ main ] .we finish this introduction with a brief overview of the main arguments involved .the proof of theorem [ main ] combines the classical method of moments with a novel cycle weight truncation technique . for lightness of notation, we write instead of .the starting point is a standard general bound on in terms of the trace of a product of powers of and .let denote the operator norm of , that is the maximal eigenvalue of , which is also the largest singular value of .recall the weyl inequality . for any integer has it follows that for any integer , setting , {i , j}[(x^*)^{k-1}]_{j , i}.\ ] ] expanding the summands in one obtains where the internal sum ranges over all paths and of length from to , the weight of a path is defined by and denotes the complex conjugate of .so far we have not used any specific form of the matrix entries . as a warm up , it may be instructive to analyze the following simple special case .assume that has the distribution where ] , where is the number of edges in without counting multiplicities .let denote the closed path obtained as follows : start at , follow , then add the edge , then follow , then end with the edge again .thus , is an even closed path of length .notice that { \leqslant}q^{-{\varepsilon}}{\mathbb{e}}[w(p)].\ ] ] since the map is injective we have obtained {\leqslant}q^{-{\varepsilon}}\sum_{p}{\mathbb{e}}[w(p)],\ ] ] where the sum ranges over all even closed paths of length .observe that {\leqslant}q^{-(1-{\varepsilon})k}q^{\ell},\ ] ] where is the number of distinct vertices in .therefore , letting denote the number of even closed paths of length with vertices , is bounded above by combinatorial estimates to be derived below , see lemma [ paths and graphs ] and lemma [ graphs counting ] , imply that .putting all together we have found {\leqslant}k^2 n^{k}\sum_{\ell=1}^k a(k , n , q)^{k-\ell}\ ] ] where .we choose .suppose that .then and therefore if is large enough .it follows that {\leqslant}k^3 n^k ] .this concludes the proof of in the special case of the model . the given argument displays , albeit in a strongly simplified form , some of the main features of the proof of theorem [ main ] : the role of symmetry , the role of combinatorics , and the fact that cycles with too high weights have to be ruled out with a separate probabilistic estimate . 
the latter point requires a much more careful handling in the general case .since it represents the main technical novelty of this work , let us briefly illustrate the main idea here .consider the collection of all possible oriented cycles with edges of the form with , and with no repeated vertex except for .let denote the uniform distribution over the set .given the matrix , we look at the weight corresponding to the cycle repeated times , where is defined in . since one can restrict to even closed paths , and each such path can be decomposed into cycles that are repeated an even number of times , it is crucial to estimate the empirical averages = \frac1{|{\mathcal{c}}_m|}\sum_{c\sim{\mathcal{c}}_m}|w(c)|^{2t},\ ] ] where the sum runs over all cycles with edges and denotes the total number of them . broadly speaking , we will define an event by requiring that {\leqslant}k^2\,,\quad \text{and } \quad\nu_m[|w(c)|^{2+{\varepsilon}}]{\leqslant}k^2 b^m,\ ] ] for all , where as before .the assumptions of theorem [ main ] ensure that has large probability by a first moment argument .thus , in computing the expected values of we may now condition on the event .actually , on the event we will be able to estimate deterministically the quantities ] denotes the set . a directed graph , or simply digraph , on ] is the set of vertices and \times[n] ] .the path is _ closed _ if the first and the last vertex coincide .each path naturally generates a multi digraph , where and contains the edge with multiplicity if and only if the path contains exactly times the adjacent pair .notice that in general there is more than one path generating the same multi digraph .if the path is closed , then is strongly connected , that is for any one can travel from to by following edges from . a closed path without repeated vertices except for the first and last verticesis called a _ cycle_. a loop considered a cycle of length .a multi digraph will be called a _ double cycle _ if it is obtained by repeating two times a given cycle . in particular ,a double cycle is not allowed to have loops unless its vertex set consists of just one vertex .we say that is an _ even path _ if it is closed and every adjacent pair is repeated in an even number of times .a multi digraph is called an _ even digraph _ if it is generated by an even path ; see figure [ fig : fig1 ] for an example .thus , an even digraph is always strongly connected .the following lemma can be proved by adapting the classical theorems of euler and veblen .[ veblen ] for a strongly connected multi digraph , the following are equivalent : 1 . is an even digraph ; 2 . is even for every vertex ; 3 . can be partitioned into a collection of double cycles .two multi digraphs and are called _ isomorphic _ if there is a bijection such that if and only if and the multiplicities of the corresponding edges coincide .the associated equivalence classes are regarded as unlabeled multi digraphs .given an unlabeled multi digraph , we will write for any multi digraph belonging to the class . an edge - rooted multi digraph , or simply a _ rooted digraph _ , is defined as a multi digraph with a distinguished directed edge . the definition of equivalence classes is extended to rooted digraphs as follows .two rooted digraphs and are called isomorphic if there is a bijection such that if and only if , multiplicities of corresponding edges coincide , and . 
with minor abuse of notationwe will use the same terminology as above , and write for rooted digraphs belonging to the equivalence class .we turn to an estimate on the number of paths generating a given even digraph .let be an even digraph with edges .unless otherwise specified , multiplicities are always included in the edge count . by lemma [ veblen ] every vertex has even in- and out - degrees satisfying thus has at most vertices .moreover , since the number of edges in is , we have [ paths and graphs ] let be an even digraph with and . the number of paths generating does not exceed there are possibilities for the starting points of the path .the path is then characterized by the order in which neighboring vertices are visited . at each vertex , there are visits , and at most out - neighbors .if , there is only one possible choice for the next neighbor . if , then there are at most possible choices considering all visits to the vertex .hence , the number of paths generating is bounded by where we have used that the product of factorials does not exceed the factorial of the sum .now , let be the number of vertices such that . from , we have estimating the sum in from below by one has hence, using in one finds for integers , let be the set of rooted even digraphs with ] is given the random weight , where are independent copies of a random variable satisfying the assumptions of theorem [ main ] .the weight of an even digraph , is defined as where each edge has multiplicity .note that in this formula we interpret `` '' without taking into account the multiplicity in the multiset .given an unlabeled even graph , consider the equivalence class of even digraphs .we are interested in estimating for moreover , we define we refer to as the _ statistics _ of the unlabeled even digraph .we extend the above definitions to rooted even digraphs as follows .the weight of a rooted even digraph is defined by note that is well defined even if since the root edge satisfies and thus .if is an unlabeled rooted even digraph , that is an equivalence class of rooted even digraphs , then and are defined as in and , provided is replaced by in that expression .estimates for the statistics will be derived from a basic estimate for double cycles .let be the unlabeled double cycle with edges .similarly , will denote the unlabeled rooted double cycle with edges . 
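As a sanity check on the double-cycle estimates that follow, one can sample cycle weights directly. In the sketch below, which uses standard normal entries as a stand-in for the (unspecified) entry distribution, the weight p(c) of a double cycle with m edges is the squared product of the traversed entries, and its empirical mean is close to E[x^2]^m = 1, in line with the first moment bound stated next.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))   # unit-variance entries (illustrative choice)

def double_cycle_weight(m):
    """p(c) for a uniformly sampled cycle i1 -> ... -> im -> i1 with
    distinct vertices: the squared product of the traversed entries."""
    idx = rng.choice(n, size=m, replace=False)
    w = 1.0
    for a, b in zip(idx, np.roll(idx, -1)):
        w *= A[a, b]
    return w ** 2

m, trials = 6, 50_000
print(np.mean([double_cycle_weight(m) for _ in range(trials)]))  # close to 1
```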
from the assumptions of theorem [ main ] , for any double cycle we have {\leqslant}1\,,\quad \mathbb{e}[p(c)^{1+\varepsilon/2}]{\leqslant}b^m.\ ] ] note that the same bounds apply for any rooted double cycle , with the weights replaced by .[ cycle stats ] for any , and , define the event where for any one has take any .the first inequality in yields taking the expectation , implies on the other hand , by symmetry any satisfies .\ ] ] hence , from markov s inequality and a union bound over , one has for all .next , as in one shows that then and imply = \sum\limits_{h=0}^\infty2^{h+h\varepsilon/2}{\mathbb{p}}(p(c){\geqslant}2^h ) { \leqslant}2\ , { \mathbb{e}}\left [ p(c)^{1+{\varepsilon}/2}\right ] { \leqslant}2b^m.\ ] ] therefore , from markov s inequality and a union bound over , finally , we observe that the same argument leading to can be repeated for rooted cycles , with no modifications .it follows that from - and the union bound over , it follows that in the remainder of this section , on the event , we will deterministically upper bound the statistics of any unlabeled rooted even digraph ; see proposition [ exp prop ] below .the proof will use the following induction statement .[ induction ] fix integers .let be an unlabeled rooted even digraph with at most vertices and assume that can be decomposed as for some unlabeled rooted even digraph and a double cycle of length having common vertices with .suppose that holds. then 1 . ; 2 . if , then .fix an even rooted digraph and denote by and , respectively , the double cycle with edges and the even rooted digraph isomorphic to so that .further , let be a uniform random permutation of ] is uniformly distributed on the set .hence we may write ){\geqslant}2^h),\;\;h=0,1,\dots \end{gathered}\ ] ] where denotes the probability w.r.t . the random permutation . for any , using this and )=p_r(\pi[g])\,p(\pi[c]) ] for some that agrees with on .since has free vertices ( those which do not fall into ) , and we can pick them among available vertices , the cardinality of is at least where we use that the total number of vertices satisfies .since the number of double cycles of length is , we can write for any : ){\geqslant}\tau\,|\,r ) & = \frac{|\{c'\sim(c;r):\,p(c'){\geqslant}\tau\}|}{|\{c'\sim(c;r)\}|}\\ & { \leqslant}(n - k)^{r - m}|\{c'\sim{\mathcal{c}}_m:\,p(c'){\geqslant}\tau\}| \\&{\leqslant}(n - k)^{r - m}n^m{\mathbb{p}}_\pi(p(\pi[c]){\geqslant}\tau ) { \leqslant}en^r{\mathbb{p}}_\pi(p(\pi[c]){\geqslant}\tau),\end{aligned}\ ] ] where we use to bound . sincethe above estimate is uniform over the realization , for any we have ){\geqslant}2^{h-\ell};\;p(\pi[c]){\geqslant}2^{\ell-1}\right)\\ & \qquad { \leqslant}{\mathbb{p}}_\pi\left ( p_r(\pi[g]){\geqslant}2^{h-\ell}\right)\,\sup\limits_r\,{\mathbb{p}}_\pi\left(\pi[c]{\geqslant}2^{\ell -1}\,|\,r\right ) \\ & \qquad { \leqslant}en^r{\mathbb{p}}_\pi\left(p_r(\pi[g]){\geqslant}2^{h-\ell}\right){\mathbb{p}}_\pi\left(p(\pi[c]){\geqslant}2^{\ell-1}\right ) .\end{aligned}\ ] ] using the definition of and the identity applied to and we obtain , for all : ){\geqslant}2^{h-\ell};\;p(\pi[c]){\geqslant}2^{\ell-1}\right ) { \leqslant}en^r2^{1-h}{\mathcal{s}}({\mathcal{u}}){\mathcal{s}}_{\ell-1}({\mathcal{c}}_m ) .\end{aligned}\ ] ] from one has ){\geqslant}2^h)&{\leqslant}en^r2^{1-h}{\mathcal{s}}({\mathcal{u}})\sum\limits_{\ell=0}^{h-1}{\mathcal{s}}_{\ell}({\mathcal{c}}_m)+2^{-h}{\mathcal{s}}_h({\mathcal{c}}_m ) + 2^{-h}{\mathcal{s}}({\mathcal{u } } ) . 
\end{aligned}\ ] ] since , on the event of lemma [ cycle stats ] one can estimate ){\geqslant}2^h){\leqslant}2en^r{\mathcal{s}}({\mathcal{u}})\sum\limits_{\ell=0}^{\infty } { \mathcal{s}}_{\ell}({\mathcal{c}}_m ) + { \mathcal{s}}({\mathcal{u } } ) { \leqslant}3ek^2n^r{\mathcal{s}}({\mathcal{u}}).\ ] ] taking the supremum over , the above relation proves the first assertion of the lemma .let us prove the second assertion . on the event of lemma [ cycle stats] , for any , fix .if , then estimating as in for all , we obtain ){\geqslant}2^{h-\ell};\;p(\pi[c]){\geqslant}2^{\ell-1}\right ) { \leqslant}2^{-h+1}ek^2{\mathcal{s}}({\mathcal{u}})n^{r(1-\varepsilon/8)}.\ ] ] on the other hand , using ){\geqslant}2^{h-\ell}\right){\leqslant}2^{-h+\ell}{\mathcal{s}}({\mathcal{u}}) ] , has edges ; 3 . for , has common vertices with .define the rooted even digraphs , .let denote the associated equivalence classes .let be the set of indices such that since for any , , using we see that since is a rooted double cycle with at most edges , and we are assuming the validity of the event , by lemma [ cycle stats ] we have .moreover , by lemma [ induction ] , one has where we used the assumption . next , observe that thus , combining the above estimates one has where .note that implying that .the proof is complete .let denote the event that for all \times [ n] ] shows that .thus , if we define , where is the event from lemma [ cycle stats ] , then we are going to choose eventually .therefore , thanks to , to prove the theorem it will be sufficient to prove the conditional statement to prove this , we estimate the conditional moments $ ] . from the expansion in onehas { \leqslant}\sum_{i , j}\sum_{p_1,p_2:i\mapsto j } { \mathbb{e}}[w(p_1)\bar w(p_2)\mid{\mathcal{e}}_k]\,,\ ] ] where the internal sum ranges over all paths and of length from to , the weight of a path is defined by , and denotes the complex conjugate of .notice that since on the event , all expected values appearing above are well defined . by the symmetry assumptionwe can replace the variables by where are symmetric i.i.d .random variables , independent from the .conditioning on the entries are no longer independent .however , since is measurable with respect to the absolute values , the signs are still symmetric and i.i.d . after conditioning on .it follows that =0,\ ] ] whenever there is an edge with odd multiplicity in .thus , in we may restrict to such that each edge in has even multiplicity .let denote the closed path obtained as follows : start at , follow , then add the edge , then follow , then end with the edge again .thus , is an even closed path of length .note that according to our definition , if is the rooted even digraph generated by the path , with root at the edge , then since the map is injective , allows us to estimate { \leqslant}\sum_{p}{\mathbb{e}}\left[p_r(g_p)\mid { \mathcal{e}}_k\right],\ ] ] where the sum ranges over all even closed paths of length and is defined as the rooted even digraph generated by the path , with root at the edge . by lemma [ paths and graphs ] , the sum in can be further estimated by ,\ ] ] where we used , and denotes the set of all rooted even digraphs with edges and vertices .below we estimate deterministically on the set .using the second inequality in one has , for any : since on the event all entries satisfy , it follows that . 
therefore the above sum can be truncated at let be a given equivalence class of rooted even digraphs with vertices and edges .summing over all , and recalling , from proposition [ exp prop ] , on the event we can then estimate where .summing over all equivalence classes of rooted even digraphs with vertices with edges , on the event one obtains going back to , using , and lemma [ graphs counting ] to estimate , one finds { \leqslant}3hk^4n^{k}\bigl(3ek^2\bigr)^{\frac{4k\log b } { \varepsilon\log n } } \sum_{x=1}^k ( 4k)^{6(k - x)}n^{-\varepsilon y_x/16}\ ] ] fix . if , then and therefore provided that is sufficiently large .it follows that from , for large enough and , one has {\leqslant}n^k ( \logn)^{c\log n } , \ ] ] where is a constant depending only on .the proof of is concluded by using markov s inequality : for any , \\ & { \leqslant}(1+\delta)^{-2k+2}n ( \log n)^{c\log n}. \ ] ] since , for fixed , the expression above is for any .this ends the proof of theorem [ main ] .bordenave and m. capitaine .outlier eigenvalues for deformed i.i.d random matrices . to appear in _ comm .pure appl .( 2016 ) preprint available at http://arxiv.org/abs/1403.6001[arxiv:1403.6001 ]
Consider a square matrix with independent and identically distributed entries of zero mean and unit variance. It is well known that if the entries have a finite fourth moment, then, in high dimension, with high probability, the spectral radius is close to the square root of the dimension. We conjecture that this holds true under the sole assumption of zero mean and unit variance; in other words, that there are no outliers in the circular law. In this work we establish the conjecture in the case of symmetrically distributed entries with a finite moment of order larger than two. The proof uses the method of moments combined with a novel truncation technique for cycle weights that may be of independent interest.
many statistical data sets involve covariates that are error - contaminated versions of their true unobserved counterpart .however , the measurement error often does not fit the classical error structure with independent from .a common occurrence is , in fact , the opposite situation , in which with independent from , a situation often referred to as berkson measurement error [ , , ] .a typical example is an epidemiological study in which an individual s true exposure to some contaminant is not observed , but instead , what is available is the average concentration of this contaminant in the region where the individual lives .the individual - specific randomly fluctuate around the region average , resulting in berkson errors .existing approaches to handle data with berkson measurement error [ e.g. , , ] unfortunately require the distribution of the measurement error to be known , or to be estimated via validation data , which can be costly , difficult or impossible to collect .( in classical measurement error problems , the distribution of the error can be identified from repeated measurements via a kotlarski - type equality [ , ] .however , such results do not yet exist for berkson - type measurement error . ) a popular approach to relax the assumption of a fully known distribution of the measurement error is to allow for some adjustable parameters in the distributions of the variables and their relationships , and solve for the parameter values that best reproduce various conditional moments of the observed variables , under the assumption that this solution is unique .this approach has been used , in particular , for polynomial specifications [ ] and , more recently , for a very wide range of parametric models [ wang ( ) ] .the present paper goes beyond this and provides a formal identification result and a general nonparametric regression method that is consistent in the presence of berkson errors , without requiring the distribution of the measurement error to be known a priori . instead , the method relies on the availability of a so - called instrumental variable [ e.g. , see chapter 6 in ] to recover the relationship of interest .for instance , in the epidemiological study of the effect of particulate matter pollution on respiratory health we consider in this paper , suitable instruments could include ( i ) individual - level measurement of contaminant levels that can even be biased and error - contaminated or ( ii ) incidence rates of diseases other than the one of interest that are known to be affected by the contaminant in question .our estimation method essentially proceeds by representing each of the unknown functions in the model by a truncated series ( or a flexible functional form ) and by numerically solving for the parameter values that best fits the observable data .although such an approach is easy to suggest and implement , it is a challenging task to formally establish that such a method is guaranteed to work in general .first , there is no guarantee that the solution ( i.e. , parameter values that best match the distribution of the observable data ) is unique .second , estimation in the presence of a number of unknown parameters going to infinity with sample size is fraught with convergence questions .can the postulated series represent the solution asymptotically ?is the parameter space too large to obtain consistency ? 
is the noise associated with estimating an increasing number of parameters kept under control ?our solution to these problems is two - fold .first , we target the most difficult obstacle by formally establishing identification conditions under which the regression function and the distribution of all the unobserved variables of the model are uniquely determined by the distribution of the observable variables .a second important aspect of our solution to the berkson measurement error problem is to exploit the extensive and well - developed literature on nonparametric sieve estimation [ e.g. , , , ] to formally address the potential convergence issues that arise when nonparametric unknowns are represented via truncated series with a number of terms that increases with sample size .these theoretical findings are supported by a simulation study and the usefulness of the method is illustrated with an epidemiological application to the effect of particulate matter pollution on respiratory health .we consider a regression model of the general form where the function is the ( unknown ) relationship of interest between , the observed outcome variable and , the _ unobserved _ true regressor , while is a disturbance .information regarding is only available in the form of an observable proxy contaminated by an error .equation ( [ eqz ] ) assumes the availability of an instrument , related to via an unknown function and a disturbance .our goal is to estimate the function in ( [ eqy ] ) nonparametrically and without assuming that the distribution of the measurement error is known .[ as by - products , we will also obtain and the joint distribution of all the unobserved variables . ] to this effect , we require the following assumptions , which are very common in the literature focusing on nonlinear models with measurement error [ e.g. , , , , , , ] . [condindep]the random variables , , , are mutually independent .note that assumption [ condindep ] implies the commonly - made `` surrogate assumption '' , as can be seen by the following sequence of equalities between conditional densities : .[ condloc]the random variables , , are centered ( i.e. , the model s restrictions preclude replacing by for some nonzero constant , and similarly for and ; this includes either zero mean , zero mode or zero median , e.g. ) .as our approach relies on the availability of an instrument to achieve identification , it is instructive to provide practical examples of suitable instruments in common settings .although the use of instrumental variables has historically been more prevalent in the econometrics measurement error literature [ ] , instruments are gathering increasing interest in the statistics literature , especially in the context of measurement error problems [ see chapter 6 entitled `` instrumental variables '' in and the numerous references therein ] . notethat instrument equation ( [ eqz ] ) is entirely analogous to ( [ eqy ] ) , the equation generating the main dependent variable .hence , the instrument is nothing but another observable `` effect '' caused by via a general nonlinear relationship .let us consider a few examples , which were inspired by some of the case studies found in , and .epidemiological studies . 
in these studies ,the dependent variable is typically a measure of the severity of a disease or condition , while the true regressor is someone s true but unobserved exposure to some contaminant .the average concentration of this contaminant in the region where the individual lives is , however , observed .the error on is berkson - type because individual - specific typically randomly fluctuate around the region average . in this setup ,multiple plausible instruments are available : a measurement of contaminant concentration in the individual s house ( these would be error - contaminated by classical errors , since the concentration at a given time randomly fluctuates around the time - averaged concentration which would be relevant for the impact on health ) . thanks to the flexibility introduced by the function in ( [ eqz ] ) , these measurements can even be biased . they can therefore be made with a inexpensive method ( that can be noisy and not even well - calibrated ) , making it practical to use at the individual level . hence , it is possible to combine ( i ) accurate , but expensive , region averages that are not individual - specific ( ) and ( ii ) inexpensive , inaccurate individual - specific measurements ( ) to obtain consistent estimates .another plausible instrument could be a measure of the severity of another disease or condition that is _ known _ to be caused by the contaminant .the fact that it is _ caused by _ the contaminant , introduces an error structure which is consistent with equation ( [ eqz ] ) .other measurable effects due to the contaminant ( e.g. , the results of saliva or urine tests for the presence of contaminants ) could also serve as instruments .clearly these measurements are not units of exposure , but the function can account for this . experimental studies .researchers may wish to study how an effect ( e.g. , the production of some chemical ) is related to some imposed external conditions ( e.g. , oven or reactor temperature ) , but the true conditions experienced by the sample of interest may deviate randomly from the imposed conditions ( e.g. , temperature may not be completely uniform ) . in this case ,an instrument could be ( i ) another `` effect '' ( e.g. , the amount of another chemical ) that is known to be caused by or ( ii ) a measurement of that is specific to the sample of interest but that may be very noisy or even biased ( e.g. , it could be an easier - to - take temperature measurement after the experiment is completed and the sample has partly cooled down ) .self - reported data . have argued that individuals reporting data ( e.g. , their food intake , or exercise habits ) are sometimes aware of the uncertainty in their estimates of and , as a result , try to report an average over all plausible estimates consistent with the information available to them , thus leading to berkson - type errors , because the individuals try to make their prediction error independent from their report . in this setting , an instrument could be another observable outcome variable that is also related to .we now formally state conditions under which the berkson measurement error model can be identified with the help of an instrument .let , , and denote the supports of the distributions of the random variables , , and , respectively .we consider and to be jointly continuously distributed ( with , , and with ) .accordingly , we assume the following .[ conddens]the random variables admit a bounded joint density with respect to the lebesgue measure on . 
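A quick simulation illustrates why Berkson errors matter in nonlinear models, and hence why an instrument is worth the trouble. The numbers below are illustrative choices, not taken from the paper: with g(x*) = x*^2, the conditional mean of y given the proxy x is E[y|x] = x^2 + Var(dx*), so a naive fit on the proxy recovers the curvature but is shifted by the error variance; for more general g the shape itself is distorted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(0.0, 1.0, n)               # observed proxy (e.g. a region average)
xstar = x + rng.normal(0.0, 0.5, n)       # Berkson: true regressor = proxy + error
y = xstar ** 2 + rng.normal(0.0, 0.1, n)  # true relationship g(x*) = x*^2

coef = np.polyfit(x, y, 2)                # naive quadratic fit of y on the proxy
print(coef)   # roughly [1, 0, 0.25]: the intercept absorbs Var(dx*) = 0.25
```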
all marginal and conditional densities are also defined and bounded .we use the notation and to denote the density of the random variable and the density of conditional on , respectively .lower case letters denote specific values of the corresponding upper case random variables .next , as in many treatments of errors - in - variables models [ , , , , schennach ( ) ] , we require various characteristic functions to be nonvanishing .we also place regularity constraints on the two regression functions of the model .[ condinv]for all , \neq0 ] ( where ) . [ condnodup] and are one - to - one ( but not necessarily onto ) .[ condcont] is continuous .assumption [ condnodup ] is somewhat restrictive when has a dimension larger or equal to the ones of ( or ) .fortunately , it is often possible to eliminate this problem by re - defining ( and ) to be a vector containing auxiliary variables in addition to the outcome of interest , in order to allow for enough variation in ( and ) to satisfy assumption [ condnodup ] .each of these additional variables need not be part of the relationship of interest per se , but does need to be affected by is some way . in that sense , such auxiliary variables would also be a type of `` instrument . ''our main identification result can then be stated as follows .( note that the theorem also holds upon conditioning on an observed variable , so that additional , correctly measured , regressors can be straightforwardly included . )[ thid]under assumptions [ condindep][condcont ] , given the true observed conditional density , the solution to the functional equation for all , , is unique ( up to differences on sets of null probability measure ) .a similar uniqueness result holds for the solution to \\[-8pt ] & & \qquad = f_{x } ( x ) \int f_{\delta z } \bigl ( z - h \bigl ( x^{\ast } \bigr ) \bigr ) f_{\delta y } \bigl ( y - g \bigl ( x^{\ast } \bigr ) \bigr ) f_{\delta x^{\ast } } \bigl ( x^{\ast}-x \bigr ) \,dx^{\ast}.\nonumber\end{aligned}\ ] ] establishing this result demands techniques radically different from existing treatment of berkson error models , such as the spectral decomposition of linear operators [ see for a review ] , which are emerging as powerful alternatives to the ubiquitous deconvolution techniques that are typically applied in classical measurement error problems .the proof can be found in the and can be outlined as follows .assumption [ condindep ] lets us obtain the following integral equation relating the joint densities of the observable variables to the joint densities of the unobservable variables : from which equation ( [ eqfyxz ] ) follows directly .uniqueness of the solution is then shown as follows .equation ( [ eqpreindep ] ) defines the following operator equivalence relationship : where we have introduced the following operators : ( z ) & = & \int f_{y , z|x } ( y , z|x ) r ( x ) \,dx,\nonumber\\ { } [ f_{z|x^{\ast}}r ] ( z ) & = & \int f_{z|x^{\ast } } \bigl ( z|x^{\ast } \bigr ) r \bigl ( x^{\ast } \bigr ) \,dx^{\ast } , \nonumber\\ { } [ f_{z|x}r ] ( z ) & = & \int f_{z|x } ( z|x ) r ( x ) \,dx,\\ { } [ d_{y;x^{\ast}}r ] \bigl ( x^{\ast } \bigr ) & = & f_{y|x^{\ast } } \bigl ( y|x^{\ast } \bigr ) r \bigl ( x^{\ast } \bigr ) , \nonumber\\ { } [ f_{x^{\ast}|x}r ] \bigl ( x^{\ast } \bigr ) & = & \int f_{x^{\ast } |x } \bigl ( x^{\ast}|x \bigr ) r ( x ) \,dx \nonumber\end{aligned}\ ] ] for some sufficiently regular but otherwise arbitrary function .note that , in the above definitions , is viewed as a parameter ( the operators do not act 
on it ) and that is the operator equivalent of a diagonal matrix .next , we note that the equivalence also holds [ e.g. , by integration of ( [ eqleqlll ] ) over all ] . we can then isolate and substitute the result into ( [ eqleqlll ] ) to yield , after rearrangements , where all inverses can be shown to exist over suitable domains under our assumptions .equation ( [ eqdiag ] ) states that the operator admits a spectral decomposition .the operator to be `` diagonalized '' is defined in terms of observable densities , while the resulting eigenvalues ( contained in ) and eigenfunctions ( contained in ) provide the unobserved densities of interest .a few more steps are required to ensure uniqueness of this decomposition , which we now briefly outline .one needs to ( i ) invoke a powerful uniqueness result regarding spectral decompositions [ theorem xv 4.5 in ] , ( ii ) exploit the fact that densities integrate to one to fix the scale of the eigenfunctions , ( iii ) handle degenerate eigenvalues and ( iv ) uniquely determine the ordering and indexing of the eigenvalues and eigenfunctions .this last , and perhaps most difficult , step , addresses the issue that both and , for some one - to - one function , are equally valid ways to state the eigenfunctions that nevertheless result in different operators . to resolve this ambiguity , we note that for any possible operator satisfying ( [ eqdiag ] ) , there exist a unique corresponding operator , via equation ( [ eqgivexsx ] ) .however , only one choice of leads to an operator whose kernel satisfies assumption [ condloc ] .hence , , and are identified , from which the functions , , , and can be recovered by exploiting the centering restrictions on , and . an operator approach has recently been proposed to address certain types of nonclassical measurement error problems [ ] , but under assumptions that rule out berkson - type measurement errors : it should be emphasized that , despite the use of operator decomposition techniques similar to the ones found in ( hereafter hs ) , it is impossible to simply use their results to identify the berkson measurement error model considered here , for a number of reasons .first , the key condition ( assumption 5 in hs ) that the distribution of the mismeasured regressor given the true regressor is `` centered '' around does not hold for berkson errors . consider the simple case where the berkson measurement error is normally distributed and so are the true and mismeasured regressors .the distribution of given is a normal centered at .hence , there is absolutely no reasonable measure of location ( mean , mode , median , etc . )that would yield the appropriate centering at that is needed in assumption 5 of hs .in addition , one can not simply replace the assumption of centering of given ( as in hs ) by a centering of given ( as would be required for berkson errors ) and hope that theorem 1 in hs remains valid .hs exploit the fact that , in a conditional density , there is no jacobian term associated with a change of variable in a conditioning variable ( here ) .however , with berkson errors , the corresponding change of variable would not take place in the conditional variables , and a jacobian term would necessarily appear , which makes the approach used in hs fundamentally inapplicable to the berkson case . 
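The spectral decomposition in (eqdiag) can be mimicked on a finite grid, which may help fix ideas before returning to the comparison with HS. In the toy sketch below, all conditional densities are replaced by column-stochastic matrices chosen at random (an assumption for illustration only); diagonalizing the observable operator recovers f_{y|x*} as eigenvalues and the columns of the discretized f_{z|x*} as eigenvectors, up to the ordering ambiguity that the centering restriction resolves in the text.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6                                           # grid size for x, x*, z (toy)

def col_stochastic(a):                          # columns sum to 1: discrete densities
    return a / a.sum(axis=0, keepdims=True)

F_xs_x = col_stochastic(rng.random((m, m)))     # discretized f_{x*|x}
F_z_xs = col_stochastic(rng.random((m, m)))     # discretized f_{z|x*}
f_y_xs = rng.random(m)                          # f_{y|x*}(y0|.) for one fixed y0

F_z_x = F_z_xs @ F_xs_x                         # f_{z|x} = f_{z|x*} f_{x*|x}
A = F_z_xs @ np.diag(f_y_xs) @ F_xs_x @ np.linalg.inv(F_z_x)

vals, vecs = np.linalg.eig(A)                   # A = F_z_xs diag(f_y_xs) F_z_xs^{-1}
vecs = vecs / vecs.sum(axis=0, keepdims=True)   # fix scale: densities sum to one
print(np.sort(vals.real))                       # matches the sorted f_y_xs below
print(np.sort(f_y_xs))
```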
Solving this problem involves (i) using a different operator decomposition than in HS and (ii) using a completely different approach for ``centering'' the mismeasured variable. A referee suggested an alternative argument (formalized in the ) that makes a more direct connection with Theorem 1 in HS, but under the additional assumption that and have the same dimension. Such an assumption is rather restrictive because it will often result in the assumption that is one-to-one (Assumption [condnodup]) being violated. For instance, if is scalar and we have access to two instruments and such that neither ] is strictly monotone, then is not one-to-one for either instrument used in isolation. However, the mapping ,e[z_{2}|x^{\ast}])] such that . Let and be strictly positive and bounded functions, with decreasing in and . Let . We also define suitable norms and sets for the regression functions. Here, we need to allow for functions that diverge to infinity at controlled rates toward infinite values of their argument. In analogy with any existing global measure of expected error, we also use a norm that downweights errors in the tails, which is consistent with the fact that the tails of a nonparametric regression function are always estimated with more noise, since there are fewer data points there. [defsetg] Let be some given strictly positive, bounded and differentiable weighting function. For any function , let , where . Let and , where is a given positive function that is increasing in and symmetric about . We can now state the regularity conditions needed. [condiid] The observed data are independent and identically distributed across . [condball] We have and . [conddense] The sets of functions representable as the series ([expf]) and ([expg]) are, respectively, dense in (in the norm) and (in the norm). Denseness results for numerous types of series are readily available in the literature [e.g., , ]. Although such results are sometimes phrased in a mean-square-type norm rather than the sup norm used here, Lemma [lemnorm2] below [proven in ] establishes that, within the sets and , denseness in a mean-square norm implies denseness in the norms we use. [lemnorm2] Let be a sequence in . Then implies (for and as in Definition [defsetf]). We also need standard boundedness and dominance conditions. [conddom] For any , , for and as in Definitions [defsetg] and [defsetf], respectively. [condmomex] There exists such that <\infty ] (implying a standard deviation of ). We consider a thick-tailed distribution with 6 degrees of freedom, scaled by , as the distribution of . The standard deviation of the error is almost identical to that of the ``signal'', making this estimation problem exceedingly difficult. The distribution of is a logistic scaled by , while the distribution of is a distribution with degrees of freedom scaled by . The regression function has the form , which is only finitely many times differentiable, thus limiting the convergence rate of its series estimator (the naive estimator would be less affected, since it would ``see'' a smoothed version of this function). The instrument equation has a specification that is strictly convex and therefore tends to exacerbate the bias in many nonparametric estimators. A total of independent samples, each containing observations, were generated as above and fed into our estimator.
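The data-generating design just described can be reproduced along the following lines. The scale constants and some of the assignments are elided in the text above, so the numerical values below are guesses for illustration; only the distributional families (a Student-t Berkson error with 6 degrees of freedom, a logistic outcome disturbance, a t instrument disturbance, a finitely smooth regression function and a strictly convex instrument equation) follow the description.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000                                        # sample size (illustrative)

x = rng.standard_normal(n)                      # observed proxy
dxstar = 0.5 * rng.standard_t(df=6, size=n)     # Berkson error (t, 6 dof; scale guessed)
xstar = x + dxstar                              # true, unobserved regressor
dy = rng.logistic(0.0, 0.3, n)                  # outcome disturbance (logistic)
dz = 0.3 * rng.standard_t(df=3, size=n)         # instrument disturbance (dof guessed)

g = lambda t: np.abs(t) ** 1.5                  # finitely smooth g (placeholder form)
h = lambda t: np.exp(0.5 * t)                   # strictly convex instrument equation

y = g(xstar) + dy
z = h(xstar) + dz                               # (y, z, x) is what the estimator sees
```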
For estimation purposes, the functions and are both represented by polynomials, while the densities of , and are represented by a Gaussian multiplied by a polynomial [following , who establish that these choices satisfy a suitable denseness condition]. The Gaussian is centered at the origin, but its width is left as a parameter to be estimated. Note that the functional forms considered are not trivially nested within the space spanned by the truncated sieve approximation. This was an intentional choice aimed at properly accounting for the nonparametric nature of the problem (in which the researcher never has the fortune of selecting a truncated sieve that fits the true model exactly). The integral in equation ([eqfyxz]) is evaluated numerically by discretizing it as a sum over the range . Consider now f_{z|\tilde{x}^{\ast}}(\cdot) for a variable related to through for some one-to-one function , which is also measurable, for otherwise would not be a proper random variable. Under this alternative indexing, all the assumptions of the original model must still hold with replaced by , so a relationship similar to ([eqfzxsidx]) would still have to hold for the same observed , or, in operator notation, . In order for to be a valid alternative density, it must satisfy the same assumptions (and their implications) as . In particular, the fact that is invertible (established above via Lemma [leminj]) must also hold for . Hence, for any alternative , there is a unique corresponding , given by . We can find a more explicit expression for as follows. First note that we trivially have , since and is one-to-one. By performing the change of variable in ([eqfzxsidx]), we obtain , where the measure is defined via for any measurable set , where denotes the Lebesgue measure and . From this we can conclude the equality between the two following measures: , by comparison with equation ([eqaltfzxs]) and the uniqueness of the measure due to the injectivity of the operator , shown in Lemma [leminj] in the general case where the domain of may include finite signed measures. We will now show that necessarily violates Assumption [condloc] (with replaced by ), unless is the identity function. Since with independent from , we have , and by similar reasoning with . Equation ([eqds1]) then becomes . Now, for a given , consider the Radon-Nikodym derivative of with respect to the Lebesgue measure, which is, by definition, (almost everywhere) equal to , a bounded function by Assumption [conddens]. By equation ([eqmueq]), the existence of the Radon-Nikodym derivative of the left-hand side implies the existence of the same Radon-Nikodym derivative on the right-hand side, and we can write almost everywhere .
integrating both sides of the equation over all ,we obtain ( after noting that points where the equality may fail have null measure and therefore do not contribute to the integral ) , , since densities integrate to , which implies that , that is , is also the lebesgue measure .it follows from ( [ eqallf ] ) that , almost everywhere in order for assumption [ condloc ] to hold for both and , we must have that , when viewed as a function of for any given , is centered at , and we must simultaneously have that , when viewed as a function of for any given , is centered at , that is , .the two statements are only compatible if .thus , there can not exist two distinct but observationally equivalent parametrization of the eigenvalues / eigenfunctions .hence we have shown , through equation ( [ eqdiagpr ] ) , that the unobserved functions and are uniquely determined ( up to an equivalence class of functions differing at most on a set of null lebesgue measure ) by the observed function .next , equation ( [ eqop2i ] ) implies that is uniquely determined as well .once and are known , the functions and can be identified by exploiting the centering restrictions on , and , for example , if is assumed to have zero mean .next , can be straightforwardly identified , for example , for any .similar arguments yield and from as well as from .it follows that equation ( [ eqfyxz ] ) has a unique solution .the second conclusion of the theorem then follows from the fact that both and are uniquely determined ( except perhaps on a set of null lebesgue measure ) from .the following lemma is closely related to proposition 2.4 in .it is different in terms of the spaces the operators can act on and more general in terms of the possible dimensionalities of the random variables involved .[ leminj]let and be generated by equations ( [ eqxs ] ) and ( [ eqz ] ) .let be the set of finite signed measures on a given set or [ and note that includes as a special case , in the sense that for any function in , there is a corresponding measure whose radom nikodym derivative with respect to the lebesgue measure is ] . under assumptions [ condindep ] , [ conddens ] , [ condinv ] , [ condnodup ] and [ condcont ] , the operators , and , defined in ( [ eqdefop ] ) , are injective mappings .first , one can verify that implies that and similarly for and , since the ( conditional ) densities involving variables and are bounded by assumption [ conddens ] and are absolutely integrable .we now verify injectivity of . by assumptions[ condindep ] , [ conddens ] and equation ( [ eqz ] ) , we have , for any , ( z ) = \int f_{z|x^{\ast } } \bigl ( z|x^{\ast } \bigr ) \,dr \bigl ( x^{\ast } \bigr ) = \int f_{\delta z } \bigl ( z - h \bigl ( x^{\ast } \bigr ) \bigr ) \,dr \bigl ( x^{\ast } \bigr).\ ] ] next , let denote the signed measure assigning , to any measurable set , the value and note that is a finite signed measure since is .then , we can express as ( z ) = \int f_{\delta z } \bigl ( z-\tilde{x}^{\ast } \bigr ) \,d\tilde{r } \bigl ( \tilde{x}^{\ast } \bigr),\ ] ] that is , a convolution between the probability measure of ( represented by its lebesgue density ) and the signed measure ; see chapter 5 in . 
by the convolution theorem for signed measures [ theorem 5.1(iii ) in ] , one can convert the convolution ( [ eqconvol1 ] ) into a product of fourier transforms , where fourier transforms are defined via $\int ( z ) \, e^{\mathbf{i}\zeta z} \, dz$ [ with the analogous definition for measures ] . since , the characteristic function of , is nonvanishing by assumption [ condinv ] , we can isolate as . since there is a one - to - one mapping between finite signed measures and their fourier transforms [ by theorem 5.1(i ) in ] , can be recovered as the unique signed measure whose fourier transform is . we now show that the signed measure uniquely determines the measure . let for any measurable , and note that is also measurable since is continuous by assumption [ condcont ] . then observe that , by assumption [ condnodup ] , if and only if , and we have . since is arbitrary , the knowledge of uniquely determines the value assigned to any measurable set by the signed measure . injectivity of is a special case of the above derivation ( with replaced by ) , in which is the identity function . finally , injectivity of is implied by the injectivity of and , since by assumption [ condindep ] and equations ( [ eqxs ] ) and ( [ eqz ] ) .
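two of the computational ideas above lend themselves to a short illustration : the discretized evaluation of the model integral in equation ( [ eqfyxz ] ) , and the recovery of the unknown measure by dividing characteristic functions , as used in the injectivity lemma . the sketch below is ours , not the authors' code ; all names ( ` f_y_xstar ` , ` f_xstar_xz ` , ` cf_dz ` ) and the grid choices are hypothetical , and a plain trapezoid sum stands in for whatever quadrature the paper actually uses .

```python
import numpy as np

def f_y_given_xz(y, x, z, f_y_xstar, f_xstar_xz, xs_grid):
    # discretized version of the model integral: f(y|x,z) as a weighted
    # sum of f(y|x*) over a grid of the unobserved regressor x*
    w = f_xstar_xz(xs_grid, x, z)
    w = w / np.trapz(w, xs_grid)              # renormalize on the finite grid
    return np.trapz(f_y_xstar(y, xs_grid) * w, xs_grid)

def deconvolve_cf(z_samples, cf_dz, zeta_grid):
    # fourier side of the injectivity argument: the empirical characteristic
    # function of z, divided by the nonvanishing characteristic function of
    # the berkson error, recovers the fourier transform of the unknown measure
    cf_z = np.array([np.exp(1j * zeta * z_samples).mean() for zeta in zeta_grid])
    return cf_z / cf_dz(zeta_grid)
```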
this paper establishes that so - called instrumental variables enable the identification and the estimation of a fully nonparametric regression model with berkson - type measurement error in the regressors . an estimator is proposed and proven to be consistent . its practical performance and feasibility are investigated via monte carlo simulations as well as through an epidemiological application investigating the effect of particulate air pollution on respiratory health . these examples illustrate that berkson errors can clearly not be neglected in nonlinear regression models and that the proposed method represents an effective remedy .
in this note we study a class of systems whose parameters are driven by a time - reversed markov chain . given a time horizon and a standard markov chain taking values in the set , we consider the process and the system where , as usual , represents the state variable of the system and is the control variable . these systems may be encountered in real world problems , especially when a markov chain interacts with the system parameters via a _ first in last out _ queue . an example consists of drilling sedimentary rocks whose layers can be modelled by a markov chain from bottom to top as a consequence of their formation process . the first drilled layer is the last formed one . another example is a dc - motor whose brush is ground by a machine subject to failures , leaving a series of imprecisions on the brush width that can be described by a markov chain , so that the last failure will be the first to affect the motor collector , depending on how the brush is installed . one of the most remarkable features of system is that it provides a dual for optimal filtering of standard _ markov jump linear systems _ ( mjls ) . in fact , if we consider a quadratic cost functional for system with linear state feedback , leading to an optimal control problem that we call the _ time reversed markov jump linear quadratic _ problem ( trm - jlqp ) , then we show that the solution is identical to the gains of the _ linear minimum mean square estimator _ ( lmmse ) formulated in , with time - reversed gains and transposed matrices . in perspective with existing duality relations , the one obtained here is a direct generalization of the well known relation between control and filtering of linear time varying systems as presented for instance in ( table 6.1 of ) , or also in in different contexts . as for mjls , the duality between control and filtering has been considered e.g. in , though purely in the context of standard mjls , thus leading to more complex relations involving certain generalized coupled riccati difference equations . here , the duality follows naturally from the simple reversion of the markov chain given in , with no extra assumptions nor complex constructions . another interesting feature of is that the variable , which is commonly used in the literature of mjls , , evolves along time according to a _ time - varying _ linear operator , as shown in remark [ rem_evolution_first_moment ] , in marked dissimilarity with standard mjls . this motivated us to employ , the conditioned second moment of , leading to time - homogeneous operators . the contents of this note are as follows . we present basic notation in section [ sec - notation ] . in section [ sec - sys ] we give the recursive equation describing , which leads to a stability condition involving the spectral radius of a time - homogeneous linear operator . in section [ sec - problem ] , we formulate and solve the trm - jlq problem , following a proof method where we decompose into two components so as to handle states that are visited with zero probability .
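before setting up notation , it may help to see the time reversal concretely . the minimal simulation below is our own illustration ( all names are hypothetical ) : the chain theta is generated forward in time with transition matrix p , but the plant reads it backwards , so the mode active at step k is theta ( t - k ) .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trm(A, B, u, P, p0, x0, T):
    # theta is a standard (forward) markov chain with transition matrix P ...
    N = len(A)
    theta = [int(rng.choice(N, p=p0))]
    for _ in range(T):
        theta.append(int(rng.choice(N, p=P[theta[-1]])))
    # ... while the system parameters read it in reverse time
    x = [np.asarray(x0, dtype=float)]
    for k in range(T):
        m = theta[T - k]                      # time-reversed mode index
        x.append(A[m] @ x[-1] + B[m] @ u[k])
    return np.array(x), theta
```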
the duality with the lmmse then follows in a straightforward manner , as presented in section [ sec - duality ] . concluding remarks are given in section [ sec - conclusions ] . let be the -dimensional euclidean space and be the space formed by real matrices of dimension by . we write to represent the hilbert space composed of real matrices , that is , where , . the space , equipped with the inner product , where is the trace operator and the superscript denotes the transpose , is a hilbert space . the inner product induces the norm . if , we write simply . the mathematical operations involving elements of , are used in element - wise fashion , e.g. for and in we have , where is the usual matrix multiplication . similarly , for a set of scalars we write . regarding the system setup , it is assumed throughout the paper that is a random variable with zero mean satisfying . we have and . the system matrices belong to given sets , , and with and for each . we write , where is the probability measure ; is considered as an element of , that is , . stands for the limiting distribution of the markov chain when it exists , in such a manner that . also , we denote by the transition probability matrix of the markov chain , so that for any , . no additional assumption is made on the markov chain , yielding a rather general setup that includes periodic chains , important for the duality relation given in remark [ rema - extending - to - time - varying ] . we shall deal with linear operators . we write the i - th element of by , and similarly for the other operators . for each , we define : . let denote the expected value of a random variable . we consider the _ conditioned second moment _ of defined by . [ lem - def - second - moment ] consider the system with for each . the conditioned second moment is given by and . for a fixed , arbitrary , note that from , and the total probability law , we obtain : . in order to compute the right hand side of , we need the following standard markov chain property : for any function we have . then replacing with and applying the above in ( [ eq_ayuda_1 ] ) yields , which completes the proof . [ rem_evolution_first_moment ] let , be given by . this variable is commonly encountered in the majority of papers dealing with ( standard ) mjls . however , calculations similar to those in lemma [ lem - def - second - moment ] lead to . note that the markov chain measure appears explicitly , leading to a time - varying mapping from to . the only exception is when the markov chain is _ reversible _ , in which case the facts that and that the markov chain starts with the invariant measure ( by definition ) yield , in which case evolves exactly as in a standard mjls . the following notion is adapted from ( chapter 3 of ) .
we say that the system with is _ mean square stable _ ( ms - stable ) whenever . this is equivalent to saying that the variable converges to zero as goes to infinity , leading to the following result . the system with is ms - stable if and only if the spectral radius of is smaller than one . let the output variable be given by . the trm - jlq problem then consists of minimizing the mean square of with stages , as usual in jump linear quadratic problems . regarding the information structure of the problem , we assume that is available to the controller , that the control is in linear state feedback form , where is the decision variable , and that one should be able to compute the sequence prior to the system operation , that is , is not a function of the observations , . the conditioned second moment for the closed loop system is of much help in obtaining the solution . the recursive formula for follows by a direct adaptation of lemma [ lem - def - second - moment ] , by replacing with its closed loop version . [ lem - def - second - moment - controlled - version ] the conditioned second moment is given by and . in what follows , for brevity , we denote . [ lem_cost_function ] the trm - jlq problem can be formulated as . the mean square of the terminal cost , is : . now , a calculation similar to the one above leads to . substituting and into , we obtain . let us denote the gains attaining by . from a dynamic programming standpoint , we introduce value functions by : and for , where and , , satisfies . [ teo_optimal_gains ] define and , , as follows . let and for each and , compute : if , else ( if ) , then , and , . we apply the dynamic programming approach for the costs defined in and the system in , whose state is the variable . it can be checked that is the adjoint operator of , and consequently . let us decompose as follows : let the set of states having zero probability of being visited at time be denoted by . we write , where is such that for any , and in a similar fashion for . we now show that the term is zero irrespective of . first , note that for we have . second , for one can check that for all such that , so that . this yields . bringing these facts together and recalling that by construction for all , we evaluate by substituting into . we write by expanding some terms , and after some algebra to complete the squares we have , where is as given in . this makes clear that the minimal cost is achieved by setting , . now , we replace with in , with as given in , . finally , by choosing , , we write , which completes the proof . we consider the lmmse for standard mjls as presented in . the problem consists of finding the sequence of sets of gains , , that minimizes the covariance of the estimation error when the estimate is given by a luenberger observer in the form , where is the output of the mjls and and are i.i.d . random variables satisfying , and . moreover , it is assumed that and . we write , , so that it is the time - reverse of , . note that we are considering the same problem as in , though our notation is slightly different : here we assume that is available for the filter to obtain and the system matrices are indexed by , while in the standard formulation are observed at time and the system matrices are indexed by . this `` time shifting '' in avoids clutter in the duality relation . along the same lines , instead of writing the filter gains as a function of the variable , given by the coupled riccati difference equation
( equation 24 of ) whenever and otherwise , in this note we use the variable defined in , leading to . replacing this in the above equation , after some algebraic manipulation one obtains : whenever and otherwise , with initial condition . the optimal gains are given for by whenever and otherwise . the duality relations between the filtering and control problems are now evident by direct comparison between and . and are replaced with and , respectively . moreover , comparing the initial conditions of the coupled riccati difference equations , we see replaced with . also , we note that are equivalent to , with a similar relation for the gains and . the markov chains driving the filtering and control systems are the time reverses of each other . [ rema - extending - to - time - varying ] time - varying parameters can be included both in standard mjls and in by augmenting the markov state so as to describe the pair , , , and considering a suitable matrix of higher dimension . although this reasoning leads to a matrix of high dimension , periodic and sparse , it is useful to make clear that our results are readily adaptable to plants whose matrices are in the form . either by this reasoning or by re - doing all computations given in this note for time - varying plants , we obtain the following generalization of ( table 6.1 of ) . ( table 1 : duality relations between filtering of mjls and control of ; the paired entries are not recoverable . ) we have presented an operator theory characterization of the conditional second moment , an ms - stability test and formulas for the optimal control of system . the results have exposed some interesting relations with standard mjls . for system it is fruitful to use the _ true _ conditional second moment , whereas for standard mjls one has to resort to the variable given in to obtain a recursive equation similar to the ones expressed in lemmas 3.1 and 4.1 . moreover , these classes of systems are equivalent if and only if the markov chain is reversible , as indicated in remark 1 . the solution of the trm - jlq problem is given in theorem 4.1 in the form of a coupled riccati equation that can be computed backwards prior to the system operation , as usual in linear quadratic problems for linear systems . the result beautifully extends the classic duality between filtering and control into the relations expressed in table 1 .
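as a complement to the ms - stability test mentioned in the conclusions , the following sketch is ours : ` op_apply ` stands in for the time - homogeneous operator whose exact expression ( given in the lemmas above ) did not survive extraction , so the user supplies it . the code builds the matrix representation of a linear operator acting on n - tuples of n - by - n matrices and checks whether its spectral radius is smaller than one .

```python
import numpy as np

def ms_stable(op_apply, n, N):
    # vectorize the linear operator on N-tuples of n-by-n matrices into a
    # (N*n*n) x (N*n*n) matrix, column by column, then test spectral radius < 1
    d = N * n * n
    M = np.zeros((d, d))
    for c in range(d):
        e = np.zeros(d); e[c] = 1.0
        V = [e[i*n*n:(i+1)*n*n].reshape(n, n) for i in range(N)]
        W = op_apply(V)                      # user-supplied operator
        M[:, c] = np.concatenate([w.ravel() for w in W])
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0
```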
we study a class of systems whose parameters are driven by a markov chain in reverse time . a recursive characterization of the second moment matrix , a spectral radius test for mean square stability and formulas for the optimal control are given . our results settle the following question : is it possible to extend the classical duality between filtering and control of linear systems ( whose matrices are transposed in the dual problem ) by simply adding the jump variable of a markov jump linear system ? the answer is positive provided the jump process is reversed in time .
entanglement of formation ( ) and entanglement of distillation ( ) were invented by bennett et al in ref. and satellite papers . in a series of previous papers , we showed how to express in terms of conditional mutual information ( cmi ) , but we said nothing about . in this brief letter , we will show how to express in terms of cmi . recently , other researchers have expressed some of their entanglement ideas in terms of unconditional mutual information . see , for example , ref. . two reasons why cmi is useful for quantifying entanglement are the following . first , entanglement is an exclusively `` quantum '' effect . cmi satisfies this requirement . it vanishes in the classical regime , but not in the quantum regime , for a fiducial experiment . second , entanglement is associated with a correlation between two events and . but there must be something to distinguish entanglement correlations from classical correlations . cmi satisfies this requirement too . it measures more than just the correlation of and . those two events are assumed to have a common ancestor event ( or cause , or antecedent ) in their past , call it , and we condition on that common ancestor ( see fig.[fig : ent - form ] ) . for example , in bohm's version of the epr experiment , might correspond to the event of a spin - zero particle breaking up into two spin - half particles with opposite spins . we will try to make this paper as self - contained as we can for such a short document . if the reader has any questions concerning notation or definitions , we refer him to ref. , a much longer tutorial paper that uses the same notation as this paper . we will represent random variables by underlined letters . will be the set of all possible values that can assume , and will be the number of elements in . will represent the cartesian product of sets and . in the quantum case , will represent a hilbert space of dimension . will represent the tensor product of and . red indices should be summed over ( e.g. ) . will denote the set of all probability distributions on , such that . will denote the set of all density matrices acting on . as usual , for any three random variables , , we define the _ mutual information _ ( mi ) by h ( : ) = h ( ) + h ( ) - h ( , ) , and the _ conditional mutual information _ ( cmi ) by h ( : | ) = h ( | ) + h ( | ) - h ( , | ) = h ( , ) + h ( , ) - h ( ) - h ( , , ) . since , one might be tempted to assume that also , but this is not generally true . one can construct examples for which cmi is greater or smaller than mi , a fact well known since the early days of classical information theory . one can define analogous quantities for quantum physics . suppose , with partial traces , etc . then we define s ( : ) = s ( _ ) + s ( _ ) - s ( _ , ) , and s ( : | ) = s ( _ , ) + s ( _ , ) - s ( _ ) - s ( _ , , ) . before racing off at full speed , let us warm up with a brief review of the cmi definition of . consider the classical bayesian net shown in fig.[fig : ent - form ] . it represents a probability distribution of the form : p ( a , b , ) = p ( a| ) p ( b| ) p ( ) . [ eq : prob - ent - form ] one can easily check that for this probability distribution , is identically zero . in the classical case , we define by e_f ( p _ , ) = _ p _ , , k h ( : | ) , [ eq : clas - ent - form ] where is the set of all probability distributions with a fixed marginal . thus , is a function of .
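the vanishing of cmi for a net of the form just described is easy to verify numerically . the snippet below is a minimal illustration of the definitions above , using the identity cmi = h ( a , lam ) + h ( b , lam ) - h ( lam ) - h ( a , b , lam ) ; the probability tables are made - up numbers , not taken from the paper .

```python
import numpy as np

def cmi(p):
    # conditional mutual information h(a : b | lam) in nats, from a joint
    # pmf p[a, b, lam] that sums to one
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return h(p.sum(1)) + h(p.sum(0)) - h(p.sum((0, 1))) - h(p)

# for p(a, b, lam) = p(a|lam) p(b|lam) p(lam) the cmi vanishes:
pa_l = np.array([[0.9, 0.2], [0.1, 0.8]])   # p(a|lam), hypothetical numbers
pb_l = np.array([[0.3, 0.7], [0.7, 0.3]])   # p(b|lam)
pl = np.array([0.5, 0.5])                   # p(lam)
p = np.einsum('al,bl,l->abl', pa_l, pb_l, pl)
assert abs(cmi(p)) < 1e-12
```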
if contains a of the form given by eq . ( [ eq : prob - ent - form ] ) , then . this is always true if is defined to contain all probability distributions with arbitrary positive values of . but it may not be true if contains only probability distributions with a fixed value . the fact that the right hand side of eq . ( [ eq : clas - ent - form ] ) vanishes in the classical case ( if includes all values ) is an important motivation for defining this way . we want a measure of entanglement that is exclusively quantum . in the quantum case , suppose is a probability distribution for , and is an orthonormal basis for . for all , suppose , and . consider a `` separable '' density matrix of the form _ , , = _ . [ eq : rho - ent - form ] one can easily check that for this density matrix , . in the quantum case , we define by e_f ( _ , ) = _ _ , , k s ( : | ) , [ eq : quan - ent - form ] where equals the set of all with arbitrary , fixed marginal . thus , is a function of . if contains a of the form given by eq . ( [ eq : rho - ent - form ] ) , then is zero . the quantum can be nonzero even if contains all density matrices with arbitrary values . in eq . ( [ eq : quan - ent - form ] ) , we could set , where is the subset of which restricts to be of the form _ , , = _ w _ , where . one can show that eq . ( [ eq : quan - ent - form ] ) with is identical ( up to a factor of 2 ) to the definition of originally given by bennett et al in ref. . other possible choices come to mind . for example , one could set equal to , where is that subset of which restricts to be of the form _ , , = _ w _ , where need not be pure . represent different degrees of information about how was created . represents total ignorance . in this section , we will define a classical . in the next section , we will find a quantum counterpart for it . consider the classical bayesian net of fig.[fig : ent - dist ] . the arrow from to allows what is often referred to as `` classical communication from alice to bob '' . let , . let and . the net of fig.[fig : ent - dist ] satisfies : p ( a , b , a , b ) = _ x , x p ( a , a| a , a ) p ( b , b | b , b , a , a ) p ( x ) p ( x ) , [ eq : prob - ent - dist ] where [ eq : prob - sepa ] p ( x ) = _ p ( a| ) p ( b| ) p ( ) , and p ( x ) = _ p ( a| ) p ( b| ) p ( ) . we wish to consider only those experiments in which and are both fixed at a known value , call it 0 for definiteness . for such experiments , one considers : p ( a , b | a = b = 0 ) = . henceforth we will use as a short - hand for the string `` '' . we will also use to denote and to denote . in the classical case , we define by e_d ( p _ , p_ ) = _ u , v _ p _ , , | k h ( : | , ) , where is the set of all probability distributions with a fixed marginal that satisfies eq . ( [ eq : prob - ent - dist ] ) . depends on . since we maximize over , is a function of and . next we will show that the net of fig.[fig : ent - dist ] , * without the classical communication arrow * , satisfies : e_d ( p _ , p_ ) \leq e_f ( p _ ) + e_f ( p_ ) . [ eq : ed - lt - ef ] suppose we could show that h ( : | , , ) \leq h ( ( , ) : ( , ) | , ) . [ eq : cmis - goal ] after taking limits on the left hand side , this gives e_d \leq h ( , : , | , ) . [ eq : cmis - left ] note that , by the independence of the primed and unprimed variables , h ( , : , | , ) = h ( : | ) + h ( : | ) . [ eq : cmis - right ] eqs . ( [ eq : cmis - left ] ) and ( [ eq : cmis - right ] ) imply eq . ( [ eq : ed - lt - ef ] ) . so let us concentrate on establishing eq . ( [ eq : cmis - goal ] ) . events all occur before so they are independent of .
therefore , we can write : h ( , : , | , ) = h ( , : , | , , ) . [ eq : gamma - indep ] because of eq . ( [ eq : gamma - indep ] ) , eq . ( [ eq : cmis - goal ] ) is equivalent to : h ( : | , , ) \leq h ( , : , | , , ) . [ eq : cmis - goal2 ] eq . ( [ eq : cmis - goal2 ] ) follows easily from the following lemma , which is proven in appendix [ app : dpis ] . lemma : the net of fig.[fig : data - pro ] satisfies h ( : | ) \leq h ( : | ) . [ eq : dat - pro - cmi ] in this section , we will give a quantum counterpart of the classical defined in the previous section . as in the classical case , let , . let and . suppose and are given . suppose is a unitary transformation mapping onto : u^ _ , | , u _ , | , = 1 . likewise , suppose that for each , is a unitary transformation mapping onto : v^a _ , | , v _ , | , ^a = 1 . define the following projector on : _ ^a = _ _ . now consider the following density matrix _ , | = _ a _ ^a u _ , | , v _ , | , ^a _ _ v^a _ , | , u^ _ , | , _ ^a , [ eq : rho1 - ent - dist ] where is defined so that . the previous equation can also be expressed in index notation . finally , we define by e_d ( _ , _ ) = _ u , v _ _ , , | k s ( : | , ) , [ eq : quan - ent - dist ] where contains all density matrices with a fixed marginal that satisfies eq . ( [ eq : rho1 - ent - dist ] ) . in this appendix , we will prove two well known data processing inequalities . [ lemma : dpi - re ] ( data processing inequality for relative entropy , see ref. ) if and is a matrix of non - negative numbers such that , then d ( p//q ) \geq d ( tp//tq ) , where should be understood as the matrix product of the column vector times the matrix . for any two random variables , let be shorthand for . in other words , for all . the two cmi we are dealing with can be rewritten in terms of relative entropy as follows : h ( : | ) = _ p ( ) d ( q^ _ , // q^ _ q^ _ ) , and h ( : | ) = _ p ( ) d ( q^ _ , // q^ _ q^ _ ) . thus , if we can show that d ( q^ _ , // q^ _ q^ _ ) \leq d ( q^ _ , // q^ _ q^ _ ) , then the present lemma will be proven . the last inequality will follow from lemma [ lemma : dpi - re ] if we can find a transition probability matrix such that .
* r.r . tucci , `` quantum entanglement and conditional information transmission '' , quant - ph/9909041
* r.r . tucci , `` separability of density matrices and conditional information transmission '' , quant - ph/0005119
* r.r . tucci , `` entanglement of formation and conditional information transmission '' , quant - ph/0010041
* r.r . tucci , `` relaxation method for calculating quantum entanglement '' , quant - ph/0101123
* r.r . tucci , `` entanglement of bell mixtures of two qubits '' , quant - ph/0103040
this data processing inequality for relative entropy is well known . ref. mentions it on page 300 . our proof of the inequality comes from page 55 of the book by i. csiszar and j. korner , `` information theory - coding theorems for discrete memoryless systems '' ( academic press , 1981 ) .
in previous papers , we expressed the entanglement of formation in terms of conditional mutual information ( cmi ) . in this brief paper , we express the entanglement of distillation in terms of cmi .
when the beam position changes during the conventional global orbit correction process , the photon beam through the beamline is affected , and this disturbs the alignment of mirrors and monochromators . this is particularly severe for long beamlines such as the undulator beamlines . this problem can be overcome by introducing a local bump at the particular beamline . however , for some light sources , there are not enough corrector magnets to generate as many local bumps as needed . this difficulty can be overcome by correcting the cod under the condition that the beam positions at particular points are not changed . developing such a method of closed orbit correction under constraint conditions is our main objective . the beam position is normally described as a vector measured by beam position monitors ( bpm ) . in order to correct the cod , we need corrector magnets with their strengths described as a vector . when the corrector magnets kick a beam , the new beam positions can be described as follows . here , is called the response matrix of dimensions , whose components are given by , where is the betatron tune of the storage ring , and and are the beta function and the phase function for the bpm and corrector magnet , respectively . in order to reduce the cod , we have to choose the kick of each corrector magnet satisfying . this is called the psinom algorithm . the cod correction is actually a minimization procedure of , defined as . by using the relations of vector operators , which are hyper - dimensional gradient operators , we can get the same result as the psinom algorithm , as shown in eq . ( [ coc_eqn_ordinary_cod_relation ] ) . this algorithm includes an inversion procedure of the matrix . in some cases , we can get unacceptable corrections due to the ill - posedness of . a regularization method is introduced to avoid this problem . in this case , is written as , where is the regularization parameter . then the minimum corrector kicks can be determined by . this equation represents the modified psinom algorithm , and we can relax the inversion problem of the singular matrix by using the diagonal matrix when is singular . after having the new closed orbit with the minimum distortion from eq . ( [ coc_eqn_modified_psinom ] ) , the new orbit is generally different from the original orbit . sometimes , this difference can take place at very sensitive locations such as the entrance and the exit of an undulator . if the beamline is well aligned for this undulator , a cod correction should be avoided in this region . a constraint condition can be described in terms of the beam position at bpm such as . here , is the row of the response matrix . also , and are the beam positions before and after the correction , respectively . since we want to keep this position unchanged , should be zero . if there are bpms involved in the constraint condition , we can write the constraint condition as follows . here , is the ( ) sub - matrix of the response matrix . each component of corresponds to a bpm involved in the constraint condition . we also assume that has a non - trivial solution . we now add this constraint condition to the modified psinom algorithm to obtain the new objective function such as + \frac{1}{2} \langle \mathbf{k} | \alpha | \mathbf{k} \rangle + \langle \boldsymbol{\gamma} | \mathbf{c}^{t} | \mathbf{k} \rangle . here , is the lagrange multiplier , and it is an -dimensional vector .
by taking the derivative with respect to the corrector strength , we can get the vector which minimizes the closed orbit distortion outside the constraint region such as . here , we define the square matrices of dimension and dimension as follows . eq . ( [ coc_eqn_new_correction ] ) can be rewritten as . now , we can remove the lagrange multiplier in eq . ( [ coc_eqn_new_correction ] ) by using the above equation . then , we can finally get the kick values of the corrector magnets as follows . the algorithm developed in the previous section has been successfully tested for pohang light source ( pls ) operation . the new correction code is written in the c language and is installed on one of the operator consoles , which is a sun workstation . although the pls control system does not use the experimental physics and industrial control system ( epics ) , which is used in many accelerator laboratories world - wide , there is a plan to upgrade the control system based on epics technology . as a part of such upgrade activity , we have started the development of the orbit correction algorithm with epics . since we do not have any epics - based control system yet , we decided to develop the orbit correction simulator using epics . in order to seek a way to adopt our orbit correction algorithm into epics , we have considered two ways : one based on the subroutine record and the other using the state notation language ( snl ) program . the first method needs a new record support that runs the orbit correction algorithm . to do this , the correction code must be written according to the protocol required by the record support , such as the entry structure and the callback structure . this approach gives a relatively fast response because this method uses database access and can access the record in process passive mode . this is a good feature for a realtime system , but the large portion of the orbit correction code we have already tested must be rewritten according to the protocol of the record support . one of the important tasks of the epics input / output controller ( ioc ) is the sequencer that runs programs written in snl . the snl considers the control object as a state machine and treats transitions between states . the sequencer monitors the transitions for the snl and runs callback functions using the entry table of the corresponding program written in snl . since the sequencer accesses the record via channel access ( ca ) and the record access is only possible through non - process passive mode , there are some restrictions in access time or in the treatment of records . however , this method can directly imbed a program written in the c language . also , unlike the subroutine record , this method can remove program tasks without rebooting the system , which makes code debugging very easy . thus , the second method gives more benefits when the system does not have heavy realtime demands . upon reviewing the two methods , we have decided to use the latter method . the orbit correction simulator is developed at the ioc level and has the snl program imbedding c code and several database records , as shown in fig . [ fig_schemetic ] . there are two parts in the snl program : one for the feedback , including the orbit correction algorithm , and the other for the simulator , which emulates the pls storage ring . the latter calculates orbit changes from the _ ai : kick(bpm_no ) _ records .
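the correction algorithm invoked at this point can be checked with a few lines of numerical code . the sketch below is our reading of the derivation , not the installed c code : it solves the equivalent kkt system for the regularized , constrained minimization ( minimizing the residual orbit at the unconstrained bpms , with no orbit change allowed at the constrained ones ) ; the dimensions and the regularization value are illustrative .

```python
import numpy as np

def constrained_correction(R, x, constr_rows, alpha=1e-3):
    # kicks k minimizing |x_f + R_f k|^2 + alpha |k|^2 subject to R_c k = 0,
    # where R_c collects the rows of the constrained bpms
    Rc = R[constr_rows]
    free = [i for i in range(R.shape[0]) if i not in set(constr_rows)]
    Rf, xf = R[free], x[free]
    n, m = R.shape[1], Rc.shape[0]
    A = Rf.T @ Rf + alpha * np.eye(n)        # regularized normal matrix
    K = np.block([[A, Rc.T], [Rc, np.zeros((m, m))]])
    rhs = np.concatenate([-Rf.T @ xf, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                           # sol[n:] are the lagrange multipliers

# quick check on random data: the orbit change at the constrained bpms is ~0
rng = np.random.default_rng(1)
R, x = rng.normal(size=(108, 70)), rng.normal(size=108)
k = constrained_correction(R, x, [10, 11, 12])
assert np.allclose(R[[10, 11, 12]] @ k, 0.0, atol=1e-9)
```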
in the next step , the corrector strengths are calculated by the algorithm described in the previous section , and the results are stored in the _ ai : kick(kick_no ) _ record , which uses the analog input record to represent the corrector strength . there are 70 records altogether , representing the 70 correctors in the pls storage ring . they link the simulator and the feedback in a parallel manner . the index _ ( bpm_no ) _ in records such as _ ai : corrorbit(bpm_no ) _ is an integer value between one and 108 . the superposition of these two records gives the new orbit . this is done by the calculation record _ calc : orbit(bpm_no ) _ ; the superposition is newly calculated whenever the values of the two input records are changed . on the other hand , _ ai : numbpm _ , _ ai : numcorrector _ , _ ai : startconstr _ , and _ ai : endconstr _ represent the numbers of bpms and correctors used in the correction algorithm and the start and end points of the constraint region , respectively . they use analog input records and notify the snl program when necessary . the other records are binary input records representing the necessary status flags . they are used as the mediators of state transitions between state sets in the snl program . we have developed the cod correction algorithm under the constraint condition that the beam position at a particular point is not changed . the new algorithm is based on the modified psinom algorithm , which includes a regularization process in order to avoid the inversion problem of ill - posed response matrices . we have confirmed that this algorithm works well and is in good agreement with the experimental results . even though pls is planning to upgrade its control system with epics , there is no working epics - based control system at pls yet . due to this , we have developed the orbit correction simulator using a c - code embedded snl program based on epics technology . this simulator part can be replaced by the real control system with minor changes after the completion of the upgrade .
w. herr : `` algorithms and procedures used in the orbit correction package cocu , '' cern sl/95 - 07 ( ap ) , 1995 .
y. n. tang and s. krinsky : proc . aip conf . * 315 * ( aip press , 1993 ) 87 .
see url _ http://www.aps.anl.gov/epics _
martin r. kraimer : `` epics ioc application developer 's guide , '' aps / anl , 1998 .
andy kozubal : `` state notation language and sequencer user guide , '' lanl , 1995 .
philip stanley , _ et al . _ : `` epics record reference manual , '' lanl , aps / anl , 1995 .
kukhee kim , jinhyuk choi , tae - yeon lee , guinyun kim , moohyun cho , won namkung , in soo ko : jpn . j. appl . phys . * 40 * ( 2001 ) 4233 .
we have carried out basic research for an accelerator and tokamak control system based on the experimental physics and industrial control system ( epics ) . we have used the process database and the state notation language ( snl ) in epics to develop a simulator which serves as a virtual machine . in this paper , we introduce the simulator of the global orbit feedback system as an example . it simulates the global orbit feedback system under constraint conditions for the pohang light source ( pls ) storage ring . we describe the details of the feedback algorithm and the realization of the simulator .
in climate dynamics research , analysis of time series data has a central position . detection and quantification of dependence between measured or modeled variables is often of interest . apart from the dependences among different physical quantities , in many contexts the task is to assess the dependence among the measurements of the same physical variable ( i.e. surface air temperature , sat ) measured at many different geographical locations . the motivation for such a procedure commonly stems from the need to reduce the dimensionality of high - dimensional original data , such as in the application of empirical orthogonal function analysis to uncover the basic modes of dynamics of the climate system . on the other hand , dependence quantification might be used to uncover the complex structure of the climate system using approaches such as graph theory . other applications exist , including those combining the above - named ones , see e.g. . there is a wide range of methods available for detection of dependence between variables . the most widely known and used is pearson s correlation coefficient , a measure particularly sensitive to linear dependence . while pearson s correlation detects dependence reliably in the case of multivariate gaussian probability distributions , it may be suboptimal in the case of complex non - gaussian dependence patterns . for particular dependence patterns ( or bivariate probability distributions ) , it may also fail to detect statistical dependence between variables of interest completely . note that in the following we use the terms ` linear ' and ` gaussian ' dependence interchangeably to denote patterns of dependence corresponding to the bivariate normal distribution . while the latter term is more precise , the former is more commonly used in the general community together with the distinction between linear and nonlinear methods . however , alternative measures exist that are able to better reflect potential non - linear dependences . these include spearman s ordinal correlation coefficient and kendall s tau , which are designed to be sensitive to any monotonous dependence pattern , without the restriction to linear relationships . an ultimate alternative to pearson s correlation coefficient then lies in the utilization of mutual information , an information - theory based measure that is in principle sensitive to any dependence between variables . for this generality , mutual information is widely used to quantify statistical dependence in complex systems , and has also been introduced to the analysis of climate time series . as the climatic system is highly nonlinear , it seems well motivated to use nonlinear dependence measures for analysis of the measured time series , as suggested e.g. in . this may in theory allow more sensitive detection and quantification of dependences , potentially uncovering new climatic phenomena . on the other hand , nonlinear measures such as mutual information may have downsides including more difficult implementation and interpretation , increased computational demands and numerical stability issues . these considerations motivate the central question of the current report : does the non - linear component of the climate time series dependences sufficiently motivate the use of nonlinear dependence measures ? it is important to note that the answer to this question might be complex , and certainly would be domain specific .
to deal with this complication , we outline here first a generally applicable framework , and then show the results obtained by analyzing in detail a specific dataset of particular interest .this is the monthly sat data from the ncep / ncar reanalysis dataset , as well as concatenated era-40 and era - interim data .note that this data has been analyzed in many recent studies , utilizing both linear and nonlinear methods , and so constitutes a well motivated timely and relevant example of application of this framework to guide the decision regarding the method choice .data from the ncep / ncar reanalysis dataset have been used . in particular , we utilize the time series of the monthly mean sat from january 1948 to december 2007 ( time points ) , sampled at latitudes and longitude forming a regular grid with a step of .the points located at the globe poles have been removed , giving a total of spatial sampling points . to explore the generalizability of the results ,analogous analysis has been carried out for the era-40 dataset concatenated with the era - interim dataset ( further referred to together as the era data ) . to minimize the bias introduced by periodic changes in the solar input , the mean annual cycleis removed from the data to produce so - called anomaly time series .we discuss two dependence measures throughout the paper , pearson s correlation coefficient ( linear correlation ) and ( nonlinear ) mutual information . given two real random variables , the well - known pearson s correlation coefficient is defined as }{e((x - e(x))^2)e((y - e(y))^2)}},\ ] ] where denotes the expected value operator .the corresponding finite sample estimate is denoted by . for two discrete random variables with sets of values and ,the mutual information is defined as where is the probability distribution function of , is the probability distribution function of and is the joint probability distribution function of and . for continuous variables ,mutual information is defined by the respective integral . however , in practice the mutual information is estimated using discretization of the theoretically continuous variables .when the discrete variables are obtained from continuous variables on a continuous probability space , then the mutual information depends on a partition chosen to discretize the space .a common choice is a simple box - counting algorithm based on marginal equiquantization method , i.e. , a partition is generated adaptively in one dimension ( for each variable ) so that the marginal bins become equiprobable .this means that there is approximately the same number of data points in each marginal bin . in this paperwe use a simple pragmatic choice of bins for each marginal variable .mutual information is a non - negative quantity , with corresponding to independence of the variables and , and units depending on the base of the logarithm ( base 2 corresponds to bits , while natural logarithm with base corresponds to nats , used here ) .as the estimation of mutual information from finite sample size is prone to sample size dependent bias , to allow quantitative comparison we carry out an approximate correction by recalibration procedure described elsewhere .this procedure is in general based on comparison with samples of the same size and coming from populations with analytically established mutual information . 
to elucidate our nonlinearity assessment strategy , we first point out that for a bivariate gaussian distribution , the correlation of the variables uniquely defines the mutual information between them , which is given by $-\frac{1}{2}\log ( 1-r^{2 } )$ , where $r$ denotes the correlation . however , for a general non - gaussian bivariate distribution , this equation may not hold . two cases of bivariate non - gaussianity can be distinguished . firstly , the ` simpler ' nonlinearity consists in non - linear rescaling of one or both of the variables . such a rescaling does not affect the mutual information between the variables ; however , the correlation may change substantially . rescaling of this type can be suspected in data e.g. due to non - linear properties of the measurement scale , and may be considered as a bias in the correlation estimation . a remedy commonly adopted is the use of the spearman rank correlation coefficient . an alternative procedure lies in preprocessing the data by applying a monotonous transformation to each variable separately that would render it gaussian ( `` marginal normalization '' ) , and computing the correlation on the transformed data . a second , more ` substantial ' type of non - gaussianity lies in that some bivariate distributions differ from the bivariate gaussian not only in their marginal distributions , but also in the form of the interdependence , which can not be resolved by univariate rescaling . this substantial non - gaussianity is the key motivation for the use of nonlinear dependence measures , as the dependence pattern can not be recovered by only considering ranks or other rescaled versions of the variables . recently , a quantification method for such deviation from gaussian dependence has been proposed , building on the fact that for univariately gaussian random variables , the correlation gives a lower bound on the mutual information , $i \geq -\frac{1}{2}\log ( 1-r^{2 } )$ , with the minimum obtained for the bivariate gaussian distribution . in particular , one can define the neglected ( ` extra - normal ' or ` non - gaussian ' ) information as the difference $i + \frac{1}{2}\log ( 1-r^{2 } )$ . our investigation of the impact of nonlinear contributions to the climate dependence network is carried out in several steps with increasing level of detail . as a first step , for each pair of local time series , the correlation and mutual information are computed , and their overall relation shown in a scatter plot for visual inspection of systematic deviation from the relation valid under gaussianity . minor noisy deviations are of course expected due to estimation of the quantities from finite size samples . the step is repeated for univariately normalized variables to isolate the effect of simple rescaling ; the normalized variables are further used in the subsequent analysis . notably , the deviation of the relation between the two estimated quantities and from the theoretical prediction under gaussianity ( ) could still be attributed to the nonlinear properties of the bi - variate dependences , but also to different estimator properties of correlation and mutual information ( including residual mutual information estimator bias uncorrected by the procedure mentioned above ) .
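in code , the gaussian part and the neglected part of the dependence separate cleanly . this hedged illustration reuses the estimator sketched above ; the exact estimator details in the paper may differ .

```python
import numpy as np

def gaussian_mi(r):
    # mutual information of a bivariate gaussian with correlation r (nats)
    return -0.5 * np.log(1.0 - r ** 2)

def neglected_information(x, y, bins=4):
    # 'extra-normal' information: full mi estimate minus the gaussian bound
    r = np.corrcoef(x, y)[0, 1]
    return mutual_information(x, y, bins) - gaussian_mi(r)
```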
to isolate the genuine nonlinearity from apparent differences due to different estimator properties , and to allow more robust quantitative comparison , the mutual information estimates from data are further compared to mutual information estimates obtained for the respective pairs of _ linear surrogate data _ . the surrogate data conserve the linear structure ( covariance and autocovariance ) and hence also the correlations of the original data , but remove non - gaussianity ( non - linearity ) from the multivariate distribution . thus , in the large sample size limit , the mutual information between pairs of variables in the surrogate data should be equal to the gaussian value $-\frac{1}{2}\log ( 1-r^{2 } )$ given by the conserved correlation . technically , linear surrogate data are conveniently constructed as multivariate fourier transform ( ft ) surrogates ; i.e. obtained by computing the fourier transform of the series , keeping unchanged the magnitudes of the fourier coefficients ( the amplitude spectrum ) , but adding the same random number to the phases of coefficients of the same frequency bin ; the inverse ft into the time domain is then performed . thus , instead of comparing the estimators of two different quantities and ( or using the rescaled version of the first one , i.e. , as an estimate of linear information based on the correlation estimator ) , we can compare the values and obtained using the same estimator on both the original dataset and its linearized ( surrogate ) version . as an added value , generation of the surrogates allows direct statistical testing of the gaussianity of the studied process , as they provide random samples with a predefined covariance structure . for this second purpose , a set of such surrogate datasets is generated . for each pair of locations , one can test the hypothesis that the time series come from a bivariate linear stochastic process with gaussian dependence among the variables by comparing the obtained mutual information estimate with the empirical distribution of the gaussian mutual information estimates . the one - sided hypothesis is rejected at significance level if the data mutual information is higher than at least ( out of ) of the surrogate mutual information values . apart from testing statistical significance of the deviations from linearity , we also describe the strength and localization of the effect . this can be conveniently visualized by estimating for each location the average mutual information between the location and all other locations : for original data by and for a surrogate dataset by . geographical rendering of the difference and the relative difference allows effective visual inspection of the most substantial nonlinearity localization . such automatic localization of substantially nonlinear dependencies can be followed by inspection of the corresponding temporal patterns and expert assessment of their relevance for the studied phenomena . based on this investigation , the use of nonlinear measures for ( a subset of ) the data may be recommended to exploit the nonlinear information in the data , or alternatively further postprocessing may be proposed to clean the data from spurious sources of apparent nonlinearity , as shown below .
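the surrogate construction and the pairwise test can be sketched as follows ( our illustration ; the number of surrogates and the bin count are placeholders ) . a single random phase per frequency bin is added to all channels , which preserves all cross - correlations while destroying any non - gaussian structure .

```python
import numpy as np

def ft_surrogates(data, rng=np.random.default_rng()):
    # multivariate fourier-transform surrogate: keep all amplitude spectra,
    # add one common random phase per frequency bin, transform back
    n = data.shape[0]                        # data: time x locations
    F = np.fft.rfft(data, axis=0)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=F.shape[0])
    phase[0] = 0.0                           # keep the mean real
    if n % 2 == 0:
        phase[-1] = 0.0                      # keep the nyquist coefficient real
    return np.fft.irfft(F * np.exp(1j * phase)[:, None], n=n, axis=0)

def nonlinearity_pvalue(data, i, j, bins=4, n_surr=100):
    # one-sided test: fraction of surrogates whose mi between locations i, j
    # reaches the mi estimated from the data
    mi_data = mutual_information(data[:, i], data[:, j], bins)
    mi_surr = [mutual_information(*ft_surrogates(data[:, [i, j]]).T, bins)
               for _ in range(n_surr)]
    return float(np.mean(np.array(mi_surr) >= mi_data))
```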
to achieve that , the standard deviation of temperature anomalies is computed for each month separately and the anomaly data from a given month are divided by this standard deviation . the second additional preprocessing step was the removal of slow trends from the data . for simplicity , only a linear trend was considered here . the general relation between the mutual information and linear correlation for pairs of sat anomalies is visualized in figure [ fig : figuremivsc ] . the mutual information is in general very strongly related to the correlation . the relation is even clearer after removing the estimator differences by representing the linear dependence in terms of mutual information in the surrogates , see figure [ fig : figuremivsmi ] . the univariate gaussianization does not change the correlation coefficients substantially ( see figure [ fig : figurecvscmarg ] ) and does not affect much the relation between data and linear surrogates ( see figure [ fig : figuremivsmimarg ] ) . the deviation from a purely gaussian structure of the data has been tested separately for each pair of variables by comparison with a surrogate dataset distribution . more than % of time series pairs showed significant at the level . to investigate the observed deviations from linear dependences in more detail , we visualize the total and nonlinear contributions to dependence patterns in figure [ fig : lokalizace - skalovane - anom ] . at first inspection one can see several well - defined areas of relatively high nonlinear contribution to dependence patterns . these are in particular an extensive ring around antarctica within the southern ocean , a few locations close to the north pole ( barents sea , bering sea , baffin bay , greenland sea ) and areas in brazil and southwest asia . to understand the nonlinear dependence pattern in more detail , we select the locations with the highest relative nonlinear dependences and visualize both their linear and nonlinear dependence patterns with respect to all other locations , see figure [ fig:6fcpatterns ] . for most of these areas , the nonlinear dependences are not generally stronger , but rather include additional distant locations , in contrast with the mostly local character of linear dependence patterns . this might suggest the existence of long - range interactions or `` teleconnections '' of highly or predominantly non - linear character , as discussed e.g. in . to elucidate the nature of these long - range connections we inspected the bivariate distributions and time series of the variables . a representative example is shown in figure [ fig : figure_scatter ] . the shape of the bivariate distribution together with close inspection of the time series suggests that the non - gaussianity might be related to seasonal variability in variance of the signal , which further differs between the two locations . in this particular case , the variability at the first location is the highest in december to february , when it is at its lowest at the second location , and vice versa in july . thus , the information shared by these time series would be explainable just by the seasonal differences of dynamics and ultimately just by variation in local solar influx .
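the two preprocessing steps introduced above amount to a few lines each . the following sketch is ours and assumes the anomalies are stored as a time - by - location array with a parallel vector of calendar months .

```python
import numpy as np

def normalize_seasonal_variance(anom, months):
    # divide each anomaly by the standard deviation of its calendar month
    out = anom.astype(float).copy()
    for m in range(1, 13):
        sel = months == m
        out[sel] /= anom[sel].std(axis=0)
    return out

def detrend_linear(x):
    # remove a least-squares linear trend from each column
    t = np.arange(x.shape[0], dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coef
```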
to test this hypothesis and control for this source of bias , we re - analyze the data after normalizing the seasonal variance , as described in section [ sec : mat ] . after this additional preprocessing step , there was a marked decrease in detected pairs of locations with statistically significant nonlinear contribution to temperature dependence ( at the level ) ; however , this is still more than expected by chance . the localization of nodes with the strongest ( non)linear dependences is shown in figure [ fig : lokalizace - skalovane - varnorm ] . by comparison to figure [ fig : lokalizace - skalovane - anom ] we can see that the strongest contributors to apparent nonlinear dependences have been mitigated by this data cleaning step . the maxima of the relative nonlinear contributions are now much lower and are located almost purely in the equatorial ocean regions . as previously , we investigate the form of a nonlinear dependence pattern related to the strongest source of nonlinearity in the thus - preprocessed data . figure [ fig : figure_scatter_varnorm ] shows a typical example of the bivariate distributions and time series . the source of the observed non - gaussianity of bivariate dependence can commonly be tracked down to an apparent non - stationarity of the time series . in particular , in the example shown in figure [ fig : figure_scatter_varnorm ] there is a strong ( almost linear ) trend , which might be of interest for other reasons , but can be considered as spurious with respect to detection of climate interactions on the time scale considered . this motivates additional detrending of the time series followed by yet another replication of the analysis . the results suggest that the trends are indeed responsible for a major part of the yet remaining apparent non - linearity . in particular , the number of detected pairs of locations with statistically significant nonlinear contribution to temperature dependence goes further down to at the level .
similarly , the average nonlinear contribution to mutual information is further substantially reduced , see figure [ fig : lokalizace - skalovane - varnorm - detrend ] . the analysis was replicated for the era dataset . the general strength and distribution of nonlinear dependence within the era dataset are very similar to those in the ncep / ncar dataset , see figures [ fig : figuremivsmi_era ] , [ fig : lokalizace - skalovane - anom_era ] , [ fig : lokalizace - skalovane - varnorm_era ] and [ fig : lokalizace - skalovane - varnorm - detrend_era ] . the fractions of pairs of nodes with significant nongaussianity at the level were , and percent in the original anomalies , the variance - normalized data and the detrended variance - normalized data , respectively , suggesting just a slightly weaker contribution of the nonlinearity in the era dataset , and clearly not much more than expected by chance for linear data in the last case . in the previous sections we have outlined an approach for a detailed multi - step analysis of the relevance of nonlinearity in the dependence structure of climate time series and the results for monthly sat reanalysis data . the overall picture is in general that of negligible nonlinearities in the data ; the most substantial apparent ` nonlinearities ' are attributable to nonstationarity effects . therefore , the tentative suggestion with respect to the choice of dependence measure for this type of data would be to use a linear measure ( pearson correlation coefficient ) , potentially after removing data nonstationarity by preprocessing , as some of the nonstationarities may also affect pearson correlation estimates , albeit differently than the nonlinear mutual information . an obvious question is that of the generalizability of the specific findings . explicitly , we have confirmed similar results in the era reanalysis sat data . also in daily ncep / ncar data we have observed relatively negligible nonlinearity , after removing nonstationarities in variance as well as trends ( results not shown ) . while we have only shown results for the most commonly used grid with fixed angular resolution , based on inspection of the spatiotemporal structure of the data we conjecture that analogous results would be observed for other resolutions as well as area - corrected grids as used e.g. by . the present results extend those of who tested for possible nonlinearity in the dynamics of the station ( prague - klementinum ) sat time series and found that the dependence between the sat time series and its lagged twin was well explained by a linear stochastic process . this result about a linear character of the temporal evolution of sat time series , as well as the results of this study about relations between the reanalysis sat time series from different grid - points , can not be understood as arguments for a linear character of atmospheric dynamics . these results rather characterize properties of measurement or reanalysis data at a particularly coarse level of resolution , when the data reflecting a spatially and temporally averaged mixture of dynamical processes on a wide range of spatial and temporal scales are considered . for instance , a closer look at the dynamics on specific temporal scales in temperature and other meteorological data has led to the identification of oscillatory phenomena with nonlinear behavior , exhibiting phase synchronization .
also for other variables with vastly different dynamics we can expect substantial nonlinearity in bivariate dependences , especially if the measurement / model has sufficient spatial and temporal resolution . conceptually , an analysis similar to that presented here is warranted before the decision for use of a linear or nonlinear dependence measure for each substantially different dataset . the work on the preparation of a semi - automated tool for such analysis is ongoing . despite the relative sparsity of the substantially nonlinear dependence patterns , even after the two additional preprocessing steps we have observed more than the expected proportion of significantly nonlinear dependences ( instead of the expected ) . given the number of tests carried out ( several million location pairs ) , this small deviation likely constitutes a globally significant deviation from multivariate gaussianity , although the intricate interdependence of the pair tests themselves makes the estimation of the exact p - value for a global linearity hypothesis technically very difficult . from a practical point of view , it may be argued that the statistical determination of the above - random presence of apparent nonlinearity should not play a key role in method choice . firstly , the more detailed quantitative analysis has already shown that even where present , the effect is relatively weak . secondly , even though for some region pairs there may be a statistical indication of nonlinear dependence , it is realistic to suspect ( given the results for the raw data , see figures [ fig : figure_scatter ] and [ fig : figure_scatter_varnorm ] ) that this apparent nonlinearity may be due to an as yet undiscussed type of nonstationarity , and visual inspection of the data is required . indeed , several further spurious nonlinearities have been detected due to various uncorrected problems within the reanalysis data ( some of which corresponded to known problems as described in ) . this leads to the consideration of the strongest nonlinear pattern observed in the current data even after the two additional preprocessing steps on top of those carried out in . as shown in figure [ fig : lokalizace - skalovane - varnorm - detrend ] , ocean areas particularly in the tropical pacific still bear a slightly elevated non - gaussian contribution to dependence patterns . an example of such a dependence pattern and the related time series is shown in figure [ fig : nonlinearremainder ] . note that the area of strongest residual non - gaussianity roughly corresponds to regions implicated in enso dynamics ; for visual comparison we plot the area and time series for the el nino 3.4 index ( the nino 3.4 region is bounded by - and - ) . the nino3.4 sst index data were downloaded from the noaa / nws climate prediction center ( http://www.cpc.ncep.noaa.gov , accessed november 15th , 2012 ) . we hypothesize that the observed pattern reflects the nonlinear character of enso dynamics observed e.g. by . quantification of dependences between variables is a common task within the study of climate or other complex systems . the choice of dependence measures is commonly based on some theoretical assumptions about the underlying system .
in the case of climatic time series , this may lead to the choice of nonlinear methods due to the nonlinear nature of the underlying physical processes . however , for various reasons including spatiotemporal sampling or averaging , measured or modeled data might in fact be well captured by linear measures , with their nonlinear counterparts potentially reducing sensitivity or introducing bias . we have presented a multi - step approach that allows the detailed assessment of the nonlinear contribution to dependence patterns in a dataset , including not only statistical testing , but also quantification , localization and analysis of sources of this contribution . this approach can provide a rationale for the decision regarding the choice of a suitable dependence measure for a given type of data , as well as direct the analyst's attention to hidden crucial properties of the dataset . importantly , the presented approach is quite general . it is transferable to other geoscientific datasets , as well as to other disciplines such as neuroscience ; see e.g. for an earlier application of a related approach and for an example focused on an assessment of the nonlinearity effects on graph - theoretical network characteristics . let us note that the construction and interpretation of graphs representing climate networks poses further challenges . for monthly sat in the ncep / ncar dataset and similar data , the analysis suggests that the use of linear dependence methods is generally sufficient , potentially after the treatment of the described nonstationarity sources . it has also been shown that the quantitative study of the amount of nonlinearity in the data provides , as a side product , valuable hints about potentially hidden specific properties of the data that may otherwise go unnoticed and bias the results , were just a single method used without detailed analysis . this study is supported by the czech science foundation , project no . p103/11/j068 . dee dp , uppala sm , simmons aj , berrisford p , poli p , kobayashi s , andrae u , balmaseda ma , balsamo g , bauer p , bechtold p , beljaars acm , van de berg l , bidlot j , bormann n , delsol c , dragani r , fuentes m , geer aj , haimberger l , healy sb , hersbach h , holm ev , isaksen l , kallberg p , koehler m , matricardi m , mcnally ap , monge - sanz bm , morcrette jj , park bk , peubey c , de rosnay p , tavolato c , thepaut jn , vitart f ( 2011 ) the era - interim reanalysis : configuration and performance of the data assimilation system . quarterly journal of the royal meteorological society 137(656 , part a):553 - 597 donges jf , schultz hch , marwan n , zou y , kurths j ( 2011 ) investigating the topology of interacting networks theory and application to coupled climate subnetworks . european physical journal b 84(4):635 - 651 hartman d , hlinka j , palus m , mantini d , corbetta m ( 2011 ) the role of nonlinearity in computing graph - theoretical properties of resting - state functional magnetic resonance imaging brain networks . chaos 21(1 ) kalnay e , kanamitsu m , kistler r , collins w , deaven d , gandin l , iredell m , saha s , white g , woollen j , zhu y , chelliah m , ebisuzaki w , higgins w , janowiak j , mo k , ropelewski c , wang j , leetmaa a , reynolds r , jenne r , joseph d ( 1996 ) the ncep / ncar 40-year reanalysis project .
bulletin of the american meteorological society 77(3):437471 kistler r , kalnay e , collins w , saha s , white g , woollen j , chelliah m , ebisuzaki w , kanamitsu m , kousky v , van den dool h , jenne r , fiorino m ( 2001 ) the ncep - ncar 50-year reanalysis : monthly means cd - rom and documentation .bulletin of the american meteorological society 82(2):247267 palus m , novotna d ( 2004 ) enhanced monte carlo singular system analysis and detection of period 7.8 years oscillatory modes in the monthly nao index and temperature records .nonlinear processes in geophysics 11(5 - 6):721729 palus m , novotna d ( 2011 ) northern hemisphere patterns of phase coherence between solar / geomagnetic activity and ncep / ncar and era40 near - surface air temperature in period 7 - 8 years oscillatory modes .nonlinear processes in geophysics 18(2):251260 uppala s , kallberg p , simmons a , andrae u , bechtold v , fiorino m , gibson j , haseler j , hernandez a , kelly g , li x , onogi k , saarinen s , sokka n , allan r , andersson e , arpe k , balmaseda m , beljaars a , van de berg l , bidlot j , bormann n , caires s , chevallier f , dethof a , dragosavac m , fisher m , fuentes m , hagemann s , holm e , hoskins b , isaksen l , janssen p , jenne r , mcnally a , mahfouf j , morcrette j , rayner n , saunders r , simon p , sterl a , trenberth k , untch a , vasiljevic d , viterbo p , woollen j ( 2005 ) the era-40 re - analysis. quarterly journal of the royal meteorological society 131(612 , part b):29613012
quantification of relations between measured variables of interest by statistical measures of dependence is a common step in the analysis of climate data . the term " connectivity " is used in the network context , including the study of complex coupled dynamical systems . the choice of dependence measure is key for the results of the subsequent analysis and interpretation . the use of the linear pearson's correlation coefficient is widespread and convenient . on the other hand , as the climate is widely acknowledged to be a nonlinear system , nonlinear connectivity quantification methods , such as those based on information - theoretical concepts , are increasingly used for this purpose . in this paper we outline an approach that enables a well - informed choice of connectivity method for a given type of data , improving the subsequent interpretation of the results . the presented multi - step approach includes statistical testing , quantification of the specific nonlinear contribution to the interaction information , localization of nodes with the strongest nonlinear contribution and assessment of the role of specific temporal patterns , including signal nonstationarities . in detail we study the consequences of the choice of a general nonlinear connectivity measure , namely mutual information , focusing on its relevance and potential alterations in the discovered dependence structure . we document the method by applying it to monthly mean temperature data from the ncep / ncar reanalysis dataset as well as the era dataset . we have been able to identify the main sources of observed nonlinearity in inter - node couplings . detailed analysis suggested an important role of several sources of nonstationarity within the climate data . the quantitative role of genuine nonlinear coupling at this scale has proven to be almost negligible , providing quantitative support for the use of linear methods for this type of data .
preconditioning is a technique developed originally for the iterative solution of linear systems that aims at accelerating the convergence of the iterations . in its simplest form , the system is multiplied by a matrix such that the spectral condition number of , the ratio of the largest to the smallest singular value thereof , is considerably smaller than that of , which generally leads to faster convergence . iterative methods for solving linear systems normally do not require and to be explicitly formed as matrices : it is sufficient that matrix - vector multiplications are implemented and performed via user - defined procedures . the same is true for iterative methods that compute eigenvalues and eigenvectors of a very large matrix , as , e.g. , in , calculating one eigenvector of a 100-billion size matrix , or in . a classical application area for preconditioned solvers is discretized boundary value problems for elliptic partial differential operators ; see , e.g. , . with multigrid preconditioning , preconditioned solvers may achieve linear complexity on problems from this area ; see , e.g. , and references therein for symmetric eigenvalue problems . dyakonov's seminal work , summarized in , proposes " spectrally equivalent " preconditioning for elliptic operator eigenvalue problems in order to guarantee convergence that does not deteriorate with the increasing dimension of the discretized problem . owing to this , for large enough problems such preconditioners outperform direct solvers , which factorize the original sparse matrix . inevitable matrix fill - ins , especially prominent in discretized differential problems in more than two spatial dimensions , destroy the matrix sparsity , resulting in computer memory overuse and non - optimal performance . preconditioning has also long since been a key technique in _ ab initio _ calculations in material sciences ; see , e.g. , and references therein . in the last decade , preconditioning for graphs has been attracting growing attention as a tool for achieving optimal complexity for large data mining problems , e.g. , for graph bisection and image segmentation using graph laplacian and fiedler vectors since ; for recent work see , e.g. , . preconditioned iterative methods for the original linear system are in many cases mathematically equivalent to standard iterative methods applied to the preconditioned system . for example , the classical richardson iteration step applied to the preconditioned system becomes where is a suitably chosen scalar . turning now to eigenvalue problems , let us consider the computation of an eigenvector of a real symmetric positive definite matrix corresponding to its smallest eigenvalue . borrowing an argument from , suppose that the targeted eigenvalue , or a sufficiently good approximation thereof , is known . then the corresponding eigenvector can be computed by solving a homogeneous linear system , or , equivalently , the system , where is an identity . the richardson iteration step now becomes theoretically , the best preconditioners for and are , correspondingly , and , where denotes a pseudo - inverse , making both richardson iteration schemes , and , converge in a single step with .
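to make the two schemes tangible , here is a minimal , matrix - free python sketch of the preconditioned richardson iteration for a linear system and of its eigenvector analogue with a known shift ; the function names , the fixed step size and the iteration count are our own illustrative assumptions .

```python
import numpy as np

def richardson(matvec, b, precond, tau, x0, iters=200):
    # preconditioned richardson for A x = b:  x <- x - tau * T (A x - b)
    x = x0.copy()
    for _ in range(iters):
        x -= tau * precond(matvec(x) - b)
    return x

def richardson_eig(matvec, lam, precond, tau, x0, iters=200):
    # eigenvector analogue: iterate on the singular system (A - lam I) x = 0
    # for a known (or well approximated) eigenvalue lam, renormalizing
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x -= tau * precond(matvec(x) - lam * x)
        x /= np.linalg.norm(x)
    return x
```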
under the standard assumption , both in and , convergence theory is straightforward , e.g. , in terms of the spectral radius of . sharp explicit convergence bounds , not relying on generic constants , can be derived in the form of inequalities that allow one to determine whether the convergence deteriorates with the increasing problem size by analyzing every term in the bound . for some classes of eigenvalue problems , the efficiency of choosing has been demonstrated , both numerically and theoretically , in . this choice allows the easy adaptation of a vast variety of preconditioners already developed for linear systems to the eigensolvers . in practice , the theoretical value in the richardson iteration above has to be replaced with its approximation . a standard choice for is the rayleigh quotient function , leading to it is well known that the rayleigh quotient gives a high quality ( quadratic ) approximation of the eigenvalue , if the sequence converges to the corresponding eigenvector . thus , asymptotically as where , methods and are equivalent , and so may be their asymptotic convergence rate bounds . however , asymptotic convergence rate bounds naturally contain generic constants , which are independent of , but may depend on the problem size . due to the changing value , a non - asymptotic theoretical convergence analysis is much more difficult , compared to the case for linear systems , even for the simplest methods , such as the richardson iteration . dyakonov's pioneering work from the eighties , summarized in , chapter 9 , includes the first non - asymptotic convergence bounds for preconditioned eigensolvers , proving their linear convergence with a rate which can be bounded above independently of the problem size . just a few of the known bounds are sharp . one of them is proved for the simplest preconditioned eigensolver with a fixed step size in a series of papers by neymeyr over a decade ago ; see and references therein . the original proof has been greatly simplified and shortened in by using a gradient flow integration approach . in this paper we present a new self - contained proof of a sharp convergence rate bound from for the preconditioned eigensolver , theorem [ t.1 ] . following the geometrical approach of , we reformulate the problem of finding the convergence bound for as a constrained optimization problem for the rayleigh quotient . the main novelty of the proof is that here we use inequality constraints , which brings to the scene the karush - kuhn - tucker ( kkt ) theory ; see , e.g. , . kkt conditions allow us to reduce our convergence analysis to the simplest scenario in two dimensions , which is the key step in the proof . we have also found several simplifications in the two - dimensional convergence analysis , compared to that of . we believe that the new proof will greatly enhance the understanding of the convergence behavior of increasingly popular preconditioned eigensolvers , whose application area is quickly expanding ; see , e.g. , . we consider a real generalized eigenvalue problem with real symmetric positive definite matrices and . the objective is to approximate iteratively the smallest eigenvalue by minimizing the rayleigh quotient . a direct formulation of the convergence analysis with respect to this form of the eigenvalue problem has some disadvantages . instead , the inverted form with results in a more compact representation of the problem and the proof ( many inverses like and can be avoided ) , cf . .
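in code , the fixed - step preconditioned gradient eigensolver with the rayleigh quotient in place of the exact eigenvalue can be sketched as follows ; this is a schematic illustration under our own naming assumptions , not the authors' implementation , written for the minimization form targeting the smallest eigenpair .

```python
import numpy as np

def pinvit(matvec, precond, x0, iters=100):
    # fixed-step preconditioned gradient iteration:
    # x' = x - T (A x - rho(x) x), with rho(x) the rayleigh quotient
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        ax = matvec(x)
        rho = x @ ax                     # rayleigh quotient (x is normalized)
        x = x - precond(ax - rho * x)    # preconditioned residual correction
        x /= np.linalg.norm(x)
    return x @ matvec(x), x
```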
for this inverted form the objective is to approximate the largest eigenvalue of by maximizing the rayleigh quotient . we denote the eigenvalues by , which can have arbitrary multiplicity . the corresponding eigenspaces are denoted by . the increase of can be achieved by correcting the current iterate along the preconditioned gradient of the rayleigh quotient , i.e. see and references therein . if , then and method turns into with , discussed in the introduction . in all our prior work on preconditioned eigensolvers for symmetric eigenvalue problems , including , we have always assumed that the preconditioner is a symmetric and positive definite matrix , typically satisfying conditions or equivalent , up to the scaling of . recently , the authors of have noticed and demonstrated that does not have to be symmetric positive definite , and a less restrictive assumption can be used instead , where denotes the largest singular value of the matrix , and is the symmetric positive definite square root of . it is verified in that and are equivalent if is symmetric and positive definite . in what follows , we give a complete and concise proof of the following convergence rate bound , first proved in . [ t.1 ] if and satisfies , then for given by it holds that either or the first step , lemma [ l.1 ] , of the proof of theorem [ t.1 ] is the same as that in , where we characterize a set of possible next step iterates in varying the preconditioner constrained by assumption , aiming at eliminating the preconditioner from consideration . the only difference is that in we start with changing an original coordinate basis to an -orthogonal basis , which transforms into the identity , resulting in a one - line proof of lemma [ l.1 ] . here , we choose to present a detailed proof of lemma [ l.1 ] , for a general , demonstrating that the transformation of into the identity , made after lemma [ l.1 ] , is well justified . [ l.1 ] let us denote and and define a closed ball centered at . let satisfy , then for given by it holds that left - multiplying by gives or , in our new notation , resulting in . since by , we get the second step of the proof is traditional : reducing the generalized symmetric eigenvalue problem to the standard eigenvalue problem for the symmetric positive definite matrix by making the change of variables as hinted by lemma [ l.1 ] . we use the standard inner product in variables , i.e. and the corresponding vector norm , so , e.g. , we later use and -based scalar products and norms defined as follows , e.g. , for brevity we drop the subscript in the rest of the paper . in the following , refers to , refers to , and so on , cf . lemma [ l.1 ] . furthermore , , and method is . the new form of condition is . this means that approximates the identity matrix with respect to the notation used in lemma [ l.1 ] . the closed ball has the form with the radius centered at . since and , we can estimate by using a minimizer of in ( i.e. by considering the worst case ) . we observe that , effectively , we set without loss of generality . [ s3 ] the main idea of the geometrical approach of , which we also employ in this paper , is that the convergence rate of iterations is slowest , in terms of the rayleigh quotient , if is a linear combination of two eigenvectors , which makes the further convergence analysis trivial . a new proof of this fact actually occupies a major part of our paper .
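the two - dimensional reduction can be previewed numerically . for the power method , the special case treated in the next section , a direct computation in the eigenbasis gives the exact identity $(\lambda_1-\lambda(x'))/(\lambda(x')-\lambda_2) = (\lambda_2/\lambda_1)^2\,(\lambda_1-\lambda(x))/(\lambda(x)-\lambda_2)$ whenever $x$ is a combination of two eigenvectors ; the snippet below is our own check of this classical fact , with illustrative eigenvalues .

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 1.0                   # two distinct eigenvalues, lam1 > lam2
for _ in range(5):
    c = rng.uniform(0.1, 1.0, size=2)   # x = c1 v1 + c2 v2 in the eigenbasis
    rho = (lam1 * c[0]**2 + lam2 * c[1]**2) / (c**2).sum()
    c2 = np.array([lam1 * c[0], lam2 * c[1]])   # one power step, x' = A x
    rho2 = (lam1 * c2[0]**2 + lam2 * c2[1]**2) / (c2**2).sum()
    lhs = (lam1 - rho2) / (rho2 - lam2)
    rhs = (lam2 / lam1)**2 * (lam1 - rho) / (rho - lam2)
    assert np.isclose(lhs, rhs)         # the sharp two-dimensional identity
```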
in order to illustrate how such a dramatic reduction in dimension becomes possible , in this section we apply our technique to a simplified case corresponding to . it is not difficult to see that under this assumption turns into one iteration of the power method . our proof of theorem [ t.1 ] in the next section uses powerful tools , which are uncommon in numerical linear algebra and thus may catch an unprepared reader off guard . the role of this section is to serve as a gentle introduction to the main proof . here we assume so that in order to simplify the analysis , then relation turns into the power method , , and bound holds with and thus . let us make the historical note that exactly this result apparently first appeared in . the left - hand side of bound is monotone in . one way to find out at which the behavior of is the worst is to minimize for all that satisfy for some fixed . slightly abusing the notation in the proof , we keep denoting by both the initial approximation in and the vector in the minimization problem . we notice that is equivalent to . therefore , at a stationary point we have , using lagrange multipliers , that where is some constant . this yields which can be rewritten as where . since implies , we obtain which shows that . thus , equation can be viewed as a polynomial equation , where is a third degree polynomial with positive first and last coefficients , specifically and , correspondingly . inserting , where are the projections of onto the eigenspaces , leads to . since the eigenspaces are orthogonal to each other , the products must be zero for each . owing to the positiveness of the first and last coefficients , the polynomial must have a non - positive root , and thus at most two positive roots , i.e. can be zero for at most two indexes and , allowing only the possibly nonzero and from all projections . we conclude that is a linear combination of at most two normalized eigenvectors and , corresponding to distinct eigenvalues and of the matrix . we assume without loss of generality that , then similarly , since , we obtain let , then implies . by using the monotonicity of the ratio of the quotients in and and the fact that the vector here corresponds to the worst - case scenario , i.e. minimizing over all with the fixed value , we obtain with . since is an equality , we also prove that the upper bound in with is sharp , turning into an equality if the initial approximation in satisfies . in the next section , we apply the described dimensionality reduction technique to the general case . we formulate the conditions that " the worst case " must satisfy , which yield the generalization of equation , and rewrite this equation as a cubic equation . we show that the first and last coefficients of this equation are positive , which , as we have just seen , implies that is a linear combination of two eigenvectors . a simple two - dimensional analysis completes the proof of theorem [ t.1 ] . [ s4 ] next the proof of theorem [ t.1 ] is given : let us denote and define , a closed ball with the radius centered at . on the one hand , it holds that for any vector since is not an eigenvector and . indeed , taking into account , we have so that .
on the other hand , , since , see lemma [ l.1 ] . this proves and , thus , the left inequality in , provided that in the previous proof with , the ball shrinks to a single point and the only choice of is possible . the present case is significantly more difficult for the worst - case scenario analysis , involving a minimization problem with two variables , and . in our previous work , see and references therein , we first vary intending to minimize for a given , and then vary fixing . the first minimization problem defines as an implicit function of , and then lagrange multipliers are used , as in section [ s3 ] , to analyze the second minimization problem , in . it turns out that the proof is much simpler if we vary both and at the same time and attack the required two - parameter minimization problem in and directly by using the kkt arguments as provided below . [ t.reduc ] for and a fixed value that is not an eigenvalue of , let a pair of vectors denote a solution of the following constrained minimization problem : if is not an eigenvector of , then both and belong to a two - dimensional invariant subspace of corresponding to two distinct eigenvalues , and where denotes an angle between two vectors defined by . we consider the equivalent problem we first notice that the assumption implies because of the first constraint and . thus , is correctly defined . next , let us temporarily consider a stricter constraint , instead of . combined with the other constraints , this results in minimization of the smooth function on a compact set , so there exists a solution . finally , let us remove the artificial constraint and notice that any nonzero multiple of is also a solution . thus we can consider the karush - kuhn - tucker ( kkt ) conditions , see , e.g. , theorem 9.1.1 in , in any neighborhood of which does not include the origin . next we show that the gradients of and are linearly independent . for the gradient of , it holds that with , since is not an eigenvector of , and it holds that , since does not depend on . assuming the linear dependence of the gradients of and implies that , so that and . by using , it holds that while , i.e. ( using again the assumed linear dependence ) the vector is an eigenvector of , and , hence , is an eigenvector of , contradicting the lemma assumption . therefore , the gradients of and are linearly independent . all functions involved in our constrained minimization are smooth . we conclude that the stationary point is regular , i.e. , the kkt conditions are valid . the kkt stationarity condition states that there exist constants and such that at the critical point . the independent variables no longer appear , so to simplify the notation , in the rest of the proof we drop the superscript and substitute for . we separately write the partial derivatives with respect to , and with respect to , the kkt complementary slackness condition must be satisfied , implying if is an eigenvector then in condition , leading to , i.e. , the vector is also an eigenvector of , thus we are done . now we consider the nontrivial case , where neither nor is an eigenvector . condition then implies , so identity holds unconditionally , condition turns into and taking the inner product of with gives taking the inner products of both sides of with results in therein we use and . denoting , we rewrite as taking the inner products of both sides of with yields where the orthogonality has been used again . therefore , we obtain , which implies .
substituting and multiplying through by in results in multiplying through by and substituting , which follows from , we obtain , where is a third degree polynomial with and , which cannot have more than two positive roots . inserting the expansion with the projections of on the eigenspaces leads to for every , cf . section 3 . since and , the polynomial must have a non - positive root , and thus at most two positive roots . hence can be nonzero for at most two ( since ) and thus is a linear combination of two normalized eigenvectors and corresponding to two distinct eigenvalues ( cf . section 3 ) , i.e. . since , by so is . furthermore , the orthogonality from shows that and . this leads to , since the angles between vectors have the range $[0,\pi]$ . the numerator of is also a monotonically increasing function and its denominator is monotonically decreasing in , and , which proves , and hence . , nonsymmetric preconditioning for conjugate gradient and steepest descent methods , _ procedia computer science _ , 51 ( 2015 ) , pp . 276 - 285 . a preliminary version available at arxiv:1212.6680 [ cs.na ] , 2012 , http://arxiv.org/abs/1212.6680 , modern preconditioned eigensolvers for spectral image segmentation and graph bisection , workshop on clustering large data sets , third ieee international conference on data mining ( icdm 2003 ) , 2003 , http://math.ucdenver.edu/~aknyazev/research/conf/icdm03.pdf a. v. knyazev and k. neymeyr , a geometric theory for preconditioned inverse iteration . iii : a short and sharp convergence estimate for generalized eigenvalue problems , _ linear algebra appl . _ , 358 ( 2003 ) , pp . 95 - 114 . a. v. knyazev and k. neymeyr , efficient solution of symmetric eigenvalue problems using multigrid preconditioners in the locally optimal block conjugate gradient method , _ electronic transactions on numerical analysis _ , 15 ( 2003 ) , pp . 38 - 55 . , preconditioned eigensolvers for large - scale nonlinear hermitian eigenproblems with variational characterizations . i. conjugate gradient methods , research report 14 - 08 - 26 , department of mathematics , temple university , august 2014 , revised april 2015 , to appear in _ mathematics of computation _ , https://www.math.temple.edu/~szyld/reports/nlpcg.report.rev.pdf , preconditioned eigensolvers for large - scale nonlinear hermitian eigenproblems with variational characterizations . interior eigenvalues , research report 15 - 04 - 10 , department of mathematics , temple university , april 2015 , to appear in _ siam journal on scientific computing _ , http://arxiv.org/abs/1504.02811 , high - performance computing for exact numerical approaches to quantum many - body problems on the earth simulator , in proceedings of the 2006 acm / ieee conference on supercomputing ( sc 06 ) , acm , new york , ny , usa , article 47 , 2006 .
preconditioned iterative methods for the numerical solution of large matrix eigenvalue problems are increasingly gaining importance in various application areas , ranging from material sciences to data mining . some of them , e.g. , those using multilevel preconditioning for elliptic differential operators or graph laplacian eigenvalue problems , exhibit almost optimal complexity in practice , i.e. , their computational costs to calculate a fixed number of eigenvalues and eigenvectors grow linearly with the matrix problem size . theoretical justification of their optimality requires convergence rate bounds that do not deteriorate with the increase of the problem size . such bounds were pioneered by e. dyakonov over three decades ago , but to date only a handful have been derived , mostly for symmetric eigenvalue problems . just a few of the known bounds are sharp . one of them is proved in [ ] for the simplest preconditioned eigensolver with a fixed step size . the original proof has been greatly simplified and shortened in [ ] by using a gradient flow integration approach . in the present work , we give an even more succinct proof , using novel ideas based on karush - kuhn - tucker theory and nonlinear programming . symmetric ; preconditioner ; eigenvalue ; eigenvector ; rayleigh quotient ; gradient ; iterative method ; karush - kuhn - tucker theory . 65f15 65k10 65n25 _ dedicated to the memory of evgenii g. dyakonov , moscow , russia , 1935 - 2006 . _
in the past decade , distributed cooperative control for multi - agent systems , particularly the consensus problem , has gained much attention and significant progress has been achieved , e.g. , . almost all studies assume that information can be continuously transmitted between agents with infinite precision . in practice , such an idealized assumption is often unrealistic , so information transmission should be considered in the analysis and design of consensus protocols . there are two main approaches to handle the communication limitation : event - triggered and quantized control . in event - triggered ( and self - triggered ) control , the control input is piecewise constant and transmission happens at discrete events . for instance , provided event - triggered and self - triggered protocols in both centralized and distributed formulations for multi - agent systems with undirected graph topology ; proposed a self - triggered protocol for multi - agent systems with switching topologies . other authors considered systems with quantized sensor measurements and control inputs . the authors of the papers combined event - triggered control with quantized communication . for example , considered model - based event - triggered control for systems with quantization and time - varying network delays ; presented decentralised event - triggered control in multi - agent systems with quantized communication . when considering event - triggered control in multi - agent systems with quantized communication or sensing , several aspects require special attention . firstly , the notion of the solution should be clarified since in some cases the classic or hybrid solutions may not exist . for instance , and used the concept of filippov solution when they considered quantized sensing . secondly , the zeno behavior must be excluded . thirdly , the need for continuous access to neighbors' states should be avoided . in , which is a key motivation for the present paper , the authors did not explicitly discuss the first aspect and used periodic sampling to exclude the zeno behavior . they did not give any accurate upper bound on the sampling time , which restricts the application of the results . inspired by and , we propose centralized and distributed self - triggered rules for multi - agent systems with quantized communication or sensing . under these rules , the existence of a unique trajectory of the system is guaranteed and the frequency of communication and system updating is reduced . the main contribution of the paper is to show that the trajectory exponentially converges to a practical consensus set . it is shown that continuous monitoring of the triggering condition can also be avoided . an important aspect of this paper is that the weakest fixed interaction topology is considered , namely , a directed graph containing a spanning tree . the proposed self - triggered rules are easy to implement in the sense that the triggering times of each agent are related only to its in - degree . the rest of this paper is organized as follows : section [ sec2 ] introduces the preliminaries ; section [ sec3 ] discusses self - triggered consensus with quantized communication ; section [ sec4 ] treats instead self - triggered consensus with quantized sensing ; simulations are given in section [ sec5 ] ; and the paper is concluded in section [ sec6 ] . in this section we will review some results on algebraic graph theory and stochastic matrices .
for a matrix , the element at the -th row and -th column is denoted as ; and denote . the _ ( weighted ) laplacian matrix _ is defined as . a directed path from agent to agent is a directed graph with distinct agents and links . we say a directed graph has a spanning tree if there exists at least one agent such that for any other agent , there exists a directed path from to . obviously , there is a one - to - one correspondence between a graph and its adjacency matrix or its laplacian matrix . in the following , for the sake of simplicity in presentation , we sometimes do not explicitly distinguish a graph from its adjacency matrix or laplacian matrix , i.e. , when we say a matrix has some graphic properties , we mean that these properties are held by the graph corresponding to this matrix . a matrix is called a _ nonnegative matrix _ if for all , and is called a _ stochastic matrix _ if is square , nonnegative and for each . a stochastic matrix is called _ scrambling _ if , for any and , there exists such that both and are positive . moreover , given a nonnegative matrix and , the -matrix of , which is denoted as , has its element at the -th row and -th column , , given by . if has a spanning tree , we say contains a -spanning tree . similarly , if is scrambling , we say is -scrambling . a nonnegative matrix is called a _ stochastic indecomposable and aperiodic _ ( sia ) matrix if it is a stochastic matrix and there exists a column vector such that , where is the -vector containing only ones . two -dimensional stochastic matrices and are said to be of the same _ type _ , denoted by , if they have zero elements and positive elements in the same places . let denote the number of different types of all sia matrices in , which is a finite number for given . for two matrices and of the same dimension , we write if is a nonnegative matrix . throughout this paper , we use to denote the left product of matrices . here , we introduce some lemmas that will be used later . from corollary 5.7 in , we have [ lem1 ] for a set of stochastic matrices , if there exists and such that and contains a -spanning tree for all , then there exists , such that is -scrambling . from lemma 6 in , we have [ lem2 ] let be matrices with the property that for any , is sia , where is a constant ; then is -scrambling for any . ( ) for a real matrix , define the ergodicity coefficient and its hajnal diameter . [ remark1 ] obviously , if is a stochastic matrix , then . moreover , if is -scrambling for some , then . [ lem3 ] ( ) if and are stochastic matrices , then . [ lem4 ] ( ) for a vector $\in\mathbb{r}^n$ and , . from the self - triggered rule , for any given , the system can arbitrarily choose for every agent . then , in the interval , . then , from lemma [ lem1 ] , for any positive integer , we know that is -scrambling for some . then from remark [ remark1 ] , lemma [ lem3 ] and lemma [ lem4 ] , we have where . thus for any , there exists a positive integer such that is no more than , where and . moreover , for any positive integer , every agent triggers at least once during with ; then , we can rewrite ( [ system ] ) and ( [ input ] ) as . now we consider the evolution of . if agent does not trigger at time , then . thus if agent triggers at time , then . assume that is the last update of agent before , where the integer is the number of triggers triggered by other agents between . then , .
noting $ = \bigcup_{m = k - d_{ik}}^{k}(t_m , t_{m+1}]$ , , , with $\in\mathbb{r}^{n(\tau_1 + 1)\times n(\tau_1 + 1)}$ and $\in\mathbb{r}^{n(\tau_1 + 1)\times n(\tau_1 + 1)}$ . from ( [ property1 ] ) and ( [ property2 ] ) , we know that is a stochastic matrix . we can rewrite ( [ solutiony3 ] ) as . ( b ) next , we will prove that there exists such that for any , is scrambling , where . from ( [ property1 ] ) and ( [ property2 ] ) , we know that is a nonnegative matrix for any and , and . hence , . denote $\in\mathbb{r}^{n(\tau_1 + 1)\times n(\tau_1 + 1)}$ and . then , where . from lemma [ lem5 ] , we know that , for any , since each agent triggers at least once during . here we choose a such that . for any , note that $[\,\cdot\,]^{[(\delta_f)^{1/\tau_2}]} = \prod_{i=(k_1 - 1)\tau_2 + 1}^{k_2\tau_2 } [ e(i)]^{[(\delta_f)^{1/\tau_2}]}$ and the first block row sum of $(\cdot)^{(\delta_f)^{1/\tau_2}}$ is for every agent . similarly , after has been chosen , the system can arbitrarily choose . the only solution to ( [ system ] ) with input ( [ inputmc ] ) is . particularly , we have . then , , where . the proof follows similarly to the proof of theorem [ thm1 ] . in this subsection , we consider the distributed self - triggered consensus rule . similar to theorem [ thm2 ] , we have [ thm4 ] under the assumptions and self - triggered rule of theorem [ thm2 ] , the trajectory of ( [ system ] ) with input ( [ inputm ] ) exponentially converges to the consensus set , where is a positive constant which can be determined by . we omit the proof since it is similar to the proof of theorem [ thm2 ] . in this section , a numerical example is given to demonstrate the effectiveness of the presented results . consider a network of seven agents with a directed reducible laplacian matrix , which is described by the graph in fig . [ fig:1 ] . the initial value of each agent is randomly selected within the interval in our simulation and the next triggering time is randomly chosen from the permissible range using a uniform distribution . the uniform quantizing function used here is if . fig . [ fig:2 ] shows the evolution of under the four self - triggered rules treated in theorems [ thm1]-[thm4 ] with and . in this simulation , it can be seen that under all self - triggering rules all agents converge to the consensus set with . the dots in fig . [ fig:2 ] indicate the triggering times of each agent . let the quantizer parameter take different values . fig . [ fig:3 ] illustrates under the four self - triggering rules for different . the curves show the averages over 100 overlaps . as expected , the smaller , the smaller the consensus set . in this paper , consensus problems for multi - agent systems defined on directed graphs under self - triggered control have been addressed . in order to reduce the overall need of communication and system updates , centralized and distributed self - triggered rules have been proposed for the situation in which only quantized information can be transmitted , i.e. , quantized communication , and the situation in which each agent can sense only quantized values of the relative positions between neighbors , i.e.
, quantized sensing . it has been shown that the trajectory of each agent exponentially converges to the consensus set if the directed graph contains a spanning tree . the triggering rules can be easily implemented since they are related only to the degree matrix . interesting future directions include considering stochastically switching topologies and more precise expressions of the consensus sets . f. xiao and l. wang , `` asynchronous consensus in continuous - time multi - agent systems with switching topology and time - varying delays , '' _ automatic control , ieee transactions on _ , vol . 53 , no . 8 , pp . 1804 - 1816 , 2008 . k. you and l. xie , `` network topology and communication data rate for consensusability of discrete - time multi - agent systems , '' _ automatic control , ieee transactions on _ , vol . , pp . 2262 - 2275 , 2011 . x. l. yi , w. l. lu and t. p. chen , `` pull - based distributed event - triggered consensus for multi - agent systems with directed topologies , '' _ neural networks and learning systems , ieee transactions on _ , to appear . d. v. dimarogonas and k. h. johansson , `` stability analysis for multi - agent systems using the incidence matrix : quantized communication and formation control , '' _ automatica _ , vol . 4 , pp . 695 - 700 , 2010 . h. yu and p. j. antsaklis , `` event - triggered output feedback control for networked control systems using passivity : achieving stability in the presence of communication delays and signal quantization , '' _ automatica _ , vol . 1 , pp . 30 - 38 , 2013 . e. garcia and p. j. antsaklis , `` model - based event - triggered control for systems with quantization and time - varying network delays , '' _ automatic control , ieee transactions on _ , vol . 2 , pp . 422 - 434 , 2013 . e. garcia , y. c. cao , h. yu , p. j. antsaklis and d. casbeer , `` decentralised event - triggered cooperative control with limited communication , '' _ international journal of control _ , vol . 9 , pp . 1479 - 1488 , 2013 . b. liu , w. l. lu and t. p. chen , `` consensus in networks of multiagents with switching topologies modeled as adapted stochastic processes , '' _ siam journal on control and optimization _ , pp . 227 - 253 , 2011 .
the consensus problem for multi - agent systems with quantized communication or sensing is considered . centralized and distributed self - triggered rules are proposed to reduce the overall need of communication and system updates . it is proved that these self - triggered rules realize consensus exponentially if the network topologies have a spanning tree and the quantization function is uniform . numerical simulations are provided to show the effectiveness of the theoretical results .
in the state - of - the - art mobile communication systems , a network operator possesses a spectrum license that provides exclusive transmission rights for a particular range of radio frequencies . spectrum assignment based on dedicated licenses resolves the issues related to inter - operator interference but it also results in low spectrum utilization efficiency . inter - operator spectrum sharing is envisioned as one of the viable approaches to achieve higher operational bandwidth efficiency and meet the increasing mobile data traffic demand in a timely manner . in the scenario , a limited number of operators share a common resource pool by relying on more flexible and adaptive prioritization policies than is currently possible with dedicated licenses . cognitive radio technologies are effective measures to resolve the sharing conflicts over the under vertical spectrum sharing , where the lessor ( owner ) operator has higher legacy rights over the spectrum than the lessee operator . on the other hand , the co - primary or horizontal spectrum sharing scheme conceptualizes the case where authorized operators possess equal ownership of the spectrum being adopted . however , _ a priori _ agreements should be made on the spectrum usage with regard to the long - term share of an individual operator . the multilateral use of shared resources in the can , for instance , be achieved with channel allocation schemes originally developed for single - operator systems . these schemes are in principle applicable to realize inter - operator spectrum sharing , provided that the operators are willing to exchange information and cooperate honestly . under this requirement , many spectrum sharing algorithms are available in the literature . they differ in the domain where inter - operator interference is handled , i.e. time , frequency , and/or space . the cooperative spectrum sharing schemes require a great deal of network information exchange among the operators , e.g. , interference prices , channel state information , etc . , and/or employing a central entity to decide upon the resource allocation . in , the operator reports the inflicted aggregate interference to the spectrum controller , and on this basis , the controller awards the spectrum pool to the impacted . in , cooperation amongst the operators is realized by broadcasting the spectrum occupancy information , allowing small cells of competitor operators to avoid interference in accessing the spectrum pool . similarly , operators in maintain channel occupancy and spectrum reservation matrices for opportunistic access to the shared pool . although the achievable gains in cooperative schemes are in general high , operators may be reluctant to share proprietary information with their competitors and may also have an incentive to mis - report this information . finally , information exchange may incorporate excessive inter - operator signaling overhead . in this perspective , game - theoretic non - cooperative schemes appear to be a more viable option for sharing spectrum . in these schemes , players make decisions independently ; they may still cooperate with competitors but the cooperation is entirely self - enforcing . in , operators establish cooperation and play non - zero - sum games to share spectrum . however , the choice of utility function , encompassing spectrum pricing , is undesirable as it penalizes increased spectrum usage .
in , the operators list their preferences for partitioning the shared pool and the outcome is established based on a minimum rule . this method may not work well in scenarios with load variations , in which heavily - loaded and lightly - loaded operators will end up with the same number of orthogonal carriers from the pool . in , operators model their interactions via repeated games . a common assumption is that they agree in advance on the spectrum allocation , e.g. , at a in or at an orthogonal allocation in , and this allocation is maintained under the threat of punishment . auction - based sharing techniques have been discussed in , in which operators bid competitively for spectrum access through a spectrum broker . however , operators may be hesitant to adopt market - driven sharing schemes as they may not want to touch their revenue model . in this paper , we consider spectrum sharing in a setting where no information is revealed to other operators . we assume that operators are not willing to monetize spectrum use , keeping spectrum sharing on the level . unlike one - shot games , the proposed scheme also takes into account the history of previous interactions between the operators and entails the benefits of reciprocity . we illustrate that a repeated game can be set up so that both operators achieve better performance in comparison to a static spectrum allocation . unlike the repeated game models proposed in , we do not fix the spectrum allocation but allow a flexible use of the based on the network load and interference conditions . by employing the proposed scheme in a scenario with two operators , we are able to show that under load asymmetry both operators can benefit as compared to a scheme where no spectrum coordination is allowed . the remainder of the paper is organized as follows . in section [ sec : system_model ] , we present the system model . section [ sec : coordination ] formulates the repeated game for inter - operator spectrum sharing and presents the proposed mechanism for negotiating the utilization of the spectrum pool . section [ sec : numericals ] demonstrates performance gains with the proposed scheme and finally section [ sec : conclusions ] concludes the paper with a summary and areas for further work . for simplicity , we concentrate on a spectrum sharing scenario with two small cell operators , operator and operator . each operator has one for dedicated usage . the operators also participate in a and divide it into of equal bandwidth . the proposed spectrum sharing scheme will be used to negotiate the utilization of the in the downlink . we consider a scenario with network load variations . at a particular time instant , the user distribution is modeled via a with a mean equal to users for operator . given the network state , an operator evaluates a network utility function to describe the offered to its users . it is important to remark that operators need not employ the same utility function nor be aware of each other's utility function . for simplicity , we assume that both operators maintain a proportionally fair utility function constructed directly from the user rates .
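as an illustration of this utility , the short python sketch below evaluates the proportionally fair utility , i.e. the sum of the logarithms of the user rates , for one random network state ; the poisson mean and the placeholder rates are entirely illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_utility(user_rates):
    # proportionally fair utility of one operator: sum_k log(r_k)
    return float(np.sum(np.log(np.asarray(user_rates, dtype=float))))

# toy network state: poisson number of users, each with a placeholder rate
n_users = max(int(rng.poisson(lam=10)), 1)      # lam = mean operator load
rates = rng.uniform(1e6, 2e7, size=n_users)     # illustrative rates, bit/s
print(pf_utility(rates))
```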
where is a particular realization of a with mean , is the transmission rate of the -th user of operator on the -th , calculated as where is the time scheduling weight of the -th user scheduled on the -th , is the bandwidth of a , is the downlink user sinr and is the sinr efficiency . we consider downlink transmissions without power control . the downlink received signal power for the -th user on the -th is . also , let us denote by the aggregate interference level incorporating both the interference from the operator's own network and that from the other operator's interfering . then , the downlink user sinr is where is the power per of thermal noise and other interference . note that on the dedicated there is no inter - operator interference . the scheduling weights are determined to maximize the utility . in order to evaluate the effect the opponent operator has on its utility , an operator may ask its users to measure the aggregate interference level they receive from the opponent . this functionality requires that the users are able to distinguish between their own and the other operator's generated interference . note that this kind of functionality does not require any signaling between the two operators . it is assumed that inter - operator interference measurements are ideal . small cell deployments are expected to exhibit changing traffic and interference profiles . small cell deployments of different operators sharing spectrum in the same geographical area can exploit these fluctuations and achieve mutual benefits by regulating the allocation of . for instance , let us consider spectrum sharing between two operators with unbalanced traffic loads over a . a lightly - loaded operator can satisfy its with few and could perhaps stop using some of the from the pool . an operator that is heavily loaded at that time would not suffer from inter - operator interference on the emptied and would be able to meet its too . however , there should be an incentive for the lightly - loaded operator to free up some . without resorting to monetary transactions , we propose to regulate the allocation of by means of _ spectrum usage favors _ asked and granted by the operators . in a , a spectrum usage favor refers to the following action : an operator asks its competitor for permission to start using a certain number of from the pool on an exclusive basis . while negotiating for spectrum , the operators should agree on the default utilization of the spectrum pool . in principle , any mac protocol could be applied in the default state . we consider that both operators utilize all the of the pool . a spectrum usage favor that is exchanged between the operators necessitates a departure from the default state . the time period for which a spectrum favor is valid is agreed between the operators ; e.g. , a favor can be valid on the order of seconds , reflecting the time the network states remain unchanged . after the validity of a favor expires , the utilization of the spectrum pool falls back to the default state and the operators will begin a new round of negotiations . since the operators will share spectrum for a long time , an operator that has taken favors in the past will return them in the future to show a cooperative spirit and maintain the exchange of favors with its competitor . monetized compensations or auction schemes are not considered here . it is a non - trivial task to design such efficient mechanisms for a limited area and a limited time , and to couple operator strategies to the income model of operators .
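the effect of a favor on the rate model above can be illustrated in a few lines ; the numbers are placeholders and the exact placement of the sinr efficiency factor is our assumption , but the sketch shows the mechanism by which vacating a component carrier removes inter - operator interference and strictly increases the achievable rate .

```python
import numpy as np

def user_rate(p_rx, i_own, i_other, noise, bw, w=1.0, eff=1.0):
    # r = w * bw * log2(1 + eff * sinr), sinr = p_rx / (i_own + i_other + noise);
    # the placement of eff is one common convention, assumed for illustration
    sinr = p_rx / (i_own + i_other + noise)
    return w * bw * np.log2(1.0 + eff * sinr)

# with a granted favor the opponent vacates the carrier: i_other -> 0
r_shared = user_rate(1e-9, 2e-12, 5e-12, 1e-13, bw=5e6)
r_favor = user_rate(1e-9, 2e-12, 0.0, 1e-13, bw=5e6)
print(r_shared < r_favor)   # True: the favor strictly increases the rate
```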
realizing spectrum sharing in the form of favors entails the benefits of reciprocity , circumvents monetary - based spectrum sharing and enables the operators to achieve mutual benefits without revealing ran - specific information to competitors and/or other parties . this makes the considered approach similar to peering agreements in the internet . we first consider a one - shot game where the operators and are modeled as myopic players . the game is strategic and non - cooperative . each player's action or strategy , , is to either ask for a favor on ccs , denoted by , grant a favor on ccs , denoted by , or do neither , denoted by . let denote the set of such actions for player . to specify the outcome of the game , we assume that a favor is exchanged only if one player plays and the other plays with ; then the outcome is an exchange of , otherwise no exchange of favors occurs . ( a trade for still occurs ; we leave such a model for future work . ) depending on the outcome , operators draw rewards : ( i ) the reward when a player takes a favor is equal to the utility gain when the interference on is eliminated , ( ii ) the reward when a player grants a favor is equal to the utility loss when stopping to use , and ( iii ) the reward when a player neither asks for nor grants a favor is zero . the gains and losses thus depend on the current internal state of the player in question , and are not known to the opponent . in such a game , a final outcome is a , from which no player can improve its utility by deviating unilaterally , i.e. , for every player and every alternative strategy . it is straightforward to see that the of the formulated one - shot game corresponds to the situation where a player always asks for a favor on to maximize its reward but never grants a favor . as a result , both operators would utilize all from the pool irrespective of the network load and interference profiles . the game formulated here differs from the power control game in in that here the actions are requests concerning the other player's powers , and power control is binary , whereas in , the actions are power allocation profiles across frequency . also , no information about the other operator's state is assumed here . the of the strategic one - shot games discussed here and in , however , are similar ; full usage of the available spectrum is the only option for a rational player . since the operators will share spectrum for a long time , the one - shot game described above will be played repeatedly . in a repeated game , the action of a player at a stage game depends not only on the current rewards but also on the sequence of previous rewards ; see , e.g. , . repeated games , such as the prisoner's dilemma , have been well studied and admit a rich set of equilibrium profiles including various punishment strategies . the setting here is even more challenging , as the game we are interested in is a _ stochastic game _ , meaning that each player's pay - off depends on a random parameter , namely the configuration of the users at that time , and moreover the players have _ imperfect information _ , since they only observe their own user configuration . we thus have a _ bayesian game _ . given this , we focus on a simplified set of stationary threshold policies and characterize an equilibrium among these policies . negotiations of favors on a single were considered in . here , we extend that to a situation with multiple .
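the stationary threshold policy introduced here reduces to a short decision routine per stage game ; the sketch below is schematic , with function and variable names of our own choosing , and mirrors the rule of asking for the largest admissible favor and granting only when the immediate loss is small .

```python
def stage_action(gain, ask_thr, n_max):
    # gain(n): current utility gain from a favor on n carriers;
    # ask_thr[n]: the corresponding ask threshold (indexed 1..n_max).
    # ask for the largest n whose gain clears its threshold, else stand by
    for n in range(n_max, 0, -1):
        if gain(n) >= ask_thr[n]:
            return ("ask", n)
    return ("standby", 0)

def grant_decision(loss, grant_thr, n_asked, already_asked):
    # grant a favor on n_asked carriers only if the immediate utility loss
    # is below the grant threshold and no own request is pending
    return (not already_asked) and loss(n_asked) <= grant_thr[n_asked]
```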
at each stage of the game an operator can compute its utility gain and utility loss from asking and granting favors on for . the of the utility gains when operator gets a favor on is denoted by and , similarly , the of utility losses by . in our system setup , the randomness is only due to the poisson distribution of the operator's own users ( and the users' corresponding channel gains ) . therefore the depend only on the network state of the operator's own network . however , note that if power control and inter - cell interference coordination are employed , these distributions may depend on the state of the opponent operator's network too . at each stage game , the operator first checks whether to ask for a favor on by comparing its immediate utility gain with a threshold . if the utility gain is less than the threshold , the operator can consider asking for a favor on instead , and so forth . we assume that an operator always asks for the largest number of for which its utility gain exceeds the corresponding threshold . as a result , the probability that operator asks for a favor on is equal to the probability that the utility gain from taking a favor on is less than the corresponding thresholds and , also , the utility gain from taking a favor on is higher than the threshold , where , to simplify the analysis , it has been assumed that the distributions of utility gains from taking favors on different numbers of are independent . similarly , the operator grants a favor on upon being asked if its immediate utility loss is smaller than a threshold and it has not already requested a favor . taking into account the fact that an operator cannot ask and grant a favor at the same stage game , the probability to grant a favor on is where it has been assumed that the distributions of utility gains and utility losses are independent . we assume that the networks of the operators are similar and in a symmetric relationship with each other , and we do not assume a discounting of favors . to get a preliminary understanding of steady - state behavior in such a setting , inspired by , we thus assume that , averaged over long times , operators give and take the same amount of equally valuable favors . hence , favors would become a rudimentary ran - level spectrum sharing currency . thus we have where the left - hand side describes the average number of that operator gets a favor on , and the right - hand side the same quantity for operator . an operator can monitor the probabilities of asking and granting of the opponent and set its own decision thresholds to satisfy the constraint . however , there may be multiple combinations of thresholds fulfilling the constraint . we propose to identify the thresholds maximizing an excess utility calculated over the of the one - shot game ( i.e. , both operators simultaneously utilize all the ) . the excess utility for an operator reflects its expected gain from taking a favor penalized by its expected loss from granting a favor . in order to avoid unnecessary complexity in the notation , we show how to set the decision thresholds for operator ; similarly , the decision thresholds for operator can be computed . the excess utility for operator is where and are the average gain and loss in utility on for operator such that . the optimization problem for identifying the decision thresholds is . in order to solve this optimization problem , we construct the lagrangian function and solve the system of first - order conditions .
where is the lagrange multiplier .starting with the partial derivative of the lagrangian in terms of and setting it equal to zero allows computing the value of the lagrange multiplier . setting the partial derivative of the lagrangian with respect to equal to zero , and substituting the value of the lagrange multiplier into the resulting equation gives next , starting from one can determine the threshold as a function of the thresholds finally , setting and using the solution for , we end up with the thresholds that may maximize the lagrangian must jointly satisfy equations and also the constraint .note that the above system of equations does not accept a closed - form solution but it is straightforward to solve numerically .also , from equations ( [ eq : dervg1],[eq : dervgn ] ) one can deduce that . in the appendix, we show that the solution satisfying the first - order conditions and the constraint satisfies .therefore , the proposed method can achieve better performance in comparison to the of the one - shot game .finally , besides the calculation of the lagrangian at the stationary point , we also compute it at the borders .the thresholds , either interior or border , maximizing the lagrangian are selected .in order to assess the performance of the proposed coordination protocol , we consider an indoor deployment scenario in a hall of a single - story building .the hall is a square with a side of m. the are partitioned into two groups as illustrated in fig .[ fig : building ] modeling a spectrum sharing scenario with two operators .the service areas of the operators fully overlap .a user is connected to the of its home network with the highest received signal level at its location .we consider a power law model for distance - based propagation pathloss with attenuation constant and pathloss exponent .the available power budget on a is dbm , the thermal noise power is dbm / hz and the noise figure is db .the sinr efficiency is .the bandwidth of a is mhz .initially , we consider that the consists of two . first , we consider an initialization phase of simulation snapshots ( or equivalently stage games ) . at each stage game ,the user locations are independently generated according to the and the operators calculate and keep track of their utility gains and utility losses from taking and granting favors over one and two .we simulate many different network loads so that the distributions of utility gains and utility losses at the end of the initialization can be seen as the steady state distributions over all possible network states .the different network loads are generated by varying the mean of the used to model the locations of the users . in fig .[ fig : gainlossdistr ] we depict the distributions of utility gains and losses for an operator at the end of the initialization .next , we evaluate the performance in terms of the user rate distribution over a finite time horizon of stage games following the initialization phase .initially , the values of the thresholds are set arbitrarily equal to , , and for both operators .every stage games , the operators probabilities for asking and granting favors are recomputed considering all stage games . then, the decision thresholds are updated by solving the optimization problem and so forth .given the allocation of at each stage of the game , the operators compute and keep track of the user rates .recall that granted favors are valid only for a particular stage game . 
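returning to the threshold optimization above: since the first-order conditions admit no closed form, they can be solved numerically together with the reciprocity constraint. the sketch below uses placeholder excess-utility and constraint functions (the paper's exact expressions depend on the estimated gain/loss distributions) and solves the stationarity system with scipy.

```python
# Sketch: solve grad_theta L = 0 together with the reciprocity
# constraint for (thresholds, Lagrange multiplier). excess_utility and
# constraint are placeholders standing in for the paper's expressions.
import numpy as np
from scipy.optimize import root

def excess_utility(theta):            # placeholder, concave in theta
    return -np.sum((theta - 1.0) ** 2)

def constraint(theta):                # placeholder reciprocity balance
    return np.sum(theta) - 3.0

def kkt_residual(z, h=1e-6):
    theta, lam = z[:-1], z[-1]
    L = lambda t: excess_utility(t) + lam * constraint(t)
    grad = np.array([(L(theta + h * e) - L(theta - h * e)) / (2 * h)
                     for e in np.eye(theta.size)])
    return np.append(grad, constraint(theta))

sol = root(kkt_residual, x0=np.ones(4))   # 3 thresholds + 1 multiplier
print(sol.x, sol.success)                 # interior candidate; as in the
                                          # text, compare against the borders
```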
at the end of each stage game, the allocation returns to the default state, i.e. both operators utilize all the component carriers of the pool. the performance of the proposed scheme is assessed in comparison with the nash equilibrium of the one-shot game, which is a static spectrum allocation scheme where both operators utilize all the component carriers of the pool. first, we consider a scenario with network load asymmetry between the operators. the mean numbers of users for the first stage games are and . in the second half of the simulation, the mean values are reversed. in fig. [fig:opra_asy], the rate distribution for the users of one operator is depicted over the full course of the simulation. in the first stage games, that operator mostly copes with fewer or no component carriers due to its lower load, and it grants more favors than its competitor. in the second half of the simulation, the competitor returns the favors. overall, the operator offers better quality of service in comparison with that attained without any coordination, e.g. it improves its mean user rate by approximately %. the user rate distribution curves for the other operator follow the same trend and are not depicted. fig. [fig:opra_asy_4scc] depicts the rate distribution curves for the users of the same operator when the pool consists of four component carriers. one can see that the mean user rate increases by approximately % while the user rate at the % point of the distribution increases by more than %. finally, we show that gains due to coordination can be achieved even in cases with equal mean network loads for the operators. in that case, the proposed protocol takes advantage of the instantaneous network load variations. in fig. [fig:opra_sym] we see that an operator improves its mean user rate by approximately % for a mean number of users and four component carriers in the pool. in this paper, we considered co-primary spectrum sharing between two small cell operators deployed in the same geographical area. we considered a scenario where the operators have equal access rights on a spectrum pool, and we proposed a protocol for coordinating the utilization of component carriers from the pool. according to it, an operator may ask for spectrum usage favors from its competitor. a spectrum usage favor means that the competitor would stop using some component carriers from the pool. an operator that has few users to serve could perhaps cope with fewer component carriers and grant the favor. operators that have taken favors in the past are likely to return these favors in the future, and reciprocity is maintained. we formulated the interaction between the operators as a strategic, non-cooperative repeated game. since it is hard to analyze the proposed game and find its nash equilibrium, we resorted to a heuristic strategy that uses a threshold-based test to decide whether to ask or grant a favor at each stage game. the decision thresholds depend on the current network realization and also on the history of previous interactions with the competitor. we proved that the proposed strategy is strictly better than the case without coordination between the operators. we illustrated that in an indoor deployment scenario, two operators are both able to offer higher user rates as compared to the case with no coordination, without revealing any operator-specific information to each other. our results show that a rational operator, knowing that the opponent is rational and has a network with similar characteristics, has an incentive to be cooperative.
in future work, operators with dissimilar load and network characteristics will be addressed, as well as models where the statistics of the underlying poisson process change to reflect e.g. variations in load due to time-of-day. using the fact that the decision thresholds over the distribution of utility losses are related as , one can rewrite equation as . substituting equation into equation , the excess utility can be read as . according to the definition of the probabilities of granting a favor from equation , we note that the last term in equation is equal to the right-hand side of the constraint in equation scaled by . after replacing the last term of equation by the left-hand side of the constraint in equation , we end up with . substituting the probabilities of asking a favor from equation into the last term of equation and replacing back the decision thresholds , the first and the last terms of equation can be factorized together, resulting in , which is always positive since . this work has been performed in the framework of the fp7 project ict 317669 metis, which is partly funded by the european union. also, this work was supported in part by the academy of finland funded project smaciw under grant no. .
we consider two small cell operators deployed in the same geographical area, sharing spectrum resources from a common pool. a method is investigated to coordinate the utilization of the spectrum pool without monetary transactions and without revealing operator-specific information to other parties. for this, we construct a protocol based on asking and receiving _spectrum usage favors_ by the operators, and keeping a book of the favors. a spectrum usage favor is exchanged between the operators if one is asking for permission to use some of the resources from the pool on an exclusive basis, and the other is willing to accept that. as a result, the proposed method does not force an operator to take action. an operator with a high load may take spectrum usage favors from an operator that has few users to serve, and it is likely to return these favors in the future to show a cooperative spirit and maintain reciprocity. we formulate the interactions between the operators as a repeated game and determine rules to decide whether to ask or grant a favor at each stage game. we illustrate that under frequent network load variations, which are expected to be prominent in small cell deployments, both operators can attain higher user rates as compared to the case of no coordination of the resource utilization.

acronyms: lsp: limited spectrum pool; cc: component carrier; bs: base station; pdf: probability distribution function; ne: nash equilibrium; ran: radio access network; ppp: poisson point process; qos: quality-of-service; mno: mobile network operator.

keywords: co-primary spectrum sharing, repeated games, spectrum pooling.
non - dominated sorting is a combinatorial problem that is fundamental in multiobjective optimization , which is ubiquitous is scientific and engineering contexts .the sorting can be viewed as arranging a finite set of points in euclidean space into layers according to the componentwise partial order .the layers are obtained by repeated removal of the set of minimal elements .more formally , given a set of points equipped with the componentwise partial order for .] , the first layer , often called the first pareto front and denoted , is the set of minimal elements in .the second pareto front is the set of minimal elements in , and in general the pareto front is given by in the context of multiobjective optimization , the coordinates of each point in are the values of the objective functions evaluated on a given feasible solution . in this way, each point in corresponds to a feasible solution and the layers provide an effective ranking of all feasible solutions with respect to the given optimization problem .rankings obtained in this way are at the heart of genetic and evolutionary algorithms for multiobjective optimization , which have proven to be valuable tools for finding solutions numerically .figure [ fig : example - fronts ] gives a visual illustration of pareto fronts for randomly generated points .it is important to note that non - dominated sorting is equivalent to the longest chain problem in combinatorics , which has a long history beginning with ulam s famous problem of finding the length of a longest increasing subsequence in a sequence of numbers ( see and the references therein ) .the longest chain problem is then intimately related to problems in combinatorics and graph theory , materials science , and molecular biology . to see this connection ,let denote the length of a longest chain . ] in consisting of points less than or equal to with respect to .if all points in are distinct , then a point is a member of if and only if . by peeling off and making the same argument, we see that is a member of if and only if . in general , for any we have this is a fundamental observation .it says that studying the shapes of the pareto fronts is equivalent to studying the longest chain function .the longest chain problem has well - understood asymptotics as . in this context , we assume that where are _ i.i.d . _ random variables in and let denote the length of a longest chain in .the seminal work on the problem was done by hammersley , who studied the problem for _ i.i.d ._ uniform on ^ 2 ] with density function ^ 2 \to { \mathbb{r}} ] nondecreasing and right continuous . in , we studied the longest chain problem for _ i.i.d . _ on with density function . under general assumptions on , we showed that in almost surely , where is the viscosity solution of the hamilton jacobi equation here and .in this paper we study a fast numerical scheme for ( p ) , first proposed in , and prove convergence of this scheme .we then show how the scheme can be used to design a fast approximate non - dominated sorting algorithm , which requires access to only a fraction of the datapoints , and we evaluate the sorting accuracy of the new algorithm on both synthetic and real data . a fast approximate algorithm for non - dominated sorting has the potential to be a valuable tool for multiobjective optimization , especially in evolutionary algorithms which require frequent non - dominated sorting .there are also potential applications in polynuclear growth of crystals in materials science . 
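for reference, the peeling definition above translates directly into code; the quadratic-time sketch below is for illustration only (faster algorithms are discussed later in the paper), and with distinct points the resulting rank of a point equals the longest-chain value at that point. the text resumes with the growth-model application just after the sketch.

```python
# Non-dominated sorting by repeatedly removing minimal elements under
# the componentwise partial order. O(n^2) work per front; a sketch,
# not the fast algorithm used in the paper's experiments.
import numpy as np

def pareto_fronts(X):
    """X: (n, d) array; returns index arrays F_1, F_2, ..."""
    remaining = np.arange(len(X))
    fronts = []
    while remaining.size:
        pts = X[remaining]
        dominated = np.zeros(remaining.size, dtype=bool)
        for i in range(remaining.size):
            le = (pts <= pts[i]).all(axis=1)      # y <= x componentwise
            lt = (pts <  pts[i]).any(axis=1)      # strictly smaller somewhere
            dominated[i] = (le & lt).any()
        fronts.append(remaining[~dominated])
        remaining = remaining[dominated]
    return fronts

X = np.random.rand(200, 2)
ranks = np.empty(len(X), dtype=int)
for k, front in enumerate(pareto_fronts(X), start=1):
    ranks[front] = k          # equals the longest-chain function at X[i]
```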
here , the scheme for ( p ) could be used to simulate polynuclear growth in the presence of a macroscopically inhomogeneous growth rate .this paper is organized as follows . in section[ sec : convergence ] we prove that the numerical solutions converge to the viscosity solution of ( p ) .we also prove a regularity result for the numerical solutions ( see lemma [ lem : holder ] ) and other important properties . in section [ sec : num ] we demonstrate the numerical scheme on several density functions , and in section [ sec : fast ] we propose a fast algorithm for approximate non - dominated sorting that is based on numerical solving ( p ) .let us first fix some notation . given we write if and .we write when for all . for , and will retain their usual definitions .for we define = \{z \in { \mathbb{r}}^d \ , : \ , x \leqq z \leqq y\ } , \ \ ( x , y ] = \{z \in { \mathbb{r}}^d \ , : \ , x <z \leqq y\},\ ] ] and make similar definitions for and . for any and , there exists unique and such that . we will denote by so that .we also denote and . for , we denote by ] . for this mappingis given explicitly by we say a function is pareto - monotone if we now recall the numerical scheme from .let . for a given ,the domain of dependence for ( p ) is .this can be seen from the connection to non - dominated sorting and the longest chain problem .it is thus natural to consider a scheme for ( p ) based on backward difference quotients , yielding where is the numerical solution of ( p ) and are the standard basis vectors in . under reasonable hypotheses on , described in section [ sec : convergence - proof ] , there exists a unique pareto - monotone viscosity solution of ( p ) .as we wish to numerically approximate this pareto - monotone solution we may assume that for all .given that is non - negative , for any , there is a unique with satisfying .hence the numerical solution can be computed by visiting each grid point exactly once via any sweeping pattern that respects the partial order .the scheme therefore has linear complexity in the number of gridpoints . at each grid point, the scheme can be solved numerically by either a binary search and/or newton s method restricted to the interval .\ ] ] in the case of , we can solve the scheme explicitly via the quadratic formula now extend to a function by setting .defining we see that is a pareto - monotone solution of the discrete scheme where is defined by here , is the space of functions . in the next sectionwe will study properties of solutions of ( s ) .in this section we prove that the numerical solutions defined by ( s ) converge uniformly to the viscosity solution of ( p ) . as in , we place the following assumption on : * there exists an open and bounded set with lipschitz boundary such that is lipschitz and .it is worthwhile to take a moment to motivate the hypothesis ( h ) .consider the following multi - objective optimization problem where with for all , and is the set of feasible solutions .this formulation includes many types of constrained optimization problems , where the constraints are implicitly encoded into .if are feasible solutions in , then these solutions are ranked , with respect to the optimization problem , by performing non - dominated sorting on .thus the domain of is given by .supposing that are , say , uniformly distributed on , then the induced density of on will be nonzero on and identically zero on .thus , the constraint that feasible solutions must lie in directly induces a discontinuity in along . 
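in dimension d = 2 the single-grid-point update is exactly the quadratic-formula solve mentioned above, and one monotone sweep over the grid computes the whole solution in linear time. the sketch below assumes the scheme takes the product form over backward differences with u = 0 on the lower boundary, which is consistent with the description in the text.

```python
# Sweeping solver for the scheme (S) in d = 2: at each grid point solve
# (u - a)(u - b) = h^2 f with a = u(x - h e1), b = u(x - h e2), taking
# the root >= max(a, b). Visiting points in lexicographic order respects
# the partial order, so one pass suffices.
import numpy as np

def solve_scheme_2d(f, h):
    m = f.shape[0]
    u = np.zeros((m + 1, m + 1))       # u = 0 on the lower boundary
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            a, b = u[i - 1, j], u[i, j - 1]
            u[i, j] = 0.5 * (a + b) + np.sqrt(0.25 * (a - b) ** 2
                                              + h * h * f[i - 1, j - 1])
    return u

m = 128
u = solve_scheme_2d(np.ones((m, m)), 1.0 / m)
# For f = 1 the Pareto-monotone solution of u_x1 u_x2 = 1 on [0,1]^2 is
# 2*sqrt(x1*x2), so u[-1, -1] should be close to 2.
print(u[-1, -1])
```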
in showed that , under hypothesis ( h ) , there exists a unique pareto - monotone viscosity solution of ( p ) satisfying the additional boundary condition the boundary condition is natural for this problem . indeed , since , there are almost surely no random variables drawn outside of .hence , for any we can write ^d \ , : \ , y\leqq x } u_n(y).\ ] ] since is pareto - monotone , the maximum above is attained at , and hence . for completeness ,let us now give a brief outline of the proof of uniqueness for ( p ) .for more details , we refer the reader to . the proof is based on the auxiliary function technique , now standard in the theory of viscosity solutions .however , the technique must be modified to account for the fact that is possibly discontinuous on , and hence does not possess the required uniform continuity .a commonly employed technique is to modify the auxiliary function so that only a type of one - sided uniform continuity is required of .this allows to , for example , have a discontinuity along a lipschitz curve , provided the jump in is locally in the same direction ( see for more details ) .we can not directly use these results because they require coercivity or uniform continuity of the hamiltonian and/or lipschitzness of solutions none of which hold for ( p ) . our technique for proving uniqueness for ( p ) employs instead an important property of viscosity solutions of ( p)namely that for any , is a viscosity subsolution of ( p ) .this property , called _ truncatability _ in , follows immediately from the variational principle this allows us to prove a comparison principle with no additional assumptions on the hamiltonian .a general framework for proving convergence of a finite - difference scheme to the viscosity solution of a non - linear second order pde was developed by barles and souganidis .their framework requires that the scheme be stable , monotone , consistent , and that the pde satisfy a _ strong uniqueness property_ .the monotonicity condition is equivalent to ellipticity for second order equations , and plays a similar role for first order equations , enabling one to prove maximum and/or comparison principles for the discrete scheme .the strong uniqueness property refers to a comparison principle that holds for semicontinuous viscosity sub- and supersolutions . the numerical scheme ( s )is easily seen to be consistent ; this simply means that for all .the scheme is stable if the numerical solutions are uniformly bounded in , independent of .it is not immediately obvious that ( s ) is stable ; stability follows from the discrete comparison principle for ( s ) ( lemma [ lem : discrete - comp ] ) and is proved in lemma [ lem : holder ] .the monotonicity property requires the following : it is straightforward to verify that ( s ) is monotone when restricted to pareto - monotone .this is sufficient since we are only interested in the pareto - monotone viscosity solution of ( p ) .all that is left is to establish a strong uniqueness result for ( p ) .unfortunately such a result is not available under the hypothesis ( h ) .since may be discontinuous along , we can only establish a comparison principle for continuous viscosity sub- and supersolutions ( see ( * ? ? 
?* theorem 4 ) ) .one way to rectify this situation is to break the proof into two steps .first prove convergence of the numerical scheme for lipschitz on .it is straightforward in this case to establish a strong uniqueness result for ( p ) .second , extend the result to satisfying ( h ) by an approximation argument using inf and sup convolutions .although this approach is fruitful , we take an alternative approach as it yields an interesting regularity property for the numerical solutions . in particular , in lemma [ lem : holder ] we establish approximate hlder regularity of of the form latexmath:[\[\label{eq : holder - est } verify in appendix a , the approximate hlder estimate along with the stability of ( s ) allows us to apply the arzel - ascoli theorem , with a slightly modified proof , to the sequence .this allows us to substitute the ordinary uniqueness result from in place of strong uniqueness .we first prove a discrete comparison principle for the scheme ( s ) .this comparison principle is essential in proving stability of ( s ) and the approximate hlder regularity result in lemma [ lem : holder ] .for the remainder of this section , we fix .[ lem : discrete - comp ] let and suppose are pareto - monotone and satisfy .\ ] ] then on ] .suppose that } ( u - v ) > 0 ] , we must have ] and such that since , we have on ] we have \setminus \gamma_h.\ ] ] for \cup \gamma_h ] .let and ^d ] . then for some we have , and hence .we therefore have ^d ) } ( \psi(x - b^k ) - \psi(x - b^k - h e_i))\bigg ) \\ & { } \geq{}s(h , x,{\widehat}{u } ) + \|s(h,\cdot , u)\|_{l^\infty((h , r]^d ) } s(h , x,\psi(\cdot - b^k))\\ & { } \hspace{-2.4mm}\stackrel{\eqref{eq : h - super}}{\geq } { } \hspace{-2.4mm}s(h , x,{\widehat}{u } ) + \|s(h,\cdot , u)\|_{l^\infty((h , r]^d ) } \\ & { } \geq { } s(h , x , u).\end{aligned}\ ] ] suppose now that ] we have and hence .since on ^d ] , which yields ^d)}^\frac{1}{d } \sum_{i=1}^d \psi(y_0 - b^i ) \notag \\ & { } \leq { } dr^\frac{d-1}{d}\|s(h,\cdot , u)\|_{l^\infty((h , r]^d)}^\frac{1}{d } \sum_{i=1}^d ( y_{0,i } - x_{0,i } + h)^\frac{1}{d}\notag \\ & { } \leq { } d^2r^\frac{d-1}{d}\|s(h,\cdot , u)\|_{l^\infty((h , r]^d)}^\frac{1}{d } ( |x_0 - y_0|^\frac{1}{d } + h^\frac{1}{d}).\end{aligned}\ ] ] noting that we have , which completes the proof for the case that .suppose now that ^d ] .therefore , by proposition [ prop : boundary_condition ] , we have that satisfies . combining this with lemma [ lem : holder ]we have for all .similarly , combining with lemma [ lem : holder ] we have latexmath:[\[\label{eq : holder } for every . the estimates in and show uniform boundedness , and a type of equicontinuity , respectively , for the sequence .by an argument similar to the proof of the arzel - ascoli theorem ( see the appendix ) , there exists a subsequence and such that uniformly on compact sets in . by, we actually have uniformly on .since the scheme ( s ) is monotone and consistent , it is a standard result that is a viscosity solution of ( p ) .note that is pareto - monotone , on , and satisfies .since uniformly , it follows that is pareto - monotone , on , and satisfies . by uniqueness for ( p ) ( * ? ? ?* theorem 5 ) we have . since we can apply the same argument to any subsequence of , it follows that uniformly on . 
in section [ sec : num ] , we observe that the numerical scheme provides a fairly consistent underestimate of the exact solution of ( p ) .the following lemma shows that this is indeed the case whenever the solution of ( p ) is concave .[ lem : conv - below ] let be nonnegative and satisfy ( h ) .let be the unique pareto - monotone viscosity solution of ( p ) satisfying .for every let be the unique pareto - monotone solution of ( s ) . if is concave on then for every .fix . since is concave , it is differentiable almost everywhere .is pareto - monotone also implies differentiability almost everywhere .] let be a point at which is differentiable and is continuous .since is concave we have since is a viscosity solution of ( p ) and is continuous at we have since is continuous , we see that for all ] and versus for the density ^d}(x) ] .the results for the other densities and are similar .we demonstrated the convergence rates on due to the fact that it has many important features ; namely , it is discontinuous , yields non - convex pareto - fronts , and induces a shock in the viscosity solution of ( p ) .we demonstrate now how the numerical scheme ( s ) can be used for fast approximate non - dominated sorting , and give a real - world application to anomaly detection in section [ sec : real ] .we assume here that the given data are drawn _ i.i.d . _ from a reasonably smooth density function , and that is large enough so that is well approximated by . in this regime , it is reasonable to consider an approximate non - dominated sorting algorithm based on numerically solving ( p ) .a natural algorithm is as follows .since the density is rarely known in practice , the first step is to form an estimate of using the samples . in the large sample regime , this can be done very accurately using , for example , a kernel density estimator or a -nearest neighbor estimator . to keep the algorithm as simple as possible , we opt for a simple histogram to estimate , aligned with the same grid used for numerically solving ( p ) .when is large , the estimation of can be done with only a random subset of of cardinality , which avoids considering all samples .the second step is to use the numerical scheme ( s ) to solve ( p ) on a fixed grid of size , using the estimated density on the right hand side of ( p ) .this yields an estimate of , and the final step is to evaluate at each sample to yield approximate pareto ranks for each point .the final evaluation step can be viewed as an interpolation ; we know the values of on each grid point and wish to evaluate at an arbitrary point .a simple linear interpolation is sufficient for this step .however , in the spirit of utilizing the pde ( p ) , we solve the scheme ( s ) at each point using the values of at neighboring grid points , i.e. , given for all , and ] via ( s ) .4 . evaluate for via interpolation . for simplicity of discussion, we have assumed that are drawn from ^d ] . if this is indeed the case , then in light of we have ^d ) } \leq c\left(k^{-\frac{1}{2d}}h^{-1 } + h^\frac{1}{d}\right),\ ] ] with high probability .the right side of the inequality is composed of two competing additive terms . 
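before unpacking the two error terms, steps 1 to 4 of the algorithm above can be sketched compactly, reusing solve_scheme_2d from the earlier sketch; for simplicity it uses plain bilinear interpolation in step 4 (the text instead re-solves the scheme at each sample) and assumes d = 2 with samples in the unit box.

```python
# Fast approximate non-dominated sorting: histogram density estimate on
# a k-point subsample, solve the scheme on an m x m grid, interpolate.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def approx_ranks(X, m=64, k=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape                                 # d = 2 assumed here
    sub = X[rng.choice(n, size=min(k, n), replace=False)]
    hist, _ = np.histogramdd(sub, bins=m, range=[(0.0, 1.0)] * d)
    f = hist * (m ** d) / len(sub)                 # normalized density estimate
    u = solve_scheme_2d(f, 1.0 / m)                # (m+1, m+1) grid values
    grid = np.linspace(0.0, 1.0, m + 1)
    interp = RegularGridInterpolator((grid, grid), u)
    return interp(X)                               # approximate Pareto depth

X = np.random.rand(10 ** 6, 2)
u_hat = approx_ranks(X)        # the grid solve is sublinear in n; only the
                               # final ranking pass touches every sample
```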
the first term captures the effect of random errors ( variance ) due to an insufficient number of samples .the second term captures the effect of non - random errors ( bias ) due to insufficient resolution of the proposed numerical scheme ( s ) .this decomposition into random and non - random errors is analogous to the mean integrated squared error decomposition in the theory of non - parametric regression and image reconstruction .similarly to we can use the bound in to obtain rules of thumb on how to choose and .for example , we may first choose some value for , and then choose so as to equate the two competing terms in .this yields and becomes ^d ) } \leq ck^{-\frac{1}{2d(d+1 ) } } = ch^\frac{1}{d},\ ] ] with high probability .notice that steps 1 - 3 in algorithm [ alg : ndom ] , i.e. , computing , require operations . if we choose the equalizing value , then we find that computing has complexity .thus algorithm [ alg : ndom ] is sublinear in the following sense . given , we can choose large enough so that ^d ) } \leq \frac{{\varepsilon}}{2c_d},\ ] ] with high probability .the sorting accuracy of using in place of is then given by with high probability . by the stochastic convergence , and the rates presented in section [ sec : pde - rates ], there exists such that for all we have with high probability .thus , for any there exists and such that is an approximation of for all , and can be computed in constant time with respect to .we emphasize that the sublinear nature of the algorithm lies in the computation of .ranking all samples , i.e. , evaluating at each of , and computing the error in of course requires operations . in practice , it is often the case that one need not rank all samples ( e.g. , in a streaming application ) , and in such cases the entire algorithm is constant or sublinear in in the sense described above .we evaluated our proposed algorithm in dimension for a uniform density and a mixture of gaussians given by , where each is a multivariate gaussian density with covariance matrix and mean .we write the covariance matrix in the form , where denotes a rotation matrix , and , are the eigenvalues .the values for and are given in table [ tab : param ] , and the density is illustrated in figure [ fig : gauss - den ] ..parameter values for mixture of gaussians density [ cols="<,<,<,<,<",options="header " , ] it is important to evaluate the accuracy of the approximate sorting obtained by algorithm [ alg : ndom ] . in practice ,the numerical ranks assigned to each point are largely irrelevant , provided the relative orderings between samples are correct . hence a natural accuracy measure for a given rankingis the fraction of pairs that are ordered correctly . recalling that the true pareto rank is given by , this can be expressed as where if and otherwise .it turns out that the accuracy scores for our algorithm are often very close to 1 . in order to make the plots easier to interpret visually ,we have chosen to plot instead of accuracy in _ all _ plots .unfortunately , the complexity of computing the accuracy score via is , which is intractable for even moderate values of .we note however that is , at least formally , a monte - carlo approximation of hence it is natural to use a truncated monte - carlo approximation to estimate .this is done by selecting pairs at random and computing the complexity of the monte - carlo approximation is . 
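the truncated monte-carlo estimate of the accuracy score is a few lines; one reading of the (partly garbled) definition is used below, namely that a random pair counts as correct when the approximate ranks order it the same way as the true pareto depths.

```python
# Truncated Monte-Carlo accuracy: sample m random pairs and count the
# fraction whose relative order agrees between true and approximate
# ranks. Cost O(m) instead of O(n^2).
import numpy as np

def mc_accuracy(true_rank, approx_rank, m=10 ** 6, seed=0):
    rng = np.random.default_rng(seed)
    n = len(true_rank)
    i = rng.integers(0, n, size=m)
    j = rng.integers(0, n, size=m)
    agree = (np.sign(true_rank[i] - true_rank[j])
             == np.sign(approx_rank[i] - approx_rank[j]))
    return agree.mean()
```

repeating the estimate several times, as described next, yields the means and confidence intervals reported in the plots.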
in all plots in the paper , we computed the monte - carlo approximation times and plotted means and error bars corresponding to a confidence interval . in all of the figures ,the confidence intervals are sufficiently small so that they are contained within the data point itself .we can see in figure [ fig : acc ] that we can achieve excellent accuracy while maintaining a fixed grid and subsample size as a function of .we also see that , as expected , the accuracy increases when one uses more grid points for solving the pde and/or more subsamples for estimating the density .we also see that the algorithm works better on uniformly distributed samples than on the mixture of gaussians .indeed , it is quite natural to expect the density estimation and numerical scheme to be less accurate when changes rapidly . we compared the performance of our algorithm against the fast two dimensional non - dominated sorting algorithm presented in , which takes operations to sort points .the code for both algorithms was written in c++ and was compiled on the same architecture with the same compiler optimization flags .figure [ fig : cputime ] shows a comparison of the cpu time used by each algorithm . for our fast approximate sorting , we show the cpu time required to solve the pde ( steps 1 - 3 in algorithm [ alg : ndom ] ) separately from the cpu time required to execute all of algorithm [ alg : ndom ] , since the former is sublinear in .it is also interesting to consider the relationship between the grid size and the number of subsamples . in figure[ fig : grid ] , we show accuracy versus grid size for and subsamples for non - dominated sorting of points . notice that for subsamples , it is not beneficial to use a finer grid than approximately .this is quite natural in light of the error estimate on algorithm [ alg : ndom ] .there are certainly other ways one may think of to perform fast approximate sorting without invoking the pde ( p ) .one natural idea would be to perform non - dominated sorting on a random subset of , and then rank all points via some form of interpolation .we will call such an algorithm _ subset ranking _ ( in contrast to the pde - based ranking we have proposed ) .although such an approach is quite intuitive , it is important to note that there is , at present , no theoretical justification for such an approach .nonetheless , it is important to compare the performance of our algorithm against such an algorithm .let us describe how one might implement a subset ranking algorithm .as described above , the first step is to select a random subset of size from .let us call the subset .we then apply non - dominated sorting to , which generates pareto rankings for each .the final step is to rank via interpolation .there are many ways one might approach this . in similar spirit to our pde - based ranking ( algorithm 1 ) , we use grid interpolation , using the same grid size as used to solve the pde .we compute a ranking at each grid point by averaging the ranks of all samples from that fall inside the corresponding grid cell .the ranking of an arbitrary sample is then computed by linear interpolation using the ranks of neighboring grid points . 
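the subset-ranking baseline just described can be sketched as follows (its grid-averaging step is recapped in the paragraph after the sketch); how empty grid cells should be filled is not specified in the text, so the sketch simply leaves them at zero and relies on interpolation elsewhere.

```python
# Subset ranking: non-dominated sort a size-k subset, average the subset
# ranks within each cell of an m x m grid, then rank every sample by
# bilinear interpolation of the cell averages. Uses pareto_fronts from
# the earlier sketch.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def subset_ranking(X, m=64, k=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    S = X[rng.choice(n, size=min(k, n), replace=False)]
    ranks = np.empty(len(S))
    for r, front in enumerate(pareto_fronts(S), start=1):
        ranks[front] = r
    cells = np.minimum((S * m).astype(int), m - 1)
    total = np.zeros((m, m))
    count = np.zeros((m, m))
    np.add.at(total, (cells[:, 0], cells[:, 1]), ranks)
    np.add.at(count, (cells[:, 0], cells[:, 1]), 1.0)
    avg = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    centers = (np.arange(m) + 0.5) / m
    interp = RegularGridInterpolator((centers, centers), avg,
                                     bounds_error=False, fill_value=None)
    return interp(X)
```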
in this way ,the rank of is an average of the ranks of nearby samples from , and there is a grid size parameter which allows a meaningful comparison with pde - based ranking ( algorithm 1 ) .figure [ fig : subsample ] shows the accuracy scores for pde - based ranking ( algorithm 1 ) and subset ranking of samples drawn from the uniform and mixture of gaussians distributions .a grid size of was used for both algorithms , and we varied the number of subsamples from to . notice a consistent accuracy improvement when using pde - based ranking versus subset ranking , when the number of subsamples is significantly less than .it is somewhat surprising to note that subset ranking has much better than expected performance .as mentioned previously , to our knowledge there is no theoretical justification for such a performance when is small .we now demonstrate algorithm [ alg : ndom ] on a large scale real data application of anomaly detection .the data consists of thousands of pedestrian trajectories , captured from an overhead camera , and the goal is to differentiate nominal from anomalous pedestrian behavior in an unsupervised setting .the data is part of the edinburgh informatics forum pedestrian database and was captured in the main building of the school of informatics at the university of edinburgh .figure [ fig : traj ] shows 100 of the over 100,000 trajectories captured from the overhead camera .the approach to anomaly detection employed in utilizes multiple criteria to measure the dissimilarity between trajectories , and combines the information using a pareto - front method , and in particular , non - dominated sorting .the database consists of a collection of trajectories , where , and the criteria used in are a walking speed dissimilarity , and a trajectory shape dissimilarity . given two trajectories \to [ 0,1]^2 $ ] , the walking speed dissimilarity is the distance between velocity histograms of each trajectory , and the trajectory shape dissimilarity is the distance between the trajectories themselves , i.e. , .there is then a pareto point for every pair of trajectories , yielding pareto points .figure [ fig : points ] shows an example of 50000 pareto points and figure [ fig : fronts ] shows the respective pareto fronts . in , only 1666 trajectories from one day were used , due to the computational complexity of computing the dissimilarities and non - dominated sorting .the anomaly detection algorithm from performs non - dominated sorting on the pareto points , and uses this sorting to define an anomaly score for every trajectory .let and let denote the longest chain function corresponding to this non - dominated sorting .the anomaly score for a particular trajectory is defined as and trajectories with an anomaly score higher than a predefined threshold are deemed anomalous . using algorithm [ alg : ndom ] , we can approximate using only a small fraction of the pareto points , thus alleviating the computational burden of computing all pairwise dissimilarities .figure [ fig : acc - real ] shows the accuracy scores for algorithm [ alg : ndom ] and subset ranking versus the number of subsamples used in each algorithm . due to the memory requirements for non - dominated sorting , we can not sort datasets significantly larger than than points .although there is no such limitation on algorithm 1 , it is important to have a ground truth sorting to compare against . therefore we have used only out of trajectories , yielding approximately pareto points . 
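the exact anomaly-score formula is lost in this extraction; as a stand-in, the sketch below scores each trajectory by the mean approximate pareto depth of the dyads it participates in, and flags trajectories whose score exceeds a chosen threshold. the aggregation rule here is an assumption, not the paper's definition.

```python
# Hypothetical Pareto-depth anomaly score: average the (approximate)
# depths u_hat of all dissimilarity dyads a trajectory belongs to.
import numpy as np

def anomaly_scores(u_hat, pairs, n_traj):
    """u_hat: depth per dyad; pairs: (n_pairs, 2) trajectory indices."""
    score = np.zeros(n_traj)
    cnt = np.zeros(n_traj)
    for col in (0, 1):
        np.add.at(score, pairs[:, col], u_hat)
        np.add.at(cnt, pairs[:, col], 1.0)
    return score / cnt                # every trajectory occurs in n-1 dyads

# flagged = anomaly_scores(u_hat, pairs, n) > threshold  # user-chosen threshold
```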
for both algorithms ,a grid was used for solving the pde and interpolation . notice the accuracy scores are similar to those obtained for the test data in figure [ fig : acc ] .this is an intriguing observation in light of the fact that are _ not _ _ i.i.d ._ , since they are elements of a euclidean dissimilarity matrix .we have provided theory that demonstrates that , when are _ i.i.d ._ in with a nicely behaved density function , the numerical scheme ( s ) for ( p ) can be utilized to perform fast approximate non - dominated sorting with a high degree of accuracy .we have also shown that in a real world example with non-_i.i.d ._ data , the scheme ( s ) still obtains excellent sorting accuracy .we expect the same algorithm to be useful in dimensions and as well , but of course the complexity of solving ( p ) on a grid increases exponentially fast in . in higher dimensions , one could explore other numerical techniques for solving ( p ) which do not utilize a fixed grid . at present , there is also no good algorithm for non - dominated sorting in high dimensions .the fastest known algorithm is , which becomes intractable when and are large .this algorithm has the potential to be particularly useful in the context of big data streaming problems , where it would be important to be able to construct an approximation of the pareto depth function without visiting all the datapoints , as they may be arriving in a data stream and it may be impossible to keep a history of all samples .in such a setting , one could slightly modify algorithm [ alg : ndom ] so that upon receiving a new sample , the estimate is updated , and every so often the scheme ( s ) is applied to recompute the estimate of .there are certainly many situations in practice where the samples are not _ i.i.d ._ , or the density is not nicely behaved . in these cases , there is no reason to expect our algorithm to have much success , and hence we make no claim of universal applicability .however , there are many cases of practical interest where these assumptions are valid , and hence this algorithm can be used to perform fast non - dominated sorting in these cases . furthermore , as we have demonstrated in section [ sec : real ] , there are situations in practice where the _ i.i.d . _ assumption is violated , yet our proposed algorithm maintains excellent accuracy and performance .we proposed a simple _ subset ranking _ algorithm based on sorting a small subset of size and then performing interpolation to rank all samples .although there is currently no theoretical basis for such an algorithm , we showed that subset ranking achieves surprisingly high accuracy scores and is only narrowly outperformed by our proposed pde - based ranking .the simplicity of subset ranking makes it particularly appealing , but more research is needed to prove that it will always achieve such high accuracy scores for moderate values of .we should note that there are many obvious ways to improve our algorithm .histogram approximation to probability densities is quite literally the most basic density estimation algorithm , and one would expect to obtain better results with more sophisticated estimators .it would also be natural to perform some sort of histogram equalization to prior to applying our algorithm in order to spread the samples out more uniformly and smooth out the effective density .provided such a transformation preserves the partial order it would not affect the non - dominated sorting of . 
in the case that is separable ( a product density ) , one can perform histogram equalization on each coordinate independently to obtain uniformly distributed samples .we leave these and other potential improvements to future work ; our purpose in this paper has been to demonstrate that one can obtain excellent results with a very basic algorithm .we thank ko - jen hsiao for providing code for manipulating the pedestrian trajectory database .we use the following minor extension of the arzel - ascoli theorem in section [ sec : convergence - proof ] .let be a compact metric space .we say that a sequence of real - valued functions on is _ approximately equicontinuous _ if for every there exists such that for every .let . since is approximately equicontinuous there exists such that for all we have forms an open cover of .since is compact , there exists a finite subcover for some integer . without loss of generalitywe may assume that .now let .by we have for some and any .hence we have it follows that is cauchy in , which completes the proof .
non - dominated sorting is a fundamental combinatorial problem in multiobjective optimization , and is equivalent to the longest chain problem in combinatorics and random growth models for crystals in materials science . in a previous work , we showed that non - dominated sorting has a continuum limit that corresponds to solving a hamilton jacobi equation . in this work we present and analyze a fast numerical scheme for this hamilton jacobi equation , and show how it can be used to design a fast algorithm for approximate non - dominated sorting .
since it is rare to find a pair of biological species in nature which meet precise prey - dependence or ratio - dependence functional responses in predator - prey models , especially when predators have to search for food ( and therefore , have to share or compete for food ) , a more suitable general predator - prey theory should be based on the so - called ratio - dependent theory ( see ) .the theory may be stated as follows : the per capita predator growth rate should be a function of the ratio of prey to predator abundance , and so should be the so - called predator functional response .such cases are strongly supported by numerous field and laboratory experiments and observations ( see , for instance , ) .denote by and the population densities of prey and predator at time respectively . then the ratio - dependent type predator - prey model with michaelis - menten type functional responseis given as follows : [ 1.1 ] & = & rn(1-)- , + & = & p , where and are positive constants .in , denotes a mortality function of predator , and and the prey growth rate with intrinsic growth rate and carrying capacity in the absence of predation , respectively , while and are model - dependent constants . from a formal point of view, this model looks very similar to the well - known michaelis - menten - holling predator - prey model : [ 1.2 ] & = & rn(1-)- , + & = & p .indeed , the only difference between models ( [ 1.1 ] ) and ( [ 1.2 ] ) is that the parameter in ( [ 1.2 ] ) is replaced by in ( [ 1.1 ] ) .both terms and are proportional to the so - called searching time of the predator , namely , the time spent by each predator to find one prey .thus , in the michaelis - menten - holling model the searching time is assumed to be independent of predator density , while in the ratio - dependent michaelis - menten type model it is proportional to predator density ( _ i.e. _ , other predators strongly interfere ) .predators and preys are usually abundant in space with different densities at difference positions and they are diffusive .several papers have focused on the effect of diffusion which plays a crucial role in permanence and stability of population ( see , and the references therein ) . especially in the effect of variable dispersion rates on turing instability was extensively studied , and in the dynamics of ratio - dependent system has been analyzed in details with diffusion and delay terms included .cavani and farkas ( see ) have considered a modification of when a diffusion was introduced , yielding : [ 1.3 ] & = & rn(1-)-+d_1,x(0,l),t>0 , + & = & p+d_2,x(0,l),t>0 , where the specific mortality of the predator is given by which depends on the quantity of predator . here ,the positive constants and denote the minimal mortality and the limiting mortality of the predator , respectively . throughout the paper , the following natural condition will be assumed , and we will consider the case of the constant diffusivity , , .the advantage of this model is that the predator mortality is neither a constant nor an unbounded function , but still it is increasing with the predator abundance . 
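since the display equations above are garbled in this extraction, the sketch below assumes the standard ratio-dependent michaelis-menten kinetics with the cavani-farkas mortality (gamma + delta P)/(1 + P) described in the text; every parameter value is illustrative.

```python
# Kinetic (diffusion-free) system, assumed form:
#   N' = r N (1 - N/K) - c N P / (m P + N)
#   P' = P [ f N / (m P + N) - (gamma + delta P) / (1 + P) ]
# with minimal mortality gamma < limiting mortality delta.
import numpy as np
from scipy.integrate import solve_ivp

r, K, c, m, f_ = 1.0, 1.0, 0.5, 0.4, 0.6      # hypothetical constants
gamma, delta = 0.1, 0.3

def kinetic(t, y):
    N, P = y
    response = c * N * P / (m * P + N)        # ratio-dependent functional response
    mortality = (gamma + delta * P) / (1 + P)
    return [r * N * (1 - N / K) - response,
            P * (f_ * N / (m * P + N) - mortality)]

sol = solve_ivp(kinetic, (0.0, 200.0), [0.5, 0.3], rtol=1e-8)
print(sol.y[:, -1])     # approaches the positive equilibrium when it is stable
```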
on the other hand , combining and ,many authors ( see , for instance ) have studied a more general model as follows : & = & rn-+d_1,x(0,l ) , t>0 , + & = & p+d_2,x(0,l ) , t>0 , with the specific mortality of the predator somewhat restricted in the form in this paper we consider a ratio - dependent reaction - diffusion predator - prey model with michaelis - menten type functional response and the specific mortality of the predator given by ( [ 1.4 ] ) instead of .we study the effect of the diffusion on the stability of the stationary solutions .also we explore under which parameter values turing instability can occur giving rise to non - uniform stationary solutions satisfying the following equations : [ full1 ] & = & rn(1-)-+d_1,x(0,l),t>0 , + & = & p+d_2,x(0,l),t>0 , assuming that prey and predator are diffusing according to fick s law in the interval . ] which can not be continued to for any , and + ( ii ) and for .+ moreover , if , then either or as we say that the equilibrium is turing unstable if it is an asymptotically stable equilibrium of the kinetic system ( [ kinetic ] ) but is unstable with respect to solutions of ( 2.1 ) ( see ) .an equilibrium is turing unstable means that there are solutions of ( [ 2.3 ] ) that have initial values arbitrarily closed to ( in the supremum norm ) but do not tend to as tends to .we linearize system ( [ 2.1 ] ) at the point : setting the linearized system assumes the form while the boundary conditions remain unchanged : the linear boundary value problem - can be solved in several ways . in particular , the fourier s method of separation of variablesassumes that solutions can be represented in the form with \rightarrow \mathbb r. ] ; \(iii ) .let be an arbitrary closed subspace of such that \oplus z; ] where denotes the usual vector or matrix - norm , while ,\mathbb r^{2}) ] however , in choosing the subspace of we shall use the orthogonality induced by the inner product ,\quad\text{for } \bv=(v_{1},v_{2})^t , \bw=(w_{1},w_{2})^t.\ ] ] [ thm5 ] suppose that and \(i ) if ( [ 3.3 ] ) holds , then the constant solution of the nonlinear problem ( [ 2.3 ] ) is asymptotically stable .\(ii ) if is not parallel to the second eigenvector of and satisfies ( [ 3.4 ] ) , then at the constant solution undergoes a turing bifurcation .\(i ) follows immediately from the asymptotic stability of the zero solution of the linear problem _ _ ( [ 2.7])-([2.8])_. _ \(ii ) as in the proof of ( i ) of theorem [ thm3 ] , we have that is asymptotically stable for , while it is unstable for .we have to show the existence of a stationary non - constant solution in some neighborhood of the critical value such a stationary solution satisfies the following system of second - order partial differential equations we consider ( [ 3.9 ] ) as an operator equation on the banach space given by ( [ 3.7 ] ) , and we apply theorem [ thm3 ] with as the bifurcation parameter .set . then ( [ 3.9 ] )assumes the equivalent form where is the jacobian matrix of evaluated at and denote the left hand side of ( [ 3.10 ] ) by where is a one - parameter family of operators acting on and taking its elements into ;\mathbb r^{2}). 
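the turing-instability test described above reduces to checking eigenvalues of the linearization at each admissible wavenumber; with neumann conditions on (0, l) these are k_n = n pi / l. the jacobian in the sketch is a placeholder with a sign pattern that permits diffusion-driven instability, not the one computed in the paper.

```python
# Turing test: the equilibrium is Turing unstable if the kinetic
# Jacobian J is stable but J - k_n^2 diag(d1, d2) has an eigenvalue with
# positive real part for some admissible wavenumber k_n = n*pi/l.
import numpy as np

def turing_unstable(J, d1, d2, l, n_max=50):
    if np.linalg.eigvals(J).real.max() >= 0:
        return False              # not even kinetically stable
    for n in range(1, n_max + 1):
        k2 = (n * np.pi / l) ** 2
        if np.linalg.eigvals(J - k2 * np.diag([d1, d2])).real.max() > 0:
            return True
    return False

J = np.array([[1.0, -2.0],        # illustrative Jacobian, stable
              [1.0, -1.2]])       # without diffusion (trace < 0, det > 0)
print(turing_unstable(J, d1=0.005, d2=0.32, l=1.0))   # True: slow prey and
                                                      # fast predator diffusion
```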
] and the mesh size and the time step size which will be determined later in .set denote by and the numerical approximation of , , respectively for and then , given initial data the numerical scheme is to solve [ eq : njpj ] n_j^k+1&=&n_j^k + t n_j^k + t d_1 , + p_j^k+1&=&p_j^k + tp_j^k + t d_2 for iteratively for on the boundaries where neumann condition holds , we used a three - point interpolation scheme to guarantee the second - order accuracy in space as follows : [ eq : bc ] n_2^k - 4 n_1^k + 3 n_0^k = 0 ; & & p_2^k - 4 p_1^k + 3 p_0^k = 0 ; + n_n_h-2^k - 4 n_n_h-1^k + 3 n_n_h^k = 0 ; & & p_n_h-2^k - 4 p_n_h-1^k + 3 p_n_h^k = 0 .we will then establish the the positivity of the numerical solutions and boundedness for the numerical prey solution under certain conditions on .suppose that for then , for + \delta t d_1\frac{n_{j-1}^k - 2n_j^k + n_{j+1}^k}{h^2 } \nonumber \\ & \leq & n_j^k + { \delta t } n_j^k \left[1-n_j^k \right ] + 2\frac{\delta t d_1}{h^2}(1- n_j^k)\nonumber \\ & = & n_j^k + { \delta t } ( 1-n_j^k ) \left[n_j^k + \frac{2d_1}{h^2}\right ] \nonumber \\ & \leq & n_j^k + { \delta t } ( 1-n_j^k ) ( 1+\frac{2d_1}{h^2 } ) \nonumber \\ & = & \left[1-\delta t(1+\frac{2d_1}{h^2})\right]n_j^{k } + \delta t(1+\frac{2d_1}{h^2})\nonumber \\ & \leq & 1-\delta t(1+\frac{2d_1}{h^2 } ) + \delta t(1+\frac{2d_1}{h^2 } ) \leq 1 \label{est : nj}\end{aligned}\ ] ] provided for , owing to the boundary condition , is given by + \delta t d_1\frac { 4 ( - n_{1}^k + n_2^k)}{3h^2}.\end{aligned}\ ] ] hence , the same analysis as above yields , instead of , n_1^k + \delta t ( 1+\frac{4d_1}{3h^2 } ) \le 1\end{aligned}\ ] ] provided analgously , one gets + \delta t d_1\frac { 4 ( - n_{n_h-1}^k + n_{n_h-2}^k)}{3h^2},\end{aligned}\ ] ] and therefore n_{n_h-1}^k + \delta t ( 1+\frac{4d_1}{3h^2 } ) \le 1\end{aligned}\ ] ] provided on the other hand , suppose that for then , for + \delta t d_1\frac{n_{j-1}^k - 2n_j^k + n_{j+1}^k}{h^2}\nonumber \\ & \geq & n_j^k + \delta t n_j^k(1-n_j^k ) - \delta t \alpha n_j^k -2\frac{\delta t d_1}{h^2}n_j^k\nonumber\\ & \ge&(1+\delta t-\delta t\alpha)n_j^k-\delta t n_j^k - 2\frac{\delta t d_1}{h^2}n_j^k\nonumber\\ & = & \left[1-\delta t \alpha-\delta t\frac{2d_1}{h^2}\right]n_j^{k } \geq 0,\end{aligned}\ ] ] provided next for , by using , the procedure to get the estimate leads to ^{k } \geq 0\end{aligned}\ ] ] provided similarly , under the same conditions , one obtains {n_h-1}^{k } \geq 0.\end{aligned}\ ] ] next , suppose that and for recalling , one then obtains , for p_j^k+1&=&p_j^k + tp_j^k + t d_2 + & & p_j^k - tp_j^k - p_j^k + & & ( 1-t - ) p_j^k 0 [ est : pj ] provided . for and , taking into account of the boundary condition , one gets p_j^k+1 ( 1-t - ) p_j^k 0 j = 1j = n_h-1 provided . 
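the update just described, forward euler in time, central second differences in space, and the three-point neumann closure 3u_0 - 4u_1 + u_2 = 0, can be written compactly as below. the boundary values are imposed after the interior update, which is one convenient variant of the paper's substitution of the condition directly into the near-boundary updates, and the time step must respect the smallness conditions derived above.

```python
# One explicit time step for the reaction-diffusion system. kinetic_rhs
# returns the reaction terms (e.g. the kinetic right-hand side sketched
# earlier, without the time argument).
import numpy as np

def step(N, P, dt, h, d1, d2, kinetic_rhs):
    def lap(u):                        # interior 3-point Laplacian
        out = np.zeros_like(u)
        out[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h ** 2
        return out
    fN, fP = kinetic_rhs(N, P)
    N = N + dt * (fN + d1 * lap(N))
    P = P + dt * (fP + d2 * lap(P))
    for u in (N, P):                   # second-order Neumann closure
        u[0]  = (4.0 * u[1]  - u[2])  / 3.0
        u[-1] = (4.0 * u[-2] - u[-3]) / 3.0
    return N, P
```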
collecting all the above results , we are now in a position to state the following theorem : let for suppose that then the numerical solutions and obtained iteratively by and satisfies that numerically a steady state is declared to reach when either the or -norm difference is less than a given tolerance value .the and -norm differences are defined as follows : }{\max } \left|\bu_{steady}(x , k\delta t)-\bu_{h}(x , k\delta t)\right|,\end{aligned}\ ] ] where are given by with terms neglected and is the piecewise linear interpolation of the numerical solution set .the unique positive equilibrium is if we fix for the length of the habitat the interval ( [ 3.4 ] ) becomes in the following figure [ fig : d1d2 ] , stability regions , the mean prey - predator diffusion coefficients , and , are plotted .we tested our model in the cases of = ( 0.005,0.2 ) and = ( 0.005,0.32 ) , which are in the stable and unstable regions with varying , respectively . in these cases ,the critical value for turing bifurcation is .figure [ fig : merge_np ] shows the numerical prey and predator solutions , and , with respect to time at a specified fixed point . as shown in figure [ fig : merge_np ] , for = ( 0.005,0.2 ) , the equilibrium solution is asymptotically stable and for , the equilibrium solution is unstable . for the simulation in the case of we used the spatial mesh size and the time step size determined by the .the iteration was run until the time equals to 1000 , with approximately iterations . in the case of mesh size =0.005 and the time step size = 0.0000375 were used , which were alsothe . in this case also the simulation was done until the time equals to 1000 , with approximately iterations . in figure[ fig : d202 ] , in case of the prey and predator solutions are plotted with respect to number of iterations and space .we clearly see that as time goes to infinity , the solution converges to the equilibrium solution . in the lower figure in figure[ fig : d202 ] , in case of where is in unstable region , the prey and predator solutions are plotted with respect to number of iterations and space .we clearly see that as time goes to infinity , the solution shows the deviation from the equilibrium solution . in figure[ fig : varying - s ] , for the values near , = ( 0.005,0.27 ) and = ( 0.005,0.272 ) are considered . by varying values from 0.05 to 0.4, the prey predator solution has a _ small amplitude pattern _ which we expected in the theory . in figure[ fig : merge - as - preda ] and figure [ fig : merge - as - prey ] , we have plotted the prey and predator solutions and their small amplitude patterns with respect to number of iterations and space by changing values . near the , in case of we use the mesh sizes and ran our simulation until the number of iteration is approximately . in case of we have used with the mesh sizes and . againour runs were continued until the number of iteration was approximately . in figure[ fig : merge - as - preda ] and figure [ fig : merge - as - prey ] , the axis scale in has been used as that of the case of which has a bigger amplitude pattern . 
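in code, the steady-state declaration is just a stopping test on the norm difference between consecutive iterates; the sup-norm is used below, and the tolerance value is a convention, not taken from the text.

```python
# Iterate the explicit step until the sup-norm change between
# consecutive iterates drops below tol (an l2 test works the same way).
import numpy as np

def run_to_steady(N, P, dt, h, d1, d2, rhs, tol=1e-8, max_iter=10 ** 7):
    for it in range(max_iter):
        N2, P2 = step(N, P, dt, h, d1, d2, rhs)   # step() sketched above
        if max(np.abs(N2 - N).max(), np.abs(P2 - P).max()) < tol:
            return N2, P2, it
        N, P = N2, P2
    return N, P, max_iter
```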
comparing the solutions in figure [fig:merge-as-preda] and figure [fig:merge-as-prey] with the non-constant stationary solution ([3.25]), we clearly observe that as time goes to infinity the prey and predator solutions converge to the non-constant stationary solution ([3.25]), which confirms that the equilibrium undergoes a turing bifurcation. system ([2.1]) describes the dynamics of a ratio-dependent predator-prey interaction with diffusion. the prey quantity grows logistically in the absence of predation, the predator mortality is neither a constant nor an unbounded function but is increasing with the predator abundance, and both species are subject to fickian diffusion in a one-dimensional spatial habitat from which and into which there is no migration. it is assumed that the system without diffusion has a positive equilibrium and that under certain conditions it is asymptotically stable. we show analytically that at a certain critical value a diffusion-driven (turing-type) instability occurs, i.e. the stationary solution stays stable with respect to the kinetic system (the system without diffusion). we also show that the stationary solution becomes unstable with respect to the system with diffusion and that a turing bifurcation takes place: a spatially non-homogeneous (non-constant) solution (structure or pattern) arises. a first-order approximation of this pattern is explicitly given. a numerical scheme that preserves the positivity of the numerical solutions and the boundedness of the prey solution is introduced. numerical examples are also included. research partially supported by the bk21 mathematical sciences division, seoul national university, kosef (abrl) r14-2003-019-01002-0, and krf-2007-c00031.

figure captions (plots omitted):
* figure 1: plot of and from equation ([3.5]).
* figure 2: left: the prey solution at = 0.25 with respect to time; the constant line represents and the two solid lines represent two different values. right: the same for the predator solution.
* figure 3, parameter sets (0.005, 0.2, 0.271) and (0.005, 0.32, 0.271): prey and predator solutions with respect to time and space; for the first set the patterns converge to the equilibrium solution as time increases, for the second they deviate from it.
* figure 4, parameter sets (0.005, 0.27, 0.271) and (0.005, 0.272, 0.271): the prey/predator solution patterns near the critical value, with varying values.
* figure 5: the predator solution patterns for the two cases of figure 4.
* figure 6: the prey solution patterns for the two cases of figure 4.
ratio-dependent predator-prey models have been increasingly favored by field ecologists in situations where the process of predation search has to be taken into account in the predator-prey interaction. in this paper we study conditions for the existence, and the stability properties, of the equilibrium solutions in a reaction-diffusion model in which predator mortality is neither a constant nor an unbounded function, but increases with the predator abundance. we show analytically that at a certain critical value a diffusion-driven (turing-type) instability occurs: the stationary solution stays stable with respect to the kinetic system (the system without diffusion) but becomes unstable with respect to the system with diffusion, and a turing bifurcation takes place, i.e. a spatially non-homogeneous (non-constant) solution (structure, or pattern) arises. a numerical scheme that preserves the positivity of the numerical solutions and the boundedness of the prey solution will be presented. numerical examples are also included.

keywords: reaction-diffusion system, population dynamics, bifurcation, pattern formation. MSC: 35K57, 92B25, 93D20.
statistical mechanics is a theory developed at the end of the nineteenth century to deal with physical systems from an atomistic point of view. in principle, the properties of bulk matter, which may contain an enormous number (of order $10^{23}$) of atoms, can be worked out from the motion of the atoms following the basic equations of newtonian mechanics or quantum mechanics. however, such detailed information is usually neither available nor really necessary, so a probabilistic point of view, in the form of a statistical ensemble, is taken. the theory is extremely economical and successful in dealing with _equilibrium_ problems. there are a number of equivalent formulations of the theory, but statistical mechanics in a nutshell is the following concise formula that connects statistical mechanics with thermodynamics. first the partition function is defined as
$$Z=\sum_{\sigma}e^{-E(\sigma)/(k_BT)},$$
where $\sigma$ is a "state" of the system and the summation is carried over all possible states; $T$ is the absolute temperature and $k_B$ is the boltzmann constant ($k_B\approx 1.38\times10^{-23}$ joule/kelvin). the free energy of the system at temperature $T$ is given by
$$F=-k_BT\ln Z.$$
free energy is a useful thermodynamic quantity in dealing with phase transitions. other macroscopic observables are averages of the corresponding microscopic quantities over the boltzmann-gibbs weight; the following three quantities are perhaps the most important ones: internal energy $U=\langle E\rangle$, heat capacity $C=\partial U/\partial T$, and entropy $S=(U-F)/T$, where the average is over the gibbs probability density $P(\sigma)=e^{-E(\sigma)/(k_BT)}/Z$. the essential task in statistical mechanics is to do the multi-dimensional integrations (for continuous systems) or summations (for discrete systems).

to fix the notation and language, we briefly introduce the basics of the monte carlo method. consider the computation of a statistical average $\langle f\rangle=\int f(x)P(x)\,dx$, where the probability density obeys $P(x)\geq 0$ and $\int P(x)\,dx=1$. suppose that we can generate samples $X_1,\dots,X_N$ according to the probability $P$; then the integral can be estimated from an arithmetic mean over the samples,
$$\hat f=\frac{1}{N}\sum_{i=1}^{N}f(X_i).$$
the random variable $\hat f$ has mean $\langle f\rangle$ and standard deviation of order $\sigma\sqrt{\tau/N}$, $\tau$ being the decorrelation time. thus in the limit of a large number of samples, the estimate converges to the exact value. the most general method to generate $X$ according to $P$ is given by a markov chain with a transition probability $W(x\to x')$, satisfying the conditions stated below; this is the probability of generating a new state $x'$ given that the current state is $x$. such a process converges if the equilibrium distribution of the markov chain satisfies
$$\sum_{x}P(x)\,W(x\to x')=P(x').$$
in constructing a monte carlo algorithm, it is convenient to consider a much stronger condition, the detailed balance
$$P(x)\,W(x\to x')=P(x')\,W(x'\to x).$$
one of the most famous and widely used monte carlo algorithms is the metropolis importance sampling algorithm. it takes a simple choice of the transition matrix:
$$W(x\to x')=T(x\to x')\,\min\Big\{1,\frac{P(x')}{P(x)}\Big\},\qquad x'\neq x,$$
where $T(x\to x')$ is a conditional probability of choosing $x'$ given that the current value is $x$, and it is symmetric, $T(x\to x')=T(x'\to x)$. usually $T(x\to x')=0$ unless $x'$ is in some "neighborhood" of $x$. the diagonal term of $W$ is fixed by the normalization condition.

in order to facilitate the discussion we first introduce the ising model. the ising model is an interacting many-particle model for magnets. a state consists of a collection of variables $\sigma_i$ taking on only two possible values, $+1$ and $-1$, signifying the spin up and spin down states. the spins are on a lattice. the energy of the state is given by
$$E(\sigma)=-J\sum_{\langle i,j\rangle}\sigma_i\sigma_j,$$
where $J$ is a constant which fixes the energy scale and the summation is over the nearest-neighbor pairs. when the temperature is specified, the states are distributed according to the gibbs density above. in a local monte carlo dynamics (metropolis algorithm), one picks a site at random, i.e.,
choosing each site with equal probability $1/N$ (this specifies, or realizes, $T(x\to x')$). then the energy increment $\Delta E$ if the spin is flipped, $\sigma_i\to-\sigma_i$, is computed; for the ising energy above this is determined by the spin and its nearest neighbors. the flip is accepted with probability $\min\{1,e^{-\Delta E/(k_BT)}\}$. if the flip is rejected, the move is also counted and the state remains unchanged. one monte carlo step is defined as $N$ moves (trials) for a system of $N$ spins, such that each spin is attempted to flip once on average.

the local algorithm of metropolis type has some salient features: (1) it is extremely general, with few assumptions made about the specific form of the probability distribution; (2) each move involves only $O(1)$ operations and degrees of freedom; (3) the dynamics suffers from critical slowing down: the correlation time diverges as a critical temperature is approached. we shall elaborate more on this in the following. the statistical error, using the estimator eq. ([estimator]), is given by
$$\mathrm{Var}(\hat f)=\frac{\sigma^2\tau}{N_{\mathrm{mcs}}},$$
where $\sigma^2$ is the variance of the observable, $N_{\mathrm{mcs}}$ is the number of monte carlo steps, and $\tau$ is the decorrelation time. we can take the point of view that the above equation defines $\tau$. perhaps it is appropriate to call $\tau$ the decorrelation time, since a related quantity is sometimes called the correlation time in the literature. the decorrelation time is the minimum number of monte carlo steps needed to generate effectively independent, identically distributed samples in the markov chain. the smallest possible value for $\tau$ is 1, which represents an independent sample at every step. the usual integrated autocorrelation time differs from our definition by a factor of 2.

the critical slowing down manifests itself by the fact that $\tau\propto L^z$ at the critical temperature, where a second-order phase transition occurs. here $L$ is the linear dimension of the system (in $d$ dimensions). for the local algorithms, $z\approx 2$ for many models and in any dimension, which suggests bad convergence, especially for large systems. at a first-order phase transition, where some thermodynamic variables change discontinuously, the situation is even worse: $\tau$ diverges exponentially with the system size. for the two-dimensional ising model, a phase transition occurs at $T_c=2J/\big(k_B\ln(1+\sqrt2)\big)\approx 2.269\,J/k_B$. the magnetization is non-zero below this temperature and becomes zero above. in addition, in the limit of a large system ($L\to\infty$), the heat capacity per spin and the fluctuation of the magnetization diverge. these intrinsic properties make computer simulation near the critical point very difficult.

cluster algorithms overcome this difficulty successfully. for example, for the two-dimensional ising model, the dynamical critical exponent $z$ defined by $\tau\propto L^z$ is reduced from 2.17 for the single-spin flip to a much smaller value for the cluster algorithm. it turns out that a precise characterization of the swendsen-wang dynamical critical exponent is very difficult, due to the weak size dependence of the decorrelation time. table 1 presents a recent extensive calculation, based on the definition eq. ([tau-def]) for the total energy, rather than the usual method of extracting information from time-dependent autocorrelation functions. in this calculation, the variance of the sum of consecutive energies (cf. eq. ([estimator])) is computed explicitly. good convergence to the limiting value is typically already achieved for moderate run lengths, and an extrapolation is used to get more accurate estimates of the limit.
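since much of what follows builds on this local dynamics, a minimal C sketch of one metropolis sweep for the two-dimensional ising model may be helpful. the lattice size, the inverse-temperature value, and the use of `drand48` are illustrative assumptions; the conventions (spin array `s`, macro `L`) mirror the cluster code shown later.

....
#include <stdlib.h>
#include <math.h>

#define L 8
#define N (L * L)

int s[N];                      /* spins: +1 or -1 */
double beta = 0.4406868;       /* J/(kB*T); illustrative value near Tc */

/* One Metropolis sweep: N single-spin-flip attempts. */
void metropolis_sweep(void)
{
    for (int n = 0; n < N; ++n) {
        int i = (int)(drand48() * N);          /* pick a site at random */
        int x = i % L, y = i / L;
        int sum = s[y * L + (x + 1) % L]       /* periodic neighbors */
                + s[y * L + (x + L - 1) % L]
                + s[((y + 1) % L) * L + x]
                + s[((y + L - 1) % L) * L + x];
        int dE = 2 * s[i] * sum;               /* energy change in units of J */
        if (dE <= 0 || drand48() < exp(-beta * dE))
            s[i] = -s[i];                      /* accept the flip */
    }
}
....

each sweep performs $N$ single-spin-flip attempts, so that each spin is attempted once on average, matching the definition of one monte carlo step given above.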
from the calculation reported in table 1, we find that for the two-dimensional ising model the convergence to the asymptotics is slow. it appears that the divergence is slightly faster than logarithmic. if we fit $\tau$ to a power law, using two consecutive sizes $L$ and $2L$, the exponent decreases from 0.35 to 0.21. it is not clear whether it will converge to a finite value or continue decreasing to 0 in the limit $L\to\infty$.

the nonlocal cluster algorithm introduced by swendsen and wang for the ising model (and more generally the potts model) goes as follows: (1) go over each nearest-neighbor pair and create a bond with probability $p$; that is, if the two nearest-neighbor spins are the same, a bond is created between them with probability $p$; if the spin values are different, there will be no bond. (2) identify clusters as sets of sites connected by zero or more bonds (i.e., connected components of a graph). relabel each cluster with a fresh new value $+1$ or $-1$ at random. we note that each monte carlo step per spin still takes $O(1)$ in computational cost. the method is applicable to models containing the ising symmetry, i.e., the energy is the same when $\sigma$ is changed to $-\sigma$ globally. the algorithm is based on a mapping of the ising model to a random cluster model of percolation. specifically, we have
$$Z=\sum_{\{\sigma\}}e^{-E(\sigma)/(k_BT)}=\sum_{\{n\}}p^{b}\,(1-p)^{Nd-b}\,2^{N_{\mathrm{c}}},$$
where $b$ is the number of bonds, $Nd$ is the number of nearest-neighbor pairs of a simple hypercubic lattice in $d$ dimensions, and $N_{\mathrm{c}}$ is the number of clusters; the intermediate steps of this identity involve a kronecker delta coupling the spins to the bond variables. here $\sigma_i$ is the spin on site $i$ and $n_{ij}\in\{0,1\}$ is the bond variable between the sites $i$ and $j$. it is evident that the moves in the swendsen-wang algorithm preserve the configuration probability of the augmented model containing both the spins and the bonds.

a single-cluster variant due to wolff is very easy to program. the following C code generates and flips one cluster:

....
#include <stdlib.h>             /* for drand48() */

#define Z 4                     /* coordination number of the square lattice */
#define L 8                     /* linear lattice size */

double p = 0.7;                 /* bond probability 1 - exp(-2J/(kB*T)) */
int s[L * L];                   /* spin configuration, entries +1 or -1 */

void neighbor(int i, int nn[Z]);  /* fills nn[] with the Z neighbors of site i */

/* Grow and flip one cluster recursively from site i with old spin s0. */
void flip(int i, int s0)
{
    int j, nn[Z];
    s[i] = -s0;
    neighbor(i, nn);
    for (j = 0; j < Z; ++j)
        if (s0 == s[nn[j]] && drand48() < p)
            flip(nn[j], s0);
}
....

in this single-cluster version, a site is selected at random; the value of the spin before the flip is $s_0$. the routine flips the spin and looks at its neighbors: if the value of a neighbor spin is the same as $s_0$, with probability $p$ the neighbor site becomes part of the cluster. this is performed recursively. it turns out that the single cluster is somewhat more efficient than the original swendsen-wang, particularly in high dimensions. for dimensions greater than or equal to 4, swendsen-wang dynamics gives the dynamic critical exponent $z=1$, while for the wolff single cluster $z=0$. it is easy to see why the single-cluster algorithm above works. let us consider a general cluster flip algorithm with two bond probabilities: $p_1$ will be the probability of connecting two parallel-spin sites, and $p_2$ the probability of connecting anti-parallel sites.
consider the transition between two configurations $\sigma$ and $\sigma'$, related by flipping a cluster $C$. a cluster is grown from a site until the perimeter of the cluster is no longer connected to the outside. the transition probabilities can be written down explicitly; note that the bond configuration probabilities are the same in the interior of the cluster. the difference occurs at the boundary, where parallel spins in $\sigma$ become anti-parallel spins in $\sigma'$, or vice versa. detailed balance requires that
$$\frac{(1-p_1)^{k}\,(1-p_2)^{k'}}{(1-p_1)^{k'}\,(1-p_2)^{k}}=\frac{P(\sigma')}{P(\sigma)},$$
where $k$ is the number of parallel spin pairs on the boundary of $C$ in configuration $\sigma$, and $k'$ is the number of anti-parallel spin pairs on the boundary of $C$ in configuration $\sigma$. since we have $P(\sigma')/P(\sigma)=e^{-2K(k-k')}$ with $K=J/(k_BT)$, we obtain
$$\frac{1-p_1}{1-p_2}=e^{-2K}.$$
although the algorithm is valid for any $p_2$ satisfying this relation, it is most efficient at $p_2=0$, $p_1=1-e^{-2K}$, the coniglio-klein bond probability value. a quite large number of statistical models can be treated with cluster algorithms, with varied success. excellent performance has been obtained for the ising model, potts models and antiferromagnetic potts models, the xy model and general $O(n)$ models, field-theoretic models, some regularly frustrated models, the six-vertex model, etc. cluster algorithms have been proposed for hard-sphere fluid systems, quantum systems, microcanonical ensembles, conserved order parameters, etc. the invaded cluster algorithm and other proposals give excellent methods for locating critical points. the cluster algorithms are also used in image processing. the cluster algorithms do not help much at temperature-driven first-order phase transitions; the slow convergence there has been shown rigorously. models with frustration, spin glass being the archetype, do not have efficient cluster algorithms, although there are attempts with limited success. a breakthrough in this area would have a major impact on the simulation methods.

in this and the following sections, we discuss a class of monte carlo simulation approaches that aim at an efficient use of the data collected, and sampling methods that enhance rare events. the computation of the free energy, eq. ([free-energy]), poses a difficult problem for the monte carlo method. a traditional method is to use thermodynamic integration, e.g., based on the relation $\partial(F/T)/\partial(1/T)=U$, where $U$ is the internal energy. if we can estimate the density of states $n(E)$ (the number of states with a given energy, for discrete-energy models), then we can compute the free energy as well as thermodynamic averages. the result is obtained as a function of temperature, rather than as a single datum point for a specific value of $T$, as in a standard monte carlo simulation. this idea has been pursued over the last decade by ferrenberg and swendsen, berg et al., lee, oliveira et al., and wang. consider the following decomposition of the summation over the states:
$$Z=\sum_{E}n(E)\,e^{-E/(k_BT)},\qquad \langle Q\rangle=\frac{1}{Z}\sum_{E}\langle\!\langle Q\rangle\!\rangle_{E}\,n(E)\,e^{-E/(k_BT)},$$
where $\langle\!\langle Q\rangle\!\rangle_{E}$ is the microcanonical ensemble average. since the state space is exponentially large ($2^N$ for the ising model with $N$ spins) while the range of $E$ is typically of order $N$, if $n(E)$ can be computed accurately, the task is done. the canonical average of a quantity is related to the microcanonical average through the second formula above, and the free energy is computed as $F=-k_BT\ln\sum_E n(E)e^{-E/(k_BT)}$. ferrenberg and swendsen popularized a method which in a sense is to compute the density of states (up to a multiplicative constant) in a range close to a given simulation temperature. this method is generalized as the multiple histogram method to combine simulations at different temperatures, so as to cover the whole energy range. we discuss here only the single histogram method, for its simplicity.
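before turning to the histogram methods, the following C sketch illustrates why a tabulated density of states settles the computation: given $\ln n(E)$ on an energy grid, the free energy and internal energy follow at any temperature from the formulas just displayed. the array names and the energy grid are illustrative assumptions; the shift implements a log-sum-exp guard against overflow (units with $k_B=1$).

....
#include <math.h>

/* Given lnn[i] = ln n(E_i) on energies E[i], i = 0..m-1, compute the
   free energy F(T) and internal energy U(T) at temperature T (kB = 1). */
void thermo_from_dos(const double *E, const double *lnn, int m,
                     double T, double *F, double *U)
{
    double beta = 1.0 / T, shift = -1e300;
    /* shift = max_i (lnn[i] - beta*E[i]), to avoid overflow in exp() */
    for (int i = 0; i < m; ++i) {
        double a = lnn[i] - beta * E[i];
        if (a > shift) shift = a;
    }
    double Zs = 0.0, EZ = 0.0;
    for (int i = 0; i < m; ++i) {
        double w = exp(lnn[i] - beta * E[i] - shift);
        Zs += w;
        EZ += E[i] * w;
    }
    *F = -T * (shift + log(Zs));   /* F = -kB T ln Z */
    *U = EZ / Zs;                  /* U = <E>       */
}
....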
during a normal canonical simulation at a fixed temperature $T$, we collect the histogram of energy, $H(E)$, which is proportional to the probability distribution of energy, $H(E)\propto n(E)\,e^{-E/(k_BT)}$. the proportionality constant is related to the partition function and to the total number of samples collected. from the above equation, we find $n(E)\propto H(E)\,e^{E/(k_BT)}$. with this information, we can compute the free energy difference between the temperature $T$ and a nearby temperature. similarly, moments of the energy can be computed after the simulation, through histogram reweighting,
$$\langle E^k\rangle_{T'}=\frac{\sum_E E^k\,H(E)\,e^{-(\beta'-\beta)E}}{\sum_E H(E)\,e^{-(\beta'-\beta)E}},\qquad \beta=\frac{1}{k_BT}.$$
the range of $E$ over which the histogram data can be collected at a fixed temperature is limited by the width of the energy distribution, which for the canonical distribution away from the critical point is of order $\sqrt N$; the whole range of energy is of order $N$. this limits the usefulness of the single histogram method.

the multicanonical monte carlo method has been shown to be very effective in overcoming supercritical slowing down at first-order phase transitions, reducing the relaxation time from an exponential divergence with respect to system size to a power. the multicanonical ensemble flattens out the energy distribution, so that the computation of the density of states can be done for all values of $E$. a multicanonical ensemble is defined by giving the states the probability density $P(\sigma)\propto 1/n\big(E(\sigma)\big)$, such that the energy histogram $H(E)$ is a constant. from the histogram samples obtained by a simulation with the weight $w(E)$ for a state at energy $E$, the density of states can be computed from $n(E)\propto H(E)/w(E)$. however, unlike a canonical simulation where the weight is given, in a multicanonical simulation the weight is unknown to start with. berg proposed an iterative method to compute the weight in a parametrized form, starting with no information, $w^{(0)}\equiv 1$; a new estimate at each iteration is then based on the results of all previous iterations. we refer to the references for details.

the flat histogram algorithm offers an efficient bootstrap to realize the multicanonical ensemble, while transition matrix monte carlo utilizes more of the data that can be collected in a simulation to improve the statistics. we start from the detailed balance equation for some given dynamics:
$$P(\sigma)\,W(\sigma\to\sigma')=P(\sigma')\,W(\sigma'\to\sigma).$$
by summation over the states $\sigma$ of fixed energy $E$, and $\sigma'$ of fixed energy $E'$, and assuming that the probability of a state is a function of energy only, $P(\sigma)=f\big(E(\sigma)\big)$, we get
$$f(E)\,n(E)\,T(E\to E')=f(E')\,n(E')\,T(E'\to E),$$
where the transition matrix in the space of energy is defined as the microcanonical average
$$T(E\to E')=\Big\langle\!\!\Big\langle\sum_{\sigma':\,E(\sigma')=E'}W(\sigma\to\sigma')\Big\rangle\!\!\Big\rangle_{E}.$$
the matrix $T$ has a number of interesting properties: it is a stochastic matrix in the sense of $T(E\to E')\geq 0$ and $\sum_{E'}T(E\to E')=1$; the stationary solution of $T$ is the energy distribution; and the dynamics associated with $T$ is considerably faster than that of $W$. we specialize to the case of single-spin-flip dynamics for the ising model. the transition matrix for the spin states consists of a product of two factors, the probability of choosing a spin to flip, and the flip rate $a(E\to E')$. we have $W(\sigma\to\sigma')=0$ unless the two configurations $\sigma$ and $\sigma'$ differ by one spin; in this case the probability of choosing that spin is $1/N$, where $N$ is the number of spins in the system. using these results, we can rewrite the transition matrix (for $E'\neq E$) as
$$T(E\to E')=\frac{1}{N}\,\big\langle\!\big\langle N(\sigma,E'-E)\big\rangle\!\big\rangle_{E}\;a(E\to E'),$$
with the diagonal elements determined by normalization, where $N(\sigma,\Delta E)$ counts single-spin flips (defined precisely below). substituting eq. ([t-ssf]) into eq. ([t-balance]), and using the relation between $f$ and $n$, we obtain
$$n(E)\,\big\langle\!\big\langle N(\sigma,E'-E)\big\rangle\!\big\rangle_{E}=n(E')\,\big\langle\!\big\langle N(\sigma',E-E')\big\rangle\!\big\rangle_{E'}.$$
this is known as the broad histogram equation, which forms the basis for the flat histogram algorithm presented below.
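the broad histogram equation can be chained across adjacent energy levels to recover the density of states from the measured microcanonical averages. the following C sketch assumes, for simplicity, a uniform energy grid with a single step size per move; the array names are illustrative, and the flat histogram algorithm itself is spelled out next.

....
#include <math.h>

/* Nup[i]   ~ <<N(sigma, +dE)>> at energy E_i,
   Ndown[i] ~ <<N(sigma, -dE)>> at energy E_i, for i = 0..m-1.
   Chaining n(E_i) * Nup[i] = n(E_{i+1}) * Ndown[i+1] gives
   ln n(E) up to an additive constant. */
void dos_from_broad_histogram(const double *Nup, const double *Ndown,
                              int m, double *lnn)
{
    lnn[0] = 0.0;
    for (int i = 1; i < m; ++i)
        lnn[i] = lnn[i - 1] + log(Nup[i - 1]) - log(Ndown[i]);
}
....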
additionally, this equation also gives us a way of computing the density of states from the quantities $\langle\!\langle N(\sigma,\Delta E)\rangle\!\rangle_{E}$, obtained from spin configurations generated from any distribution. the quantity $N(\sigma,\Delta E)$ is the number of ways that the system goes to a state with energy $E(\sigma)+\Delta E$ by a single spin flip from the state $\sigma$. the angular brackets indicate a microcanonical average. the following algorithm generates a flat histogram in energy and realizes the multicanonical ensemble:

1. pick a site at random.
2. flip the spin with probability
$$r(E\to E')=\min\bigg\{1,\ \frac{\langle\!\langle N(\sigma',E-E')\rangle\!\rangle_{E'}}{\langle\!\langle N(\sigma,E'-E)\rangle\!\rangle_{E}}\bigg\},$$
where the current state has energy $E$ and the new state has energy $E'$.
3. accumulate statistics for $N(\sigma,\Delta E)$.
4. go to 1.

we note that, by virtue of eq. ([broad-histo-eq]), the flip rate is the same as that in a multicanonical simulation with weight $1/n(E)$ and metropolis acceptance rate $\min\{1,n(E)/n(E')\}$. while in multicanonical sampling the weight is obtained through several simulations iteratively, the quantities $\langle\!\langle N(\sigma,\Delta E)\rangle\!\rangle_{E}$ are much easier to obtain, through a single simulation. this quantity serves a dual purpose: it is used to construct a monte carlo algorithm (used as input), and at the same time, it is used to compute the density of states (output of the simulation). clearly, this is circular unless an approximation is made. we have considered replacing the exact microcanonical average by an accumulative average over the history of the simulation generated so far, i.e., over the sequence of states generated with the algorithm given above, normalized by the number of samples accumulated in the energy bin. in case the data for computing the flip rate are not yet available, we simply accept the move to explore the new state. a more rigorous way of doing the simulation is to iterate the simulation with a constant flip rate: for example, after the first simulation, we compute a first estimate of the density of states; in a second simulation, we perform a multicanonical simulation à la lee. the data collected in the second run will be unbiased. it is found that even with a single simulation, the results converge to the exact values for sufficiently long runs, even though a rigorous mathematical proof of the convergence is lacking.

wang and landau recently proposed a new algorithm that works directly with the density of states. the simulation proceeds with the flip rate $\min\{1,n(E)/n(E')\}$, but the value of the density of states is updated after every move by $n(E)\leftarrow f\,n(E)$, letting $f\to 1$ for convergence. excellent results were obtained. a careful comparison with the flat histogram method is needed.

in the metropolis algorithm, moves are sometimes rejected. this rejection is important for realizing the correct stationary distribution. in 1975, bortz, kalos, and lebowitz proposed a rejection-free algorithm. it is still based on the metropolis flip rate, but the waiting due to rejection is taken into account by considering all possible moves. the bortz-kalos-lebowitz n-fold way algorithm for the ising model goes as follows (a sketch of step 2 is given after this list):

1. compute the total acceptance probability $A$ for one attempt of a move.
2. pick an energy change $\Delta E$ according to its probability within $A$.
3. flip a site belonging to the class $\Delta E$ with probability 1; the site is chosen from the sites of that class with equal probability.

one n-fold-way move is equivalent to $1/A$ moves in the original dynamics on average. thus thermodynamic averages are weighted by $1/A$, i.e., $\langle Q\rangle=\sum Q/A\,\big/\sum 1/A$, where the summation is over every move.
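a minimal C sketch of step 2 follows, choosing the move class by a linear scan of cumulative class probabilities. the class count (10 for the two-dimensional ising model: two spin values times five neighbor sums) and the array layout are assumptions of the sketch; efficient data structures are discussed next.

....
#include <stdlib.h>

#define NCLASS 10  /* 2D Ising single-spin flip: 2 spin values x 5 neighbor sums */

/* prob[c]: probability that one attempted move falls in class c and is
   accepted; A = sum over c of prob[c] is the total acceptance probability. */
int pick_class(const double *prob, double A)
{
    double r = drand48() * A, cum = 0.0;
    for (int c = 0; c < NCLASS; ++c) {
        cum += prob[c];
        if (r < cum) return c;
    }
    return NCLASS - 1;   /* guard against floating-point round-off */
}
....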
in order to implement step 2 efficiently, an additional data structure is needed so that picking a spin in a given class, characterized by $\Delta E$, is done in $O(1)$ computer time. combining the n-fold way and the flat histogram algorithm is easy, since the important quantity $\langle\!\langle N(\sigma,\Delta E)\rangle\!\rangle_{E}$ is already computed in the flat histogram algorithm; the flip rate is given by formula ([flat-histo-rate]). in the flat histogram algorithm, the probability that the energy of the system is $E$ is a constant; the averages in the second line of the above equation refer to samples generated in an n-fold-way simulation. in the equal-hit algorithm (ensemble), we require that the number of "fresh configurations" generated at each energy is a constant. more precisely, the equal-hit ensemble is _defined_ by this requirement. one possible choice of the flip rate involves the inverse total acceptance rate, arithmetically averaged over the n-fold-way samples at energy $E$. the histogram generated in the equal-hit algorithm depends on the precise dynamics (the rate) used; since there are many possible choices of the rate, such an "equal-hit ensemble" is not unique.

while eq. ([broad-histo-eq]) gives us a way of obtaining the density of states, there are more equations than unknowns. we consider two optimization methods. the first method is based on the transition matrix itself. symbols with a hat denote the unknowns, to be fitted to the monte carlo estimates; we minimize the deviation between the two, subject to normalization, detailed balance, and one further constraint. the last constraint needs more explanation. we assume that the energy levels are equally spaced (as in the ising model). consider three energy levels $E_1<E_2<E_3$. if we write down three equations of the type ([broad-histo-eq]), for the transitions from $E_1$ to $E_2$, from $E_2$ to $E_3$, and from $E_1$ to $E_3$, we can cancel the density of states by multiplying the three equations together. this leaves the last equation above, and it is known as the TTT identity. it can be shown that multiple-T identities (four or more) are not independent, so they need not be put in as constraints. for the ising model there is also one additional symmetry constraint. when the solution for the transition matrix is found, we can use any of the energy detailed balance equations to find the density of states; the TTT identity guarantees that the answer is unique whichever detailed balance equation is used. the second method is based on optimization directly with the variable $S(E)=\ln n(E)$, subject to the normalization $\sum_E n(E)=2^N$ for the ising model, where $N$ is the total number of spins in the system. in addition, we can put in the known fact that the ground states are doubly degenerate.

[figure 1: the density of states, calculated by the transition matrix monte carlo method for a two-dimensional ising model, with a long total run and the first part discarded, using the flat-histogram algorithm and the n-fold way; the inset shows the error with respect to exact results.]

figure 1 shows one of the simulation results for the density of states, using the second method. the errors in comparison with the exactly known values are presented in the inset of the figure. the density of states is determined to an accuracy of better than 2 percent in a matter of a few minutes of computer time. the flat-histogram dynamics has been used to study spin glasses; the dynamic characteristics are quite similar to those of the multicanonical method of berg. the study of lattice polymers and protein folding is under way. for related ideas and approaches, see the references.
the phenomenon of critical slowing down can be effectively dealt with by cluster algorithms for a large class of statistical mechanics models. we reported new and very accurate results for the decorrelation times of the swendsen-wang dynamics, and analyzed the large-size asymptotic behavior. for the supercritical slowing down occurring at first-order phase transitions, multicanonical ensemble simulation and the flat-histogram or equal-hit algorithms are very effective. since the latter algorithms and the associated transition matrix method are efficient in computing the density of states, this method can also be useful for general counting problems by the monte carlo method.

the author thanks prof. R. H. swendsen for much of the work discussed here. he also thanks Z. F. zhan, T. K. tay, and L. W. lee for collaborations. this work is supported by NUS research grant R151-000-009-112 and the singapore-MIT alliance.

sokal, A. D.: in "computer simulation studies in condensed matter physics: recent developments", eds. D. P. landau, K. K. mon, and H.-B. schüttler, springer proceedings in physics, vol. 33 (1988), p. 6.
the basic problem in equilibrium statistical mechanics is to compute phase-space averages, in which the monte carlo method plays a very important role. we begin with a review of nonlocal algorithms for markov chain monte carlo simulation in statistical physics. we discuss their advantages, applications, and some challenging problems which still await better solutions. we then discuss some of the recent developments in simulations where reweighting is used, such as the histogram methods and the multicanonical method, followed by the transition matrix monte carlo method and its associated algorithms. the transition matrix method offers an efficient way to compute the density of states, so that entropy and free energy, as well as the usual thermodynamic averages, are obtained as functions of a model parameter (e.g. temperature) in a single run. new sampling algorithms, such as the flat histogram algorithm and the equal-hit algorithm, offer sampling techniques which generate a uniform probability distribution for some chosen macroscopic variable.
given two independent multivariate iid samples with corresponding lebesgue densities $p$ and $q$ respectively, we are interested in identifying simultaneously subregions of the densities' support where $p$ deviates significantly from $q$, at a prespecified but arbitrarily chosen level. for this aim a multiple test of the composite hypothesis "$p=q$" versus "$p\neq q$" is proposed, built from a suitable combination of randomized nearest-neighbor statistics. the procedure does not require any preliminary information about the multivariate densities, such as compact support, strict positivity or smoothness and shape properties, and it is valid for arbitrary finite sample sizes $m$ and $n$. the hierarchical structure of p-values for subsets of deviation between $p$ and $q$ provides insight into the local power of nearest-neighbor classifiers based on the training set. thus our method is of interest in particular if the classification error depends strongly on the value of the feature vector, related to recent literature on classification procedures by belomestny and spokoiny (2007).

there is an extensive literature on two-sample problems. most of it is devoted to the one-dimensional case, as there exists the simple but powerful "quantile transformation", allowing for distribution-freeness of several test statistics under the null hypothesis. starting from the classical univariate mean-shift problem (see e.g. hájek and šidák 1967), more flexible alternatives such as stochastically larger or omnibus alternatives have been investigated, for instance by behnen, neuhaus and ruymgaart (1983), neuhaus (1982, 1987), fan (1996), janic-wróblewska and ledwina (2000), and ducharme and ledwina (2003). our approach is different in that it aims at spatially adaptive and simultaneous identification of local rather than global deviations. in the above-cited literature, asymptotic power is discussed against single directional alternatives tending to zero at a prespecified rate, typically formulated by means of the densities corresponding to the transformed observations, where the transformation is the mixed distribution function. note that this mapping coincides with the inverse quantile transformation under the null. for the power investigation of our procedure, a specific two-sample minimax set-up is introduced. it is based on a reparametrization of $(p,q)$ to a couple $(h,\phi)$, reducing the composite hypothesis "$p=q$" to the simple one "$\phi=0$", with the multivariate mixed density $h$ as infinite-dimensional nuisance parameter. the reparametrization conceptually differs from the above-described transformation for the univariate situation, as it cannot rely on the inverse mixed distribution function. nevertheless, under moderate additional assumptions, it leads in that case to the same notion of efficiency.
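for orientation, the mixed density of the pooled sample, and one concrete form such a reparametrization can take, are sketched in the following LaTeX fragment. the first display follows from the definition of the mixture; the exact normalization of $\phi$ used in the paper is not recoverable here, so the second display is an illustrative assumption only.

....
\[
  h \;=\; \frac{m\,p + n\,q}{m+n}, \qquad
  p \;=\; h\Big(1 + \tfrac{n}{m+n}\,\phi\Big), \qquad
  q \;=\; h\Big(1 - \tfrac{m}{m+n}\,\phi\Big).
\]
% Under this (assumed) parametrization the mixture h is unchanged,
% p - q = h*phi, and the composite hypothesis "p = q" becomes "phi = 0".
....

with such a choice, $p-q=h\phi$, consistent with the remark below that $\phi$ approximately coincides with the difference of the densities when $h$ is bounded away from zero.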
in order to explore the power of our method, the alternative is assumed to be of the form that $\|\phi\|$ exceeds a constant multiple of a critical radius, for fixed but unknown smoothness, some suitably chosen (semi-)norm $\|\cdot\|$, and a given smoothness class. the quality of a statistical level-$\alpha$ test is then quantified by its minimal power, where the infimum runs over all couples $(h,\phi)$ which are contained in the set ([menge]). it is a general problem that an optimal solution may depend on the smoothness class and on the norm. since the smoothness and shape of a potential difference are rarely known in practice, it is of interest to come up with a procedure which does not depend on these properties but is (almost) as good as if they were known, leading to the notion of minimax adaptive testing as introduced in spokoiny (1996). note, however, that here we have in addition the infinite-dimensional nuisance parameter $h$. the problem of data-driven testing of a simple hypothesis is further investigated, for instance, by ingster (1987), eubank and hart (1992), ledwina (1994), ledwina and kallenberg (1995), fan (1996) and dümbgen and spokoiny (2001), among others, and in the two-sample context by butucea and tribouley (2006). the idea in common is to combine a family of test statistics corresponding to different values of the smoothing parameters; see, for instance, rufibach and walther (2008) for a general criterion of multiscale inference. the closest in spirit to ours is the procedure developed in dümbgen and spokoiny (2001) within the continuous-time gaussian white noise model, further explored by dümbgen (2002), dümbgen and walther (2008) and rohde (2008), all concerned with univariate problems. walther (2010) treats the problem of spatial cluster analysis in two dimensions.

the paper is organized as follows. in the subsequent section, a multiple randomization test is introduced, built from a combination of suitably standardized nearest-neighbor statistics. its calibration relies on a new coupling exponential bound and an appropriate extension of the multiscale empirical process theory. asymptotic power investigations and adaptivity properties are studied in section 3, where the reparametrized minimax set-up is introduced. it is shown that our procedure is sharply asymptotically adaptive with respect to the supremum norm on isotropic hölder classes, i.e. minimax efficient over a broad range of hölder smoothness classes simultaneously. the application to local classification is discussed in section 4. the one-dimensional situation is considered separately in section 5, where an alternative approach based on local pooled order statistics is proposed.
in that case the statistic does not depend on the observations explicitly but only on their order, which, in contrast to nearest-neighbor relations, is invariant under the quantile transformation. section 6.1 is concerned with a decoupling inequality and the coupling exponential bounds which are essential for our construction; both results are of independent theoretical interest. all proofs and auxiliary results about empirical processes are deferred to section 6.2 and section 6.3.

the procedure below is mainly designed for dimension $d\geq 2$; the univariate case contains a few special features and is considered separately in section [sec:d=1]. let $N=m+n$ and denote by $X_1,\dots,X_N$ the pooled set of observations. for each observation, its nearest neighbor within the pooled sample with respect to the _euclidean distance_ is well defined; note that the nearest neighbors are unique a.s. the weighted labels are defined so as to mark which sample each observation came from. in order to judge a possible deviation of $p$ from $q$ on a given set, a natural statistic to look at is a standardized version of the sum of the nearest-neighbor labels over that set or, more sophisticated, a kernel-weighted version for some kernel supported by the set, with the empirical measures of the first and second sample as reference. note that the statistic is not distribution-free, and in order to build up a multiple testing procedure, several statistics corresponding to different sets have to be combined in a certain way.

let $\psi$ denote any kernel of bounded total variation with the normalization and support conditions required below. we introduce the local test statistics, indexed by a center $t$ and a radius $r$ and standardized by the corresponding empirical second moment. every such statistic is, in a certain sense, a standardized weighted average of the nearest neighbors' labels, and its absolute value should tend to be large whenever $p$ is clearly larger or smaller than $q$ within the random euclidean ball with center $t$ and radius $r$. the idea is to build up a multiple test combining all possible local statistics. the typical way is to consider the distribution of the supremum; see, e.g., gijbels and heckmann (2004). the problem is that this distribution is driven by the small scales, with a corresponding loss of power at larger scales, as there are many more small scales which contribute to the supremum. here, we aim at a supremum-type test statistic in which additive constants are appropriately chosen correction terms (independent of the label vector) for the adjustment of multiple testing within every "scale" of nearest-neighbor statistics. these correction terms in the calibration aim to treat all the scales roughly equally.

although the distribution of the statistic under the null hypothesis depends on the unknown underlying distribution, the conditional distribution of the above statistic is invariant under permutation of the components of the label vector. here and subsequently, the index "0" indicates the null hypothesis, i.e. any couple $(p,q)$ with $p=q$. precisely, let the random variable $\pi$ be uniformly distributed on the symmetric group of order $N$, independent of the observations; then the conditional law of the statistic under permuted labels coincides with its null law. an elementary calculation entails that the null hypothesis is satisfied if, and only if, the hypothesis of permutation invariance (or complete randomness) conditional on the pooled sample is satisfied. an adequate calibration of the randomized nearest-neighbor statistics, i.e. the choice of smallest possible correction constants, requires both an exact understanding of their tail behavior and of their dependency structure. note that the randomized nearest-neighbor statistics have a geometrically involved dependency structure.
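to convey the flavor of the randomization step, the following C sketch computes a crude global nearest-neighbor label statistic on the pooled sample and recalibrates it by permuting the labels. it is not the calibrated multiscale procedure of this section: the brute-force $O(N^2)$ neighbor search, the unweighted $\pm 1$ labels, and all names are illustrative assumptions.

....
#include <stdlib.h>
#include <math.h>

/* Index of the nearest neighbor of point i among n points in d dims. */
static int nearest(const double *x, int n, int d, int i)
{
    int best = -1;
    double bestd = INFINITY;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        double s = 0.0;
        for (int k = 0; k < d; ++k) {
            double diff = x[i * d + k] - x[j * d + k];
            s += diff * diff;
        }
        if (s < bestd) { bestd = s; best = j; }
    }
    return best;
}

/* Sum of the labels (+1 first sample, -1 second) of nearest neighbors:
   a crude global version of the local statistics combined above. */
static double nn_label_statistic(const double *x, const int *label, int n, int d)
{
    double t = 0.0;
    for (int i = 0; i < n; ++i)
        t += label[nearest(x, n, d, i)];
    return t;
}

/* Monte Carlo permutation p-value; shuffles label[] in place. */
double permutation_pvalue(const double *x, int *label, int n, int d, int nperm)
{
    double t0 = fabs(nn_label_statistic(x, label, n, d));
    int exceed = 0;
    for (int b = 0; b < nperm; ++b) {
        for (int i = n - 1; i > 0; --i) {      /* Fisher-Yates shuffle */
            int j = (int)(drand48() * (i + 1));
            int tmp = label[i]; label[i] = label[j]; label[j] = tmp;
        }
        if (fabs(nn_label_statistic(x, label, n, d)) >= t0) ++exceed;
    }
    return (exceed + 1.0) / (nperm + 1.0);
}
....

since a uniform shuffle of an already shuffled vector is again uniform, the empirical exceedance frequency is a valid monte carlo p-value for the permutation null described above.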
even in the case of the rectangular kernel, the statistic depends explicitly on the "random design", which complicates the sharp-optimal calibration for multiple testing compared to univariate problems, where the dependency of the single test statistics remains typically invariant under monotone transformations of the design points. also, the optimal correction originally designed for gaussian tails in dümbgen and spokoiny (2001) does not carry over, as only the subsequent bernstein-type exponential tail bound is available.

*a coupling exponential inequality.* based on an explicit coupling, the following proposition extends and tightens the exponential bounds derived in serfling (1974) for a combinatorial process in the present framework. if not stated otherwise, the random variable $\pi$ is uniformly distributed on the symmetric group, independent of the observations. the proposition bounds the tail of the randomized statistic by a bernstein-type exponential expression.

remark: the additional expression in the bound is the payment for decoupling, which appears when replacing the tail probability of a hypergeometric ensemble by that of its binomial analogue; for details we refer to section [sec:decoupling]. compared to results obtained for weighted averages of standardized, independent bernoullis, the above bernstein-type bound appears to be nearly optimal, i.e. subgaussian tail behavior (with leading constant) is actually not present.

via inversion of the above exponential inequality, additive correction terms for the adjustment of multiple testing are constructed. the next theorem motivates our approach. the construction is designed for typical arrangements of the observation values, which appear with probability close to one. to avoid technical expenditure, we restrict our attention to compactly supported densities. the dual bounded lipschitz metric (see, e.g., van der vaart and wellner 1996), which generates the topology of weak convergence, is used, and convergence in probability refers to the sequence of distributions under consideration.

[thm:limit distribution] define the test statistic as the supremum of the corrected local statistics, with the standardization $\big(\int_{[0,1]^d}\phi_{rt,n}(x)^2\,dx\big)^{1/2}$. assume that the sequence of mixed densities on $[0,1]^d$ is suitably regular. then the test statistic and its randomized analogue admit a limit distribution, and the distance between the conditional law of the randomized statistic and the law of the original one vanishes in probability.

fix a continuous density $h$ and define the class of alternatives to be the set of pairs of densities $(p,q)$ supported on $[0,1]^d$ such that $h(m,n,p,q)=h$.

*reparametrizing the composite hypothesis.* with the notation above, "$p=q$" is consequently equivalent to "$\phi=0$", and if $h$ is kept fixed, the composite hypothesis "$p=q$" reduces to the simple hypothesis "$\phi=0$".
in order to develop a meaningful notion of minimax efficiency for the two-sample problem, we subsequently treat the mixed density $h$ as a fixed but unknown infinite-dimensional nuisance parameter for testing the hypothesis "$\phi=0$". note that in case $h$ is uniformly bounded away from zero and $m/n$ is close to one, $\phi$ coincides approximately with the difference of the densities; see also the explanation subsequent to theorem [thm:lower bound].

remark: it is worth noticing that the optimal statistic for testing against any fixed alternative equals the likelihood ratio statistic, whose distribution still depends on $h$ under the null. here and subsequently, the subscript indicates the distribution with the corresponding density. the rationale behind the reparametrization is to eliminate the dependency on the nuisance parameter in the expectation, under the null, of the first- and second-order terms of the log-likelihood expansion, resulting in asymptotic independence from $h$ of its distribution under the hypothesis for any local sequence.

the subsequent theorem gives the lower bound for hypothesis testing within the above-defined classes of densities.

[thm:lower bound] let the kernel be given by the solution to the optimal recovery problem ([eq:recovery]) below, and assume that the sequence of mixed densities on $[0,1]^d$ is suitably regular. then the asymptotic power of arbitrary tests at significance level $\alpha$ does not exceed $\alpha$ for alternatives below the critical radius. note that the test may depend on the smoothness class and even on the nuisance parameter $h$, as already does the neyman-pearson test for testing against any one-point alternative.

we now turn to the investigation of the test introduced in section [sec:nearest neighbors]. to motivate the choice of an optimal kernel for our test statistics and its relation to the optimal recovery problem, let us restrict our consideration to the gaussian white noise context, leading, in the case of univariate hölder-continuous densities on $[0,1]$, to the corresponding calibration. in the case of unbounded support of the solution, we may use a truncated version. assume that the class of mixed densities is equicontinuous and uniformly bounded away from zero. then, for any fixed smoothness, there exists a test such that the asymptotic power statement holds for any nondegenerate compact rectangle. in particular, the test is sharp-optimal adaptive with respect to the second hölder parameter. while in view of the results in ingster (1987) the optimal rate of testing may be expected, some technical effort had to be made to propose a calibration achieving even sharp minimax optimality.

remark: it is worth noticing that the procedure achieves the upper bound uniformly over a large class of possible mixed densities. the intrinsic reason is that conditioning on the observations is actually equivalent to conditioning on the pooled empirical measure, which indeed is a sufficient and complete statistic for the nuisance functional.

remark (sharp adaptivity with respect to both hölder parameters): our construction, including the procedure especially designed for the one-dimensional situation, involves one kernel, shifted and rescaled depending on the location and volume of the nearest-neighbor cluster under consideration.
due to the dependency of the optimal recovery solution on the hölder exponent, the corresponding test statistic achieves sharp adaptivity with respect to the second hölder parameter only. taking, in addition, the supremum $\sup_{\beta}T_n(\beta)$ over a range of smoothness values, sharp adaptivity with respect to both hölder parameters may be attained, provided that the above supremum statistic still defines a tight sequence (in probability), i.e. the corresponding sequence of quantiles is stochastically bounded. then the required convergence, for any random couple and any choice of the parameters, could be extracted from the proof of theorem [thm:effizienz2]; at least for a compact range of smoothness parameters there exist constants for which this holds.

the results from the previous paragraph deal with small scales of different (arbitrary) order, depending on the smoothness classes under consideration. in particular, the minimax lower bound is concerned with scales tending to zero as $n\to\infty$, and it is not yet clear that there is no substantial loss at rather large scales. the size of the possible deviation and the scale are linked in a specific way depending on the smoothness class under consideration, because the smoothness assumptions do not allow for arbitrarily fast decay to zero. the next theorem is different in spirit: we do not focus on smoothness classes but on stylized situations with $\phi$ lower-bounded in absolute value by a "plateau" within a ball, with the lebesgue measure on $[0,1]^d$ as reference measure.

let $\mathcal F$ be a family of measurable functions. for any probability measure $Q$ on the sample space, consider the pseudo-distance induced by $L^2(Q)$. then, for any $u>0$, the uniform covering numbers of $\mathcal F$ are defined as the supremum of the corresponding covering numbers, where the supremum runs over all probability measures $Q$.

(dümbgen and walther (2008, technical report)) [chaining] let $(X(t))_{t\in\mathcal T}$ be a stochastic process on a totally bounded pseudo-metric space $(\mathcal T,\rho)$. let $K$ be some positive constant, and for each level let $g(\cdot,\cdot)$ be a function, nondecreasing in its first argument, such that for all $s,t\in\mathcal T$ and $\eta>0$,
$$\mathbb{P}\big\{X(s)-X(t)>g\big(\eta,\rho(s,t)\big)\big\}\ \leq\ K\,e^{-\eta}.$$
then, for arbitrary levels, a maximal inequality holds over any dense subset of $\mathcal T$, with
$$J(\delta,a)\ :=\ \int_0^{\delta}g\big(a\,D(u)^2/u,\;u\big)\,du,\qquad D(u)=D(u,\rho,\mathcal T)\ :=\ \max\big\{\#\mathcal T_o:\ \mathcal T_o\subset\mathcal T,\ \rho(s,t)>u\ \text{for all distinct}\ s,t\in\mathcal T_o\big\}.$$

*remark.* suppose that the packing numbers and the function $g$ satisfy polynomial and logarithmic growth bounds with suitable constants. then elementary calculations show that the entropy integral $J(\delta,a)$ is finite for small arguments, with explicit constants.

for the proof of theorem [thm:limit distribution], the subsequent extension of the chaining lemma VII.9 in pollard (1984) and of theorem 8 in the technical report to dümbgen and walther (2008) will be used. it complements, in particular, the existing multiscale theory by a uniform tightness result, and covers a situation where only a sufficiently sharp _uniform stochastic_ bound on local covering numbers is available, which typically involves additional logarithmic terms. the situation arises, for example, in the multivariate random design case, where a non-stochastic bound obtained via uniform covering numbers and VC theory may be too rough.

[levy] let $(\xi_n)$ be a sequence of random variables such that $\xi_n$ takes values in some polish space. for any $n$, let $X_n$ be a stochastic process on some countable metric space. suppose that the following conditions are satisfied: *(i)* there are measurable functions bounding the conditional increments. the randomized quantities are defined via the empirical measures based on the permuted variables of the first and second sample, respectively, and the standardization is $\int\psi(\cdot)^2\,d\hat{\mathbb H}_n(x)$, with $\hat{\mathbb H}_n$ the empirical measure of the observations.
in the sequel we make use of the results in the previous section twice: first in order to prove the tightness and weak approximation in probability of the sequence of conditional test statistics, and, within the "loop", we use the chaining arguments again to establish a sufficiently tight uniform stochastic bound for the covering numbers below.

I. (subexponential increments and bernstein-type tail behavior.) the inversion of the conditional bernstein-type exponential inequality in proposition [prop:bernstein] shows that the increments of the randomized process are subexponential: for any pair of indices, let the random pseudo-metric $\hat\rho$ on the index set be defined through the conditional second moments; then the application of the second exponential inequality of proposition [prop:bernstein] implies, for any fixed pair, a tail bound of bernstein type.

II. (random local covering numbers.) we need a bound for the local random covering numbers. this is the most involved part of the proof. in contrast to previous work, we aim at a uniform stochastic bound. in order to establish a sufficiently sharp upper bound, the following two claims are established:

(i) for arbitrary different points, define the modified radius via the factor
$$\Big(1+c\,\log\big(4e\big/\max\big[\mathbb{E}\,\hat\rho_{2,n}^{\,2},\,4/n\big]\big)\Big),$$
with a positive constant $c$ to be chosen later. note that the resulting map is subadditive on the relevant range, and the claimed covering bound holds for any realization if the kernel is not rectangular. in the case of the rectangular kernel, the set in the covering number has to be replaced by a slightly enlarged one.

(ii) there exists a constant, independent of $n$ and the realization, such that whenever the radius is not too small, the upper bound given in (i) is again bounded from above by the desired quantity; moreover, the latter bound remains valid with the conditional metric in place of its expectation. note that we cannot base our bound directly on uniform covering numbers and vapnik-chervonenkis (VC) theory, as the envelope only allows for a bound of insufficient order, which would result in a loss of efficiency of the procedure, and a pre-partitioning of the index set, as used in the proof of (ii), seems to be rather involved.

_proof of (i):_ we first derive a uniform stochastic bound for the random metric. recall that every function of bounded total variation is representable as a difference of isotonic functions $\psi^{(1)}$ and $\psi^{(2)}$. with the definition of the subgraphs
$$\big\{(x,y)\in[0,1]^d\times\mathbb{R}:\ y\leq\psi^{(i)}_{tr}(x)\big\},\qquad i=1,2,$$
the corresponding collection of sets has a VC dimension bounded by a constant (van der vaart and wellner 1996), with a bounded envelope. consequently, the uniform covering numbers are polynomially bounded, for some real-valued exponent and some constant. the boundedness of the envelope shows that the class is uniformly glivenko-cantelli in particular (see dudley, giné and zinn 1991, for instance). as an immediate consequence, a law of large numbers holds uniformly; such a bound, however, is not sufficient for our purposes. because the squared random metric is an average of independent random variables with bounded absolute values, the application of bernstein's exponential inequality (see shorack and wellner 1986) entails
$$\mathbb{P}\Big(\big|\hat\rho^{\,2}-\mathbb{E}\hat\rho^{\,2}\big|\ >\ \eta\Big)\ \leq\ 2\exp\bigg(-\frac{\eta^2/2}{1+\eta/3}\bigg)\ \leq\ 2\exp\bigg(-\frac{3}{2}\eta+\frac{9}{2}\bigg)$$
for arbitrary points, i.e., suitably standardized, the squared metric has (uniformly) subexponential tails. analogously, the process has subexponential increments with respect to the discrete metric $\mathbb{1}\{a\neq b\}$, $a,b\in\mathcal T\times\mathcal T$. before deriving a stochastic bound, we notice the following: if $\psi$ describes the rectangular kernel, the random set is contained in a finite union of explicitly given sets. consider now the general case.
using the upper bounds in ([eq:varianz 1]) and ([eq:varianz 3]) for the standardizations, we may apply the above chain of arguments to both components and obtain the existence of a constant $c_1$ such that
$$\gamma_{1,n}-\frac{c_1\max\big[1/n,\gamma_{2,n}^2\big]^{1/2}}{\sqrt{n}}\log\Big(e\sqrt{n}\big/\max\big[1/n,\gamma_{2,n}^2\big]^{1/2}\Big)\ \leq\ \hat{\gamma}_{1,n}\ \leq\ \gamma_{1,n}+\frac{c_1\max\big[1/n,\gamma_{2,n}^2\big]^{1/2}}{\sqrt{n}}\log\Big(e\sqrt{n}\big/\max\big[1/n,\gamma_{2,n}^2\big]^{1/2}\Big)$$
on an event of asymptotic probability one, uniformly in the indices. the same holds true with another constant and a sequence of events with asymptotic probability one, with $\gamma_{1,n}$ and $\gamma_{2,n}$ replaced by their empirical counterparts. using the lower bound for $\hat\gamma_{1,n}$ and the upper bound for $\gamma_{1,n}$, a bit of algebra yields a bound of the form
$$2\max\bigg\{\delta,\ \max\big[1/n,\gamma_{2,n}^2\big]^{1/2}\frac{k}{\sqrt{n}}\log\Big(e\sqrt{n}\big/\max\big[1/n,\gamma_{2,n}^2\big]^{1/2}\Big)^2\bigg\}$$
whenever the stated condition holds. here and from now on, $k$ denotes some universal constant, not dependent on $n$ or the realization; its value may be different in different expressions. now we first consider the case in which the second term dominates; then the above condition yields the claimed covering bound by the isotonicity of the bounding function on the relevant interval. with the annulus
$$\Delta\Big[0,\frac{|t'-x|_2}{r'}\Big]$$
we obtain the representation
$$\int \mathbb{1}\big\{y\in M_x(t,t',r,r')\big\}\,d\mathbb{H}_n(x). \tag{eq:a}$$
then the hypothesis implies the required smallness; since $h$ is uniformly bounded from above, we obtain that ([eq:a]) is not greater than a constant multiple of the geometric difference of the balls. consequently, with the metric $\tilde d$ defined below in ([metric]), and due to the isotonicity on the relevant interval, the inequality implies that the covering number is controlled. thus, in order to finish claim (ii), it is sufficient to bound the covering numbers of the class of balls. first note that there exists a finite collection of at most polynomially many points such that the relevant set is contained in the union of slightly enlarged balls around these points, for some universal constant. the rotation and translation invariance of the lebesgue measure leads to a rescaling invariance for the covering numbers, and a minimal net of the set for some _fixed_ radius contains no more than a bounded number of elements. now fix a net with respect to $\tilde d$ and observe that, for nearby parameters, the packing bound follows, which shows that the quantity ([cn]) is bounded uniformly in $n$ and the realization. correspondingly, this holds true for the net of the second component, with asymptotic probability one, which entails the desired bound for some constant.

III. (tightness and weak approximation in probability.) as a consequence of the above exponential inequalities in step I and the bound for the uniform covering numbers, theorem [chaining] shows that the supremum, even running over all elements of the index set, is stochastically bounded. now the application of theorem [levy] entails that the sequence of conditional test statistics is tight in probability. what remains to be proved is the weak approximation. starting from ([eq:ec]), the uniform convergence ([eq:unif. conv.]) implies in particular the asymptotic stochastic equicontinuity. since every subsequence of the metrics contains a uniformly convergent subsubsequence, as a consequence of the relative compactness in the uniform topology, it suffices (via proof by contradiction) for the weak approximation in probability to establish the convergence of the finite-dimensional distributions; here, outer expectations are used where measurability is an issue. for the finite-dimensional convergence, let a finite collection of index points be fixed.
denote furthermore by the corresponding symbol the nearest neighbor of a point within the fixed collection, and let the limiting process be pointwise defined accordingly. using that the product of two indicator-type variables can be computed explicitly, one finds an expression for $\mathrm{cov}\big(Y_n(t,r),Y_n(t',r')\,\big|\,\mathbb{X}_n\big)$. an application of theorem [levy], as well as its subsequent remark, implies the convergence for any fixed collection; thus, because the variances obviously converge, we obtain the claimed finite-dimensional convergence.

proof of theorem [thm:lower bound]. let $B$ be some compact rectangle. for any integer, let a maximal subset of points be chosen such that the corresponding balls are disjoint. now let the kernel be the solution of the subsequent optimization problem: minimize the norm under the stated constraints. these constraints define a closed and convex set in $L^2\big([0,1]^d\big)$. for testing the hypothesis "$\phi=0$" it holds true that the minimal power is bounded in terms of the likelihood ratio; for brevity a shorthand notation is used in the sequel. note that the test is allowed to depend on the nuisance functional (in fact the log-likelihood and its distribution do). now we aim at determining the perturbation such that the right-hand side tends to zero as $n$ goes to infinity. although, for different centers, the likelihood ratios are not independent, they are independent conditional on the random vector counting the observations in the disjoint balls. following at this point standard truncation arguments as, for instance, in dümbgen and walther (2009), proof of lemma 7.4, it turns out to be sufficient for the convergence to zero of ([eq:likelihood]) to verify suitable moment conditions. here $V_d$ denotes the volume of the $d$-dimensional euclidean unit ball. together with ([expr])-([expr3]), this shows the claim for any sequence of admissible alternatives; in particular, the term in ([eq:e-approximation]) is of the required order. we need to check the convergence of the variances; for this we use the decomposition
$$\hat{\gamma}_n(t,r)^2\ =\ \hat{\gamma}_{2,n}(t,r)^2\ -\ \hat{\gamma}_{1,n}(t,r)^2,$$
and the fact that the first term dominates while the second is negligible. the remaining case is done analogously (taking the square). to verify ([eq:convergence variance]), it remains to be shown that the normalized quantities converge, which however is a simple consequence of chebyshev's inequality, since for any sequence of admissible alternatives, the relevant sequence, or some subsequence, decreases (if it decreases at all) at a slower rate than the reference one. the above considerations show in particular the stated approximation; consequently, it has to be verified that the latter quantity goes to infinity. recall that
$$\int_{[0,1]^d}\psi_{t_nr_n}(x)^2h_n(x)\,dx\ -\ \bigg(\int_{[0,1]^d}\psi_{t_nr_n}(x)h_n(x)\,dx\bigg)^2\ =\ \big(1+O(r_n^d)\big)\int_{[0,1]^d}\psi_{t_nr_n}(x)^2h_n(x)\,dx. \tag{appp}$$
we first assume that the radii tend to zero. using that
$$\lim_{n\to\infty}\sup_{t\in[0,1]^d}\sup_{x\in B_t(\delta_n)}\big|h_n(x)-h_n(t)\big|\ =\ 0,$$
which follows by the same argument as used in theorem [thm:lower bound] and the fact that any sequence of centers has a convergent subsequence by the compactness of $[0,1]^d$, the claim follows. for a function in the hölder class on $[0,1]^d$, consider its supremum over a ball intersected with $[0,1]^d$ and the partial differential operators up to the given order. in order to establish ([eq:function]), note that for any polynomial, the topology induced by the metrics corresponding to the two norms, the supremum norm on $[0,1]^d$ and the coefficient norm, is the topology of uniform convergence; hence these two norms are equivalent.
consequently , the boundedness of the polynomial by uniformly in implies that there exists some constant such that for all multi - indices with .now the mean value theorem implies for some intermediate point thus , ^d.\ ] ] if ^d\big) ] pointwise defined by ^d\big\} ] some nondegenerate rectangle , a sequence of functions with .it has to be shown that there exists a universal constant such that whenever .first , we choose a compact ball with center satisfying and ( [ eq : function ] ) .let the couple be defined by consulting the proof of theorem [ thm : effizienz2 ] , this definition of allows for an approximation as in ( [ term ] ) .since for all , ^d\big)\big]^{1/2}\\ & \geq\ c\arrowvert\tilde{\phi}_n\arrowvert_{j_i}^{(\beta+d/2)/\beta}\sqrt{n}\big(1+o(1)\big ) \end{aligned}\ ] ] for some universal constant . now the asserted result is easily deduced for a sufficiently large constant . [ [ proof - of - theorem - thm - local - alternatives . ] ] proof of theorem [ thm : local alternatives ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \(i ) let be such that ^d} ] with for , for and equals zero otherwise , where is such that and .then with iid uniformly distributed on ^d ] and .then , with denoting weak convergence , \longrightarrow_w\ \q\label{eq : limit}\ ] ] with the convolution and the poisson weights . since , we can apply le cam s notion of contiguity ( le cam and yang 2000 , chapter 3 ) to conclude that consequently .now assume that but . without loss of generalitywe may assume that . then lindeberg s clt entails that ( [ eq : limit ] ) holds true with again , the limiting distribution satisfies , whence .\(ii ) we begin as in the proof of theorem [ thm : effizienz2 ] , but with , and . adjusting ( [ expr ] ) ( [ eq : e - approximation ] ) yields the arguments of the proof of theorem [ thm : effizienz2 ] apply again and lead to the expansion while with the same reasoning as in the proof of theorem [ thm : spatial adaptivity ] for some constant and .thus , if and , ( [ ea ] ) goes to infinity and the result follows. start with a basic but useful property of the solution to ( [ eq : rec 2 ] ) .suppose has only finitely many extremal points .from the last extremal point on the function is monotoneous and the integral over can only be finite if both envelopes are vanishing in .now consider the case of infinitely many extremal points . since the -norm of the solution ( [ eq : rec 2 ] ) is finite and if there exists a sequence of local extrema of which stays uniformly bounded away from zero , their width must be bounded by a zero sequence .but now the result follows via contradiction of ( [ eq : function ] ) , which , of course , is also applicable for local extrema. let be fixed . define to be a positive real number such that the following conditions are satisfied : is a local extremal point , , ( doable by lemma [ lemm1 ] ) .now extend the function to a compactly supported function such that , and smaller than ; this is possible for sufficiently large ( because the uniform boundedness from yields the boundedness of all partial derivatives by a multiple of with the same argument as used in the proof of theorem [ thm : spatial adaptivity ] ; so one may extend the function first to a compactly supported one in and then extend it close to zero such that its integral vanishes ) - we omit an explicit construction at this point . 
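the norm-equivalence argument uses only that the space of polynomials of bounded degree is finite dimensional. as a quick numerical illustration (a hedged sketch with an arbitrary degree and grid, not from the paper), the ratio between the maximal coefficient and the sup-norm on the unit interval stays bounded away from zero and infinity over random polynomials of degree at most 4:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 2001)
ratios = []
for _ in range(2000):
    coeffs = rng.normal(size=5)                     # random degree-4 polynomial
    sup_norm = np.max(np.abs(np.polyval(coeffs, x)))
    ratios.append(np.max(np.abs(coeffs)) / sup_norm)
print(min(ratios), max(ratios))   # bounded away from 0 and from infinity
```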
with small , this construction leads to what was required in the proof of theorem [ thm : lower bound ] .lutz dmbgen s contribution to the decoupling subsequent to an extended discussion in bern is gratefully acknowledged .furthermore , i want to thank three unknown referees and an associate editor for their valuable comments and careful reading of the manuscript .: : behnen , k. , neuhaus , g. and ruymgaart , f. ( 1983 ) .two sample rank estimators of optimal nonparametric score - functions and corresponding adaptive rank statistics ., 588599 .: : belomestny , d. and spokoiny , v. ( 2007 ) . spatial aggregation of local likelihood estimates with application to classification ., 22872311 .: : bennett , g. ( 1962 ) .probability inequalities for sums of independent random variables . , 3345 . : : butucea , c. and tribouley , k. ( 2006 ) .nonparametric homogeneity tests . _j. statist .inference * 136 * _ , 597639 .: : donoho , d. ( 1994a ) . statistical estimation and optimal recovery, 238270 .: : donoho , d. ( 1994b ) .asymptotic minimax risk for sup - norm loss solution via optimal recovery ., 145170 .: : ducharme , g.r . and ledwina . t. ( 2003 ) .efficient and adaptive nonparametric test for the two - sample problem ., 20362058 .: : dudley , r.m ., gin , e. and zinn , j. ( 1991 ) uniform and universal glivenko - cantelli classes . _j. theoretical probability . * 4 * _ , 485510 .: : dmbgen , l. ( 2002 ) .application of local rank tests to nonparametric regression ., 511537 .: : dmbgen , l. and spokoiny , v.g .multiscale testing of qualitative hypotheses ., 124152 .: : dmbgen , l. and walther , g. ( 2008 ) .multiscale inference about a density ., 17581785 ; _ accompagnying technical report available at _http://arxiv.org/abs/0706.3968 : : eubank , r.l . and hart , j.d .testing goodness - of - fit in regression via order selection criteria ., 14121425 .: : fan , j. ( 1996 ) .test of significance based on wavelet thresholding and neyman s truncation ., 674688 .: : hjek , j. and , z. ( 1967 ) . _ theory of rank tests ._ academic press . : : gijbels , i. and heckmann , n. ( 2004 ) .nonparametric testing for a monotone hazard function via normalized spacings . _j. nonpar .statist .* 16 * _ , 463477 .: : hoeffding , w. ( 1963 ) .probability inequalities for sums of bounded random variables . , 1330 .: : ingster , y. ( 1987 ) .asymptotically minimax testing of nonparametric hypotheses ., 553574 .: : janic - wrblewska , a. and ledwina , t. ( 2000 ) . data driven rank test for two - sample problem ._ scand .j. statist .* 27 * _ , 281297 .: : klemel , j. and tsybakov , a. ( 2001 ) . sharp adaptive estimation of linear functionals . , 15671600 .: : le cam , l. and yang , g. ( 2000 ) ._ asymptotics in statistics : some basic concepts . _ springer , new york .: : ledwina , t. and kallenberg , w.c.m .consistency and monte carlo simulation of a data - driven version of smooth goodness - of - fit tests ., 15941608 .: : ledwina , t. ( 1994 ) .data - driven version of neyman s smooth test of fit ., 10001005 .: : leonov , s.l .( 1997 ) . on the solution of an optimal recovery problem and its applications in nonparametric statistics ., 476490 .: : leonov , s.l .remarks on extremal problems in nonparametric curve estimation . , 169178 .: : lepski , o. and tsybakov , a. ( 2000 ) .asymptotically exact nonparametric hypothesis testing in sup - norm and at a fixed point ._ probab .theory rel .fields * 117 * _ , 1748 . : : neuhaus , g. 
( 1982 ) .-contiguity in nonparametric testing problems and sample pitman efficiency ., 575582 .: : neuhaus , g. ( 1987 ) .local asymptotics for linear rank statistics with estimated score functions . , 491512 .: : nussbaum , m. ( 1996 ) .asymptotic equivalence of density estimation and gaussian white noise . , 2399 - 2430 .: : de la pe , v.h .a bound on the moment generating function of a sum of dependent variables with an application to simple sampling without replacement ., 197211 .: : de la pe , v.h .( 1999 ) . a general class of exponential inequalities for martingales and ratios ., 537564 .: : pollard , d. ( 1984 ) ._ convergence of stochastic processes ._ springer .: : rohde , a. ( 2008 ) .adaptive goodness - of - fit tests based on signed ranks ., 13461374 .: : rufibach , k. and walther , g. ( 2009 ) . a block criterion for multiscale inference about a density , with applications to other multiscale problems . , to appear .: : serfling , r.j .probability inequalities for the sum of sampling without replacement . , 39 - 48 . : : shorack , g.r . andwellner , j.a .( 1986 ) . _ empirical processes with applications to statistics . _wiley , new york .: : spokoiny , v. ( 1996 ) .adaptive hypothesis testing using wavelets ., 24772498 .: : van der vaart , a.w . and wellner , j.a ._ weak convergence and empirical processes ._ springer .: : walther , g. ( 2010 ) .optimal and fast detection of spatial clusters with scan statistics . , to appear .
based on two independent samples and drawn from multivariate distributions with unknown lebesgue densities and respectively , we propose an exact multiple test in order to identify simultaneously regions of significant deviations between and . the construction is built from randomized nearest - neighbor statistics . it does not require any preliminary information about the multivariate densities such as compact support , strict positivity or smoothness and shape properties . the properly adjusted multiple testing procedure is shown to be sharp - optimal for typical arrangements of the observation values which appear with probability close to one . the proof relies on a new coupling bernstein - type exponential inequality , reflecting the non - subgaussian tail behavior of a combinatorial process . for the power investigation of the proposed method , a reparametrized minimax set - up is introduced , reducing the composite hypothesis `` '' to a simple one with the multivariate mixed density as infinite - dimensional nuisance parameter . within this framework , the test is shown to be spatially and sharply asymptotically adaptive with respect to uniform loss on isotropic hölder classes . the exact minimax risk asymptotics are obtained in terms of solutions of the optimal recovery problem .
detection of sparse mixtures is an important problem that arises in many scientific applications such as signal processing , biostatistics , and astrophysics , where the goal is to determine the existence of a signal which only appears in a small fraction of the noisy data .for example , topological defects and doppler effects manifest themselves as non - gaussian convolution component in the cosmic microwave background ( cmb ) temperature fluctuations .detection of non - gaussian signatures are important to identify cosmological origins of many phenomena .another example is disease surveillance where it is critical to discover an outbreak when the infected population is small .the detection problem is of significant interest also because it is closely connected to a number of other important problems including estimation , screening , large - scale multiple testing , and classification .see , for example , , , , , and .one of the earliest work on sparse mixture detection dates back to dobrushin , who considered the following problem originating from multi - channel detection in radiolocation .let denote the rayleigh distribution with the density .let be independently distributed according to , representing the random voltages observed on the channels . in the absence of noise , s are all equal to one , the nominal value ; while in the presence of signal , exactly one of the s becomes a known value . denoting the uniform distribution on ] and satisfies the following relationship : therefore , the total variation distance converges to zero ( resp .one ) is equivalent to the squared hellinger distance converges to zero ( resp .we will be focusing on the hellinger distance partly due to the fact that it tensorizes nicely under the product measures : denote the hellinger distance between the null and the alternative by in view of and , the fundamental limits and can be equivalently defined as follows in terms of the asymptotic squared hellinger distance : this section we characterize the detectable region explicitly by analyzing the exact asymptotics of the hellinger distance induced by the sequence of distributions . this subsectionwe focus on the case of sparse normal mixture with and absolutely continuous .we will argue in that by performing the lebesgue decomposition on if necessary , we can reduce the general problem to the absolutely continuous case .we first note that the _ essential supremum _ of a measurable function with respect to a measure is defined as we omit mentioning if is the lebesgue measure .now we are ready to state the main result of this section .let .assume that has a density with respect to the lebesgue measure .denote the log - likelihood ratio by let be a measurable function and define 1 . if _ uniformly _ in , where on a set of positive lebesgue measure , then .2 . if _ uniformly _ in , then . consequently , if the limits in and agree and on a set of positive measure , then .[ thm : main ] . assuming the setup of, we ask the following question in the reverse direction : what kind of function can arise in equations and ?the following lemma ( proved in ) gives a necessary and sufficient condition for .however , in the special case of convolutional models , the function needs to satisfy more stringent conditions , which we also discuss below .suppose holds _ uniformly _ in for some measurable function .then in particular , lebesgue - a.e .conversely , for all measurable that satisfies , there exists a sequence of , such that holds . 
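the tensorization identity for the hellinger distance, 1 - H^2(P^n, Q^n)/2 = (1 - H^2(P, Q)/2)^n, makes the n-fold quantity computable from a one-dimensional integral. the following python sketch (parameter values are illustrative assumptions, not from the paper) evaluates the squared hellinger distance between the standard normal and a sparse normal location mixture by quadrature and then tensorizes:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def h2(eps, mu):
    """squared hellinger distance between N(0,1) and the sparse mixture
    (1 - eps) N(0,1) + eps N(mu, 1), computed by numerical quadrature."""
    f = lambda x: (np.sqrt(norm.pdf(x))
                   - np.sqrt((1 - eps) * norm.pdf(x) + eps * norm.pdf(x - mu))) ** 2
    return quad(f, -20, 20 + mu)[0]

n, beta, r = 10_000, 0.7, 0.4
eps, mu = n ** (-beta), np.sqrt(2 * r * np.log(n))
h = h2(eps, mu)
# tensorization: 1 - H^2(P^n, Q^n) / 2 = (1 - H^2(P, Q) / 2) ** n
h_n = 2 * (1 - (1 - h / 2) ** n)
print(h, h_n)
```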
additionally , if the model is convolutional , i.e. , , then is convex .[ lmm : alpha ] in many applications , we want to know how fast the optimal error probability decays if lies in the detectable region .the following result gives the precise asymptotics for the hellinger distance , which also gives upper bounds on the total variation , in view of .assume that holds .for any , the exponent of the hellinger distance is given by where which satisfies ( resp . ) if and only if ( resp . ) .[ thm : ebeta ] as an application of , the following result relates the fundamental limit of the convolutional models to the classical ingster - donoho - jin detection boundary : let .assume that has a density which satisfies that uniformly in for some measurable .then where is the ingster - donoho - jin detection boundary defined in .[ cor : conv ] it should be noted that the convolutional case of the normal mixture detection problem is briefly discussed in ( * ? ? ?* section 6.1 ) , where inner and outer bounds on the detection boundary are given but do not meet . here completely settles this question .see for more examples .we conclude this subsection with a few remarks on . under the assumption that the function on a set of positive lebesgue measure , the formula shows that the fundamental limit lies in the very sparse regime ( ) .we discuss the two extremal cases as follows : 1 ._ weak signal _ : note that if and only if almost everywhere . in this casethe non - null effect is too weak to be detected for any .one example is the zero - mean heteroscedastic case with .then we have .2 . _ strong signal _ : note that if and only if there exists , such that and at this particular , the density of the signal satisfies , which implies that there exists significant mass beyond , the extremal value under the null hypothesis .this suggests the possibility of constructing test procedures based on the _ sample maximum_. indeed , to understand the implication of more quantitatively , let us look at an even weaker condition : there exists such that and which , as shown in , implies that .[ rmk : extreme ] in general need not exist .based on , it is easy to construct a gaussian mixture where and do not coincide .for example , let and be two measurable functions which satisfy and give rise to different values of in , which we denote by . then there exist sequences of distributions and which satisfy for and respectively .now define by and . then by , we have .the detection boundary in is obtained by deriving the limiting distribution of the log - likelihood ratio which relies on the normality of the null hypothesis .in contrast , our approach is based on analyzing the sharp asymptotics of the hellinger distance .this method enables us to generalize the result of to sparse non - gaussian mixtures , where we even allow the null distribution to vary with the sample size .consider the hypothesis testing problem .let .denote by and the cdf and the quantile function of , respectively , i.e. , if the log - likelihood ratio satisfies as _ uniformly _ in for some measurable function . if on a set of positive lebesgue measure , then [ thm : ng ] the function appearing in satisfies the same condition as in . 
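for reference, the classical ingster-donoho-jin boundary appearing in the corollary can be tabulated directly; the sketch below encodes the standard formula from this literature (the piecewise expression is well known, the sample points are arbitrary):

```python
import numpy as np

def rho_star(beta):
    """classical detection boundary for the sparse normal location mixture
    (1 - n**-beta) N(0,1) + n**-beta N(mu_n, 1), mu_n = sqrt(2 r log n):
    reliable detection is possible iff r > rho_star(beta)."""
    if 0.5 < beta <= 0.75:
        return beta - 0.5
    if 0.75 < beta < 1.0:
        return (1.0 - np.sqrt(1.0 - beta)) ** 2
    raise ValueError("defined for beta in (1/2, 1)")

for b in (0.55, 0.7, 0.75, 0.8, 0.9, 0.99):
    print(b, rho_star(b))
```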
comparing with, we see that the uniform convergence condition is naturally replaced by the uniform convergence of the log - likelihood ratio evaluated at the null quantile .using the fact that for all , which implies that uniformly as , we can recover from by setting .the results in and are obtained under the assumption that the non - null effect is absolutely continuous with respect to the null distribution .next we show that it does not lose generality to focus our attention on this case . using the hahn - lebesgue decomposition ( * ? ? ?* theorem 1.6.3 ) , we can write for some ], we have } \nonumber \\ = & ~ { \mathop{\mathrm{ess \ , sup}}}_x { \left\ { -x^2 + 2 u x \right\ } } \log n ( 1+o(1 ) ) \label{eq : alphadilate } , \end{aligned}\ ] ] where we have applied and the essential supremum in is with respect to , the distribution of . therefore . applying yields the existence of , given by where follows from the facts that is increasing and that . tightens the bounds given at the end of ( * ? ? ?* section 6.1 ) based on the interval containing the signal support . from we see that the detection boundary coincides with the classical case with replaced by -norm of . therefore , as far as the detection boundary is concerned , only the support of matters and the detection problem is driven by the maximal signal strength .in particular , for or non - compactly supported , we obtain the degenerate case ( see also about the strong - signal regime ) . however , it is possible that the density of plays a role in finer asymptotics of the testing problem , e.g. , the convergence rate of the error probability and the limiting distribution of the log - likelihood ratio at the detection boundary .one of the consequences of is the following : as long as , non - compactly supported results in the degenerate case of , since the signal is too strong to go undetected .however , this conclusion need not be true if behaves differently .we conclude this subsection by constructing a family of distributions of with unbounded support and an appropriately chosen sequence , such that the detection boundary is non - degenerate : let be distributed according to the following _ generalized gaussian _ ( subbotin ) distribution with shape parameter , whose density is put .then the density of is given by .hence which satisfies the condition with applying , we obtain the detection boundary ( a two - dimensional _ surface_ parametrized by shown in ) as follows where is the ingster - donoho - jin detection boundary .dilate.pdf ( 8,60) ( 102,3) ( 60,57) ( 60,46) ( 60,26) ( 60,15) equation can be further simplified for the following special cases . * ( laplace ) : plugging into , straightforward computation yields * ( gaussian ) : in this case we have and .this is a special case of the heteroscedastic case in , which will be discussed in detail in .simplifying we obtain which coincides with .the heteroscedastic normal mixtures considered in corresponds to with given in and .in particular , if , is given by the convolution , where the gaussian component models the variation in the signal amplitude . for any , where similar to the calculation in , we have . ] and note that if . assembling and applying , we have solving the equation in yields the equivalent detection boundary in terms of . 
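samples from the subbotin family are easy to generate via a gamma transform. the sketch below assumes the common normalization in which the density is proportional to exp(-|x|^gamma / gamma) (the normalization in the text is stripped, so this is an assumption); under it, |X|^gamma / gamma is Gamma(1/gamma, 1) distributed, and E|X|^gamma = 1:

```python
import numpy as np

def sample_subbotin(gamma, size, rng):
    """draw from the density c_gamma * exp(-|x|**gamma / gamma): if
    Z ~ Gamma(1/gamma, 1) then (gamma * Z)**(1/gamma) with a random
    sign has this law."""
    z = rng.gamma(shape=1.0 / gamma, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * (gamma * z) ** (1.0 / gamma)

rng = np.random.default_rng(3)
for gamma in (1.0, 2.0, 4.0):   # gamma = 1: laplace-type, gamma = 2: gaussian
    x = sample_subbotin(gamma, 200_000, rng)
    print(gamma, np.mean(np.abs(x) ** gamma))   # should be close to 1
```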
in the special case of , where the signal is distributed according to , we have therefore , as long as the signal variance exceeds that of the noise , reliable detection is possible in the very sparse regime , even if the average signal strength does not tend to infinity .we consider the detection boundary of the following generalized gaussian location mixture which was studied in ( * ? ?* section 5.2 ) : where is defined in , and . since in , is fulfilled with .applying , we have it is easy to verify that agrees with the results in ( * ? ? ?* theorem 5.1 ) .similarly , the detection boundary for exponential- mixture in ( * ? ? ?* theorem 1.7 ) can also be derived from .we conclude the paper with a few discussions and open problems . our main results in only concern the _ very sparse _ regime .this is because under the assumption in that on a set of positive lebesgue measure , we always have . one of the major distinctions between the very sparse and moderately sparse regimes is the effect of symmetrization . to illustrate this point ,consider the sparse normal mixture model .given any , replacing it by its symmetrized version always increases the difficulty of testing .this follows from the inequality , a consequence of the convexity of the squared hellinger distance and the symmetry of .a natural question is : does symmetrization always have an impact on the detection boundary ? in the very sparse regime , it turns out that under the regularity conditions imposed in , symmetrization does not affect the fundamental limit , because both and give rise to the same function .it is unclear whether and remain unchanged if an arbitrary sequence is symmetrized .however , in the moderately sparse regime , an asymmetric non - null effect can be much more detectable than its symmetrized version .for instance , direct calculation ( see for example ( * ? ? ?* section 2.2 ) ) shows that for , but for .moreover , unlike in the very sparse regime , moment - based tests can be powerful in the moderately sparse regime , which guarantee that .for instance , in the above examples or , the detection boundary can be obtained by thresholding the sample mean or sample variance respectively .more sophisticated moment - based tests such as the excess kurtosis tests have been studied in the context of sparse mixtures .it is unclear whether they are always optimal when .while establishes the adaptive optimality of the higher criticism test in the very sparse regime , the optimality of the higher criticism test in the moderately sparse case remains an open question .note that in the classical setup , it has been shown that the higher criticism test achieves adaptive optimality for ] and , respectively .[ sqrt2 ] for any , [ lmm : sqrt ] 1 .since is strictly concave , is strictly convex .solving for the stationary point yields the minimum at .first we consider .since is convex , is increasing .consequently , we have for all .+ next we consider . by the concavity of , is decreasing .hence for all ] and holds for almost every ] .hlder s inequality yields , which gives the desired .it then follows from that , i.e. , a.e . _( sufficiency ) _ let be a measurable function satisfying .let be a probability measure with the density which is a legitimate density function in view of .then the log - likelihood ratio satisfies , which fulfills uniformly . for convolutional models ,the convexity of is inherited from the geometric properties of the log - likelihood ratio in the normal location model : since }}{\varphi(y)} ] and . 
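the remark that thresholding the sample variance detects in the moderately sparse regime can be checked by a quick monte carlo. the sketch below is a hedged illustration (sample sizes and parameters are arbitrary choices, and the normalization of the statistic is one natural option, not necessarily the one intended in the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, tau = 100_000, 0.4, 2.0      # moderately sparse: beta < 1/2
eps = n ** (-beta)

def variance_stat(x):
    # centered, scaled sum of squares; approximately N(0,1) under the null
    return (np.sum(x ** 2) - len(x)) / np.sqrt(2 * len(x))

null = [variance_stat(rng.normal(size=n)) for _ in range(200)]
alt = []
for _ in range(200):
    spikes = rng.random(n) < eps
    alt.append(variance_stat(rng.normal(size=n) * np.where(spikes, tau, 1.0)))
print(np.mean(null), np.mean(alt))    # the alternative separates clearly
```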
dividing both sides by and sending , we have . since , we have where the last equality follows from . plugging the above asymptotics into , we see that is fulfilled uniformly in with applying , we obtain where the last step follows from the .let . put . since by assumption , we also have .denote the likelihood ratio ( radon - nikodym derivative ) by .then instead of introducing the random variable in for the gaussian case , we apply the quantile transformation to generate the distribution of : let be uniformly distributed on the unit interval .then which is exponentially distributed . putting we have set and , which satisfy for all sufficiently large .for the converse proof , we can write the square hellinger distance as an expectation with respect to : } \\ & ~ + { \mathbb{e}\left [ { \left ( \sqrt{1 + n^{-\beta } ( \exp(\ell_n(z_n(1-u ) ) ) - 1 ) } - 1 \right)}^2 { { \mathbf{1}_{\left\{{0 < u \leq \frac{1}{2}}\right\ } } } } \right ] } .\end{aligned}\ ] ] analogous to , by truncating the log - likelihood ratio at zero , we can show that the hellinger distance is dominated by the following : } \nonumber \\ & ~ + { \mathbb{e}\left [ { \left ( \sqrt{1 + n^{-\beta } ( \exp(\ell_n(z_n(1-u ) ) ) - 1 ) } - 1 \right)}^2 { { \mathbf{1}_{\left\{{0< u \leq \frac{1}{2 } , t_n(s_n ) \geq 0}\right\ } } } } \right ] } \nonumber \\ = & ~ { \mathbb{e}\left [ { \left ( \sqrt{1 + n^{-\beta } ( \exp(r_n(s_n ) ) - 1 ) } - 1 \right)}^2 { { \mathbf{1}_{\left\{{s_n > \log_n 2 , r_n(s_n ) \geq 0}\right\ } } } } \right ] } \label{eq : jg1 } \\ & ~ + { \mathbb{e}\left [ { \left ( \sqrt{1 + n^{-\beta } ( \exp(t_n(s_n ) ) - 1 ) } - 1 \right)}^2 { { \mathbf{1}_{\left\{{s_n \geq \log_n 2 , t_n(s_n ) \geq 0}\right\ } } } } \right ] } \nonumber \\ \leq & ~ { \mathbb{e}\left [ { \left ( \sqrt{1 + n^{-\beta } ( n^{\alpha_0(s_n)+\delta } - 1 ) } - 1 \right)}^2 + { \left ( \sqrt{1 + n^{-\beta } ( n^{\alpha_1(s_n)+\delta } - 1 ) } - 1 \right)}^2 \right ] } \label{eq : jg2 } \\ \leq & ~ 2 \ , { \mathbb{e}\left [ n^{2(\alpha_0 \vee \alpha_1(u ) + \delta-\beta ) \wedge ( \alpha_0 \vee \alpha_1(u ) + \delta-\beta ) } \right ] } \\ \leq & ~ n^{-1-\delta } \label{eq : jg3 } \end{aligned}\ ] ] where follows from , from and from . the direct part of the proof is completely analogous to that of by lower bounding the integral in .let , which is uniformly distributed on ] .moreover , we have and since . combining , and, we obtain that is , the type - ii error probability also vanishes .consequently , a sufficient condition for the higher criticism test to succeed is where follows from the following reasoning : by ( * ? ? ?* proposition 3.5 ) , the supremum and the essential supremum ( with respect to the lebesgue measure ) coincide for all lower semi - continuous functions .indeed , is lower semi - continuous by , and so is .it remains to show that the right - hand side of coincides with the expression of in .indeed , we have note that the second equality follows from interchanging the essential supremums : for any bi - measurable function , where the last essential supremum is with respect to the product measure . thus the proof of the theorem is completed .this appendix collects a few properties of total variation and hellinger distances for mixture distributions .let and .then which satisfies [ lmm : h2.mix ] since , there exists a measurable set such that and . then the inequalities in follow from and the facts that and . for any probability measures , is decreasing on ] . 
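the quantile transformation used in the proof is the standard inverse-cdf construction, which a two-line simulation confirms (the sample size is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
u = rng.random(100_000)

e = -np.log(u)                      # quantile transform: -log U ~ Exp(1)
print(stats.kstest(e, "expon"))     # large p-value: consistent with Exp(1)

z = stats.norm.ppf(u)               # more generally F^{-1}(U) has cdf F
print(stats.kstest(z, "norm"))
```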
in view of the first inequality in, the total variation distance can be lower bounded as follows : using , we have on the other hand , where the last equality is due to .therefore for any , which proves that .in fact , the above derivation also shows that the following _ maximum test _ achieves vanishing probability of error : declare if and only if .in general the maximum test is suboptimal .for example , in the classical setting where , ( * ? ? ?* theorem 1.3 ) shows that the maximum test does not attain the ingster - donoho - jin detection boundary for $ ] .
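the maximum test admits a direct monte carlo illustration. in the sketch below (not from the paper), the threshold sqrt(2 log n) is one standard choice assumed for illustration, and the parameters place the alternative well inside the strong-signal regime:

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta, r = 100_000, 0.8, 1.2
eps, mu = n ** (-beta), np.sqrt(2 * r * np.log(n))
thr = np.sqrt(2 * np.log(n))        # a standard threshold choice

def max_test(x):                    # declare H1 iff the sample maximum is large
    return np.max(x) > thr

null_err = np.mean([max_test(rng.normal(size=n)) for _ in range(50)])
power = np.mean([max_test(rng.normal(size=n) + mu * (rng.random(n) < eps))
                 for _ in range(50)])
print(null_err, power)              # small type-i error, power close to one
```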
detection of sparse signals arises in a wide range of modern scientific studies . the focus so far has been mainly on gaussian mixture models . in this paper , we consider the detection problem under a general sparse mixture model and obtain an explicit expression for the detection boundary . it is shown that the fundamental limits of detection are governed by the behavior of the log - likelihood ratio evaluated at an appropriate quantile of the null distribution . we also establish the adaptive optimality of the higher criticism procedure across all sparse mixtures satisfying certain mild regularity conditions . in particular , the general results obtained in this paper recover and extend , in a unified manner , the previously known results on sparse detection far beyond the conventional gaussian model and other exponential families . * keywords : * hypothesis testing , high - dimensional statistics , sparse mixture , higher criticism , adaptive tests , total variation , hellinger distance .
in contemporary communication networks , the nodes perform only routing , i.e. , they copy the data on incoming links to the outgoing links . in order to transmit messages generated simultaneously from multiple sources to multiple sinksthe network may need to be used multiple times .this limits the throughput of the network and increases the time delay too .network coding is known to circumvent these problems . in network coding intermediate nodes in a networkare permitted to perform coding operations , i.e. , encode data received on the incoming links and then transmit it on the outgoing links ( each outgoing link can get differently encoded data ) , the throughput of the network increases .thus , network coding subsumes routing .for example , consider the butterfly network of fig .[ b_fly ] wherein each link can carry one bit per link use , source node generates bits and , and both sink nodes and demand both source bits . with routing only ,two uses of link are required while with network coding only one .this is an example of single - source multi - sink linear multicast network coding , wherein there is a single source ( ) , generating a finite number of messages , ( ) , and multiple sinks , each demanding all the source messages and the encoding operations at all nodes are linear . in general , there may be several source nodes , each generating a different number of source messages , and several sink nodes , each demanding only a subset , and not necessarily all , of the source messages .decoding at sink nodes with such general demands is studied in this paper .we represent a network by a finite directed acyclic graph , where is the set of vertices or nodes and is the set of directed links or edges between nodes .all links are assumed to be error - free .let denote a -ary finite field .the set is denoted by ] , and sinks , ] .let be the total number of source messages .the -tuple of source messages is denoted by }=(x_1,x_2,\ldots , x_\omega) ] . by denote the column vector of the source messages .the demand of the sink node is denoted by ] , let , i.e. , } ] , we do not differentiate between and . for a multi - variable binary - valued function , the subset of whose elements are mapped to by called its support and is denoted by })) ] denotes the -tuples in the support restricted to .a source message is denoted by edges without any originating node and terminating at a source node .data on a link is denoted by .a network code is a set of coding operations to be performed at each node such that the requisite source messages can be faithfully reproduced at the sink nodes .it can be specified using either local or global description .the former specifies the data on a particular outgoing edge as a function of data on the incoming edges while the latter specifies the data on a particular outgoing edge as a function of source messages . throughout the paper we use global description for our purposes .[ global description of a network code ] an -dimensional network code on an acyclic network over a field consists of global encoding maps for all , i.e. 
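the butterfly example can be spelled out in a few lines of code. the sketch below implements the classical xor code for fig. [b_fly]: the bottleneck link carries the xor of the two source bits and each sink recovers both bits from its two incoming edges in a single network use:

```python
def butterfly(b1, b2):
    """one use of the butterfly network with the classical xor code: the
    bottleneck link carries b1 xor b2 and each sink decodes both bits."""
    coded = b1 ^ b2                  # data on the bottleneck link
    sink1 = (b1, b1 ^ coded)         # t1 receives b1 and b1 xor b2
    sink2 = (b2 ^ coded, b2)         # t2 receives b2 and b1 xor b2
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
print("both sinks recover (b1, b2) in a single network use")
```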
, .let , be the incoming edges at the source , then .when the intermediate nodes perform only linear encoding operations , the resulting network code is said to be a linear network code ( lnc ) .[ global description of an lnc ] an -dimensional lnc on an acyclic network over a field consists of number of global encoding vectors for all such that .the global encoding vectors for the incoming edges at the source are standard basis vectors for the vector space .the global encoding vectors of the lnc for butterfly network is given in fig .[ b_fly](b ) .hereafter we assume that the network is feasible , i.e. , demands of all sink nodes can be met using network coding , and the global description of a network code ( linear or nonlinear ) is available at the sink nodes .if a sink node demands ( ) source messages , it will have at least incoming edges .the decoding problem is to reproduce the desired source messages from the coded data received at the incoming edges .thus , decoding amounts to solving a set of at least simultaneous equations ( linear or nonlinear ) in unknowns for a specified set of unknowns .hence , the global description of a network code is more useful for decoding .while decoding of nonlinear network codes has not been studied , the common technique used for decoding an lnc for multicast networks is to perform gaussian elimination , which requires operations , followed by backward substitution , which requires operations .this is not recommendable when the number of equations ( incoming coded messages ) and/or variables ( source messages ) is very large . in such cases ,iterative methods are used .convergence and initial guess are some issues that arise while using iterative methods .we propose to use the sum - product ( sp ) algorithm to perform iterative decoding at the sinks . a similar scheme for decoding multicast network codes using factor graphs was studied in in which the authors considered the case of lncs .the problems associated with the proposed decoding scheme in are : * to construct the factor graph , full knowledge of network topology is assumed at the sinks which is impractical if the network topology changes . for a particular sink node ( say ) , the factor graph constructed will have variable nodes and factor nodes , where is the set of incoming edges at node . *complete knowledge of local encoding matrix of each node is assumed at the sinks which again is impractical since local encoding matrix for different nodes will have different dimensions and hence variable number of overhead bits will be required to communicate to downstream nodes which will incur huge overhead .we also point out that the motivating examples , _ viz ._ , examples 1 and 4 , given in for which the proposed decoding method claims to exploit the network topology admits a simple routing solution and no network coding is required to achieve maximum throughput . solving a system of linear equations in boolean variablesis also studied in ( * ? ? ?18 ) .algorithm * + * single - vertex * & & + * all - vertex * & & + * single - vertex with traceback * & not applicable & + [ cols= " < " , ] total number of operations ( ands and ors ) required with traceback is , which is operations , and that without traceback are , which is operations .thus , running single - vertex sp algorithm followed by traceback step affords computational advantage over the multiple - vertex version. 
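decoding a linear network code amounts to solving the linear system formed by the global encoding vectors, as discussed above. the following python sketch (a toy instance over GF(2), not a production routine) performs the gauss-jordan elimination whose cost motivates the iterative approach of this paper:

```python
import numpy as np

def solve_gf2(G, y):
    """solve G x = y over GF(2) by gauss-jordan elimination;
    assumes the system is consistent and has a unique solution."""
    A = np.column_stack([G % 2, y % 2]).astype(np.uint8)
    m, n = G.shape
    pivot_cols, row = [], 0
    for col in range(n):
        rows = np.nonzero(A[row:, col])[0]
        if rows.size == 0:
            continue
        A[[row, row + rows[0]]] = A[[row + rows[0], row]]  # pivot swap
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]            # eliminate above and below
        pivot_cols.append(col)
        row += 1
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivot_cols):
        x[col] = A[r, n]
    return x

G = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])  # global encoding vectors as rows
x = np.array([1, 0, 1])                          # source messages
y = G.dot(x) % 2                                 # data on the incoming edges
assert (solve_gf2(G, y) == x).all()
```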
will now determine the number of semiring operations required to compute the desired marginal functions in an mpf problem using the sp algorithm and the desired supports in an arg - mpf problem using the arg - sp algorithm with and without traceback in the boolean semiring . in this section , by addition and multiplication we mean the boolean or and and operations .by remark [ rem_supt_or ] , is considered same as addition .let be an acyclic factor graph with variable nodes and factor node .the local domain of a node is denoted by , the cardinality of its configuration space by , and its degree by . for an egde between nodes and , and . for every node ,define if and otherwise .the message passed from a variable node to a factor node as given in is in the above equation , for each of the values of , product of messages is required which requires multiplications . for each of the values of , additions and multiplicationsare required .thus , the total number of operations required are additions and multiplications .the messages passed from a factor node to a variable node as given in is this involves product of a local functions with messages for each of the values of .the total number of operations required for this case is additions and multiplications .the messages are passed by all nodes except the root node . at the root node ,the marginal function is the product of messages , requiring multiplications , if it is a variable node and the product of messages with the local function , requiring multiplications , if it is a factor node . in other words ,computation of marginal function at requires multiplications .thus , the total number of additions and multiplications required in the single - vertex sp algorithm is and the grand total of the number of additions and multiplications is in the arg - sp algorithm , support of marginal at is computed which requires additions ( by remark [ rem_supt_ops ] ) so that the grand total of operations in this case is in this case , first the single - vertex arg - sp algorithm with as the root is executed on the factor graph. then the local domain of a neighbor of is partitioned into sets and .the value is already known from decoding at , and is computed using as follows : where the table of values of the partial marginal was already computed at while passing the message to the root .we need to look only at the rows for which and output the value of for which .this requires additions , where and .the total number of multiplications remains the same as in the single - vertex arg - sp algorithm , which is , but the number of additions is the sum of the number of additions required in single - vertex sp algorithm and the number of additions required at each node , which is at most .thus , the grand total of operations is at most in the all - vertex sp algorithm , first the messages are passed by all the nodes on the unique path towards the root .when the root has received messages from all its neighbors , messages are passed on each edge in the reverse direction , i.e. , away from the root and towards the leaves . when all the leaves have received the messages , marginal functions of each node is computedwe use the method suggested in ( * ? ? 
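the operation counts above refer to semiring additions (OR) and multiplications (AND). as a brute-force reference against which a sum-product implementation and its counts can be checked, the sketch below (toy factors, not from the paper) computes a boolean-semiring marginal directly:

```python
from itertools import product

def boolean_marginal(factors, scopes, n_vars, target):
    """brute-force mpf in the boolean semiring (OR as addition, AND as
    multiplication): for each value of the target variable, OR over all
    assignments of the AND of the factors."""
    marg = {0: 0, 1: 0}
    for assign in product((0, 1), repeat=n_vars):
        val = 1
        for f, scope in zip(factors, scopes):
            val &= f(*(assign[i] for i in scope))
            if not val:
                break
        marg[assign[target]] |= val
    return marg

# toy factors: x0 xor x1 = 1 and not (x1 and x2)
f1 = lambda a, b: int(a ^ b == 1)
f2 = lambda b, c: int(not (b and c))
print(boolean_marginal([f1, f2], [(0, 1), (1, 2)], 3, target=0))
```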
?v ) to compute messages and marginal function .let a node have degree and has received messages from all but one of its neighbors which is on the unique path from to the root .for an instance of , let be the values of the known messages , be the value of the message it is yet to receive from , and be the value of its local function , assumed to be if , i.e. , , . the messages involves the product of with all excluding and summing over suitable variables as in and ; there are such messages to be sent , one to each neighbor . . ]this can be achieved by computing the following products consecutively : , , , , ; this step requires multiplications .now passes to ( after summing over suitable variables ) and awaits the reception of from .once is received , the marginal functions is computed , , which requires multiplication .then the following products are computed consecutively : , , , , ; this step requires multiplications .subsequently , are computed as follows : , , , , ; this step requires multiplication .various messages received and passed by node are depicted fig .[ cplxty ] .thus , computation of all the messages to be passed by and its marginal function requires multiplications for each of the values in .this is true for the root node also .hence , total number of multiplications required is ] in this section .the network model is same as given in section i - a for network coding problem with the exception that the sink nodes demand a function of messages rather than a subset of messages , i.e. , a sink node demands the function .a network code comprises global encoding maps , one for each edge , such that there exist ( decoding ) maps , , for each sink ] , of messages , we assume it to be a map from to for simplicity rather than from to .if a sink demands functions , then such a sink may be replaced by sinks each demanding one function but the incoming information to these new sinks is the same ( see fig . [ multi_func ] ) .the in - network function computation problem is to design network code that maximizes the frequency of target functions computation , called the _ computing capacity _ , per network use . in , bounds on rate of computing symmetric functions ( invariant to argument permutations ) , like minimum , maximum , mean , median and mode , of data collected by sensors in a wireless sensor network at a sink node were presented . the notion of min - cut bound for the network coding problem extended to function computation problem in a directed acyclic network with multiple sources and one sink in .the case of directed acyclic network with multiple sources , multiple sinks and each sink demanding the sum of source messages was studied in ; such a network is called a sum - network .relation between linear solvability of multiple - unicast networks and sum - networks was established .furthermore , insufficiency of scalar and vector linear network codes to achieve computing capacity for sum - networks was shown .coding schemes for computation of arbitrary functions in directed acyclic network with multiple sources , multiple sinks and each sink demanding a function of source messages were presented in . in , routing capacity , linear coding capacity and nonlinear coding capacity for function computation in a multiple source single sink directed acyclic network were compared and depending upon the demanded functions and alphabet ( field or ring ) , advantage of linear network coding over routing and nonlinear network coding over linear network coding was shown . 
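the consecutive-products schedule described above is the standard prefix/suffix trick for computing all leave-one-out products. a minimal self-contained sketch (ordinary integer multiplication stands in for the semiring product):

```python
def leave_one_out_products(ms):
    """all products of ms with one factor left out, via prefix and suffix
    products as in the schedule described above: about 3(d - 2) semiring
    multiplications instead of the naive d(d - 1)."""
    d = len(ms)
    prefix, suffix = [1] * (d + 1), [1] * (d + 1)
    for i in range(d):
        prefix[i + 1] = prefix[i] * ms[i]
        suffix[d - 1 - i] = suffix[d - i] * ms[d - 1 - i]
    return [prefix[i] * suffix[i + 1] for i in range(d)]

print(leave_one_out_products([2, 3, 5, 7]))   # [105, 70, 42, 30]
```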
in order to obtain the value of its desired functions , a sink node may require to perform some operations on the messages it receives on the incoming edges .though there are many results on bounds on the computing capacity and coding schemes for in - function computation problem , the decoding operation to be performed at the sink nodes to obtain the value of the desired functions has not been studied .we now formulate computation of the desired functions at sink nodes as an mpf problem over the boolean semiring and use the sp algorithm on a suitably constructed factor graph for each sink to obtain the value of the desired functions .we consider decoding at the sink node .it demands the function , where is the set of arguments of for some ] , we have . by remark [ rem_func ] , . a look - up table ( lut )approach to decoding is to maintain a table with rows and two columns at each sink : first column containing all possible incoming message vectors , , and the second column listing corresponding values of the demanded function , . given an instance of incoming messages , a sink node locates the row containing that -tuple in the first column of the lut and then outputs the value in the second column of the row , which is the desired function value . if two rows in the lut have the same entry in the first column ( network code is a many - to - one map ) , the entry in the second column will also be same .on the contrary , if for two , but for all and some ] and , .let be a realization of the message vector and the coded message received by on its incoming edges .the set contains all the message vectors such that including .thus , for all .since and , we have that for all .hence , the sp algorithm for can terminate as soon as a message vector with is found and we need not obtain all possible message vectors which evaluate to the given coded messages on incoming edges of a sink .for example , let , for all ] is constructed as follows : 1 .install variable nodes , one for each source message .these vertices are labeled by their corresponding source messages , .2 . install factor nodes and label them .the associated local domain of each such vertex is the set of source messages that participate in that encoding map and the local kernel is .a variable node is connected to a factor node iff the source message corresponding to that variable node participates in the encoding map corresponding to the said factor node .4 . install an additional dummy factor node with local domain , local kernel and label it .connect this node to variable nodes in the set , i.e. , to the arguments of .this node corresponds to the demanded function . 
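the look-up table approach described above is straightforward to code. the sketch below (toy encoding maps over GF(2), chosen for illustration) builds the table and verifies the many-to-one consistency property: if two source tuples collide under the code but disagree on the demanded function, the table is rejected:

```python
from itertools import product

def build_lut(encoders, target, q, omega):
    """look-up table decoding: map every possible incoming coded vector to
    the value of the demanded function; raises if two source tuples with
    different target values collide, i.e. the code cannot compute it."""
    lut = {}
    for x in product(range(q), repeat=omega):
        key = tuple(enc(x) for enc in encoders)
        val = target(x)
        if lut.setdefault(key, val) != val:
            raise ValueError("network code cannot compute the target")
    return lut

# two incoming edges carry x0+x1 and x1+x2 over GF(2); demand is x0+x2
enc = [lambda x: (x[0] + x[1]) % 2, lambda x: (x[1] + x[2]) % 2]
lut = build_lut(enc, lambda x: (x[0] + x[2]) % 2, q=2, omega=3)
print(lut)   # the demanded value is simply the xor of the two edges
```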
as before ,first the cycles in the factor graph are removed , if there are any .the single - vertex sp algorithm is run on the acyclic factor graph with the dummy factor node as the root using and .once it has received all the messages , its marginal function ( using ) and subsequently the set are computed as follows : where is the local domain of a neighboring variable node of .theorem 1 states that obtaining only an element of the set is sufficient to get the desired function value .in this paper , we proposed to use the sp algorithm for decoding network codes and performing in - network function computation .we posed the problem of network code decoding at each sink node in a network as an mpf problem over the boolean semiring .a method for constructing a factor graph for a given sink node using the global encoding maps ( or vectors in case of an lnc ) of the incoming edges and demands of the sink was provided .the graph so constructed had fewer nodes and led to fewer message being passed lowering the number of operations as compared to the scheme of .we discussed the advantages of traceback over multiple - vertex sp algorithm .the number of semiring operations required to perform the sp algorithm with and without traceback were derived . for the sinks demanding all the source messages, we introduced the concept of fast decodable network codes and provided a sufficient condition for a network code to be fast decodable .then we posed the problem of function computation at sink nodes in an in - network function computation problem as an mpf problem and provided a method to construct a factor graph for each sink node on which sp algorithm can be run to solve the mpf problem .r. appuswamy , m. franceschetti , n. karamchandani , and k. zeger , `` linear codes , target function classes , and network computing capacity , '' _ ieee trans .inf . theory _ ,9 , pp . 5741 - 5753 , september 2013 .
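the early-termination property of theorem 1 (any element of the support suffices) translates into a search that stops at the first consistent source vector. a hedged brute-force sketch with the same toy code as before, standing in for the sp computation of the support:

```python
from itertools import product

def decode_function(encoders, received, target, q, omega):
    """return the demanded function value from the received coded symbols:
    any source vector consistent with the observations gives the correct
    value (theorem 1), so the search stops at the first hit."""
    for x in product(range(q), repeat=omega):
        if all(enc(x) == y for enc, y in zip(encoders, received)):
            return target(x)
    raise ValueError("no consistent source vector")

enc = [lambda x: (x[0] + x[1]) % 2, lambda x: (x[1] + x[2]) % 2]
tgt = lambda x: (x[0] + x[2]) % 2
print(decode_function(enc, (1, 1), tgt, q=2, omega=3))   # prints 0
```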
while the capacity , feasibility and methods to obtain codes for network coding problems are well studied , the decoding procedure and its complexity have not garnered much attention . in this work , we pose the decoding problem at a sink node in a network as a marginalize a product function ( mpf ) problem over a boolean semiring and use the sum - product ( sp ) algorithm on a suitably constructed factor graph to perform iterative decoding . we use _ traceback _ to reduce the number of operations required for sp decoding at a sink node with general demands , and obtain the number of operations required for decoding using the sp algorithm with and without traceback . for sinks demanding all messages , we define _ fast decodability _ of a network code and identify a sufficient condition for the same . next , we consider the in - network function computation problem wherein the sink nodes do not demand the source messages , but are only interested in computing a function of the messages . we present an mpf formulation for function computation at the sink nodes in this setting and use the sp algorithm to obtain the value of the demanded function . the proposed method applies to both linear and nonlinear as well as scalar and vector codes , both for decoding of messages in a network coding problem and for computing linear and nonlinear functions in an in - network function computation problem . network coding , decoding , sum - product algorithm , traceback , in - network function computation .
the multitype contact process is a stochastic process that can be seen as a model for the evolution of different biological species competing for the occupation of space .it was introduced by neuhauser in as a modification of harris ( single - type ) contact process ( ) .let us give the definition of the multitype contact process on with ( at most ) two types .we will need the parameters : , and , , , . is then the markov process with state space and generator given by , with + \sum_{\substack{x \in \z^d:\\\xi(x ) = 0}}\;\;\sum_{\substack{y \in \z^d:\\ |x - y|\leq r_i,\\\xi(y ) = i } } \lambda_i \cdot [ f(\xi^{i\to x } ) - f(\xi)],\quad i = 1 , 2,\ ] ] where is a function that depends only on finitely many coordinates , is the norm and we will adopt throughout the paper the following terminology : vertices are called _ sites _ , sites in state 0 , 1 and 2 are respectively said to be _ empty _ or to have a type 1 or type 2 _ occupant _ ( or _ individual _ ) , and elements of are called _ configurations_. additionally , are called _ death rates _ , are _ ranges _ and are _ birth rates _ ( or sometimes _ infection rates _ ) .let us now explain the dynamics in words .two kinds of transitions can occur .first , an individual of type dies with rate , leaving its site empty .second , given a pair of sites with , ( with or ) and , the occupant of gives birth at with rate , so that a new individual of type is placed at .note that , under these rules , births only occur at empty sites , so that the state of a site can never change directly from 1 to 2 or from 2 to 1 . in case only one type ( say , type 1 )is present , this reduces to the contact process introduced by harris in , to be denoted here by in order to distinguish it from the multitype version .we refer the reader to for an exposition of the contact process and the statements about it that we will gather in this introduction and in section [ s : back ] .let be the ( one - type ) contact process with rates , , and the initial configuration in which only the origin is occupied . denote by the configuration in which every vertex is empty , and note that this is a trap state for the dynamics . there exists ( depending on the dimension and the range ) such that = 1 \quad \text { if and only if } \lambda \leq \lambda_c.\label{eq : phase_transition}\ ] ] this _ phase transition _ is the most fundamental property of the contact process .the process is called _ subcritical _ , _ critical _ and _ supercritical _ respectively in the cases , and . in this paper, we will consider the multitype contact process on with parameters we emphasize that the quantity that appears here is the one associated to the _ one - type process _, as in . we will be particularly interested in the ` heaviside ' initial configuration , we will denote by the process with rates and initial configuration .we let the interval delimited by and is called the _ interface _ at time , and is the _ position _ of the interface at time .the choice of the middle point of the interval as the position of the interface is somewhat arbitrary and will not matter for all the results obtained in this paper . in case , it follows readily from inspecting the generator in that for all .if , both and are possible ( in the latter case we say that we have a _ positive _ interface , and in the previous case , a _ negative _ interface ) . 
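the generator above translates directly into a discrete-event (gillespie) simulation. the sketch below is illustrative and not from the paper: it assumes the symmetric case studied here (equal death rates, ranges and birth rates), a finite window with absorbing boundary (a truncation the infinite-volume process does not have), and the heaviside initial configuration; the printed values are the interface endpoints r_t and l_t defined in the next section:

```python
import numpy as np

def rates(xi, lam, delta, R):
    """all enabled transitions of the symmetric two-type contact process
    (lambda_1 = lambda_2 = lam, delta_1 = delta_2 = delta, R_1 = R_2 = R)
    on a finite window; each entry is (rate, site, new state)."""
    ev, L = [], len(xi)
    for x in range(L):
        if xi[x] != 0:
            ev.append((delta, x, 0))                     # death
        else:
            for y in range(max(0, x - R), min(L, x + R + 1)):
                if y != x and xi[y] != 0:
                    ev.append((lam, x, xi[y]))           # birth of type xi[y]
    return ev

def simulate(L=100, lam=2.0, delta=1.0, R=1, T=20.0, seed=7):
    rng = np.random.default_rng(seed)
    xi = np.array([1] * (L // 2) + [2] * (L - L // 2))   # heaviside start
    t = 0.0
    while t < T:
        ev = rates(xi, lam, delta, R)
        total = sum(e[0] for e in ev)
        t += rng.exponential(1.0 / total)
        u, acc = rng.random() * total, 0.0
        for rate, x, s in ev:
            acc += rate
            if u <= acc:
                xi[x] = s
                break
    return xi

xi = simulate()
ones, twos = np.flatnonzero(xi == 1), np.flatnonzero(xi == 2)
if ones.size and twos.size:
    print("r_t =", ones.max(), " l_t =", twos.min())     # interface endpoints
```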
in , it is shown that the process , which describes the evolution of the _ size _ of the interface , is stochastically tight : [ thm : size_inter_tight ] if and , then < \varepsilon \text { for all } t \geq 0.\ ] ] in the present paper , we will continue the study of the interface , but we will focus on its position rather than its size . our main result is if and , then there exists such that where denotes brownian motion with diffusive constant , and convergence holds in the space of cdlg trajectories with the skorohod topology .our proof of this result follows the usual two steps : verifying convergence of finite - dimensional distributions and tightness of trajectories in ( see section 16 of ) .we thus prove the following propositions , both applicable to the case and : there exists such that , for any we have where are independent and .[ prop : fdd ] [ prop : tight ] for any there exists a compact set such that > 1-\varepsilon.\ ] ] in proving these propositions , we will establish a result of independent interest , which we call _interface regeneration_. we will explain it here only informally ; the precise result depends on a few definitions and is given in theorem [ thm : interface_regeneration ] .given , consider the configuration and assume the interface position .suppose we define a new configuration by putting 1 s in all sites to the left of and 2 s to the right of .we then show that it is possible to construct , in the same probability space as that of , a multitype contact process started from time , , such that and moreover , _ the interface positions for and for are never too far from each other_. since the evolution of the interface of has the same distribution as that of the original process ( except for a space - time shift ) , this regeneration allows us to argue that , if we consider large time intervals , then the displacement of in each interval follows approximately the same law . in many of our proofs ,we study the time dual of the multitype contact process .this dual , called the _ ancestor process _ , was first considered in and further studied in . in these references, it was shown that the ancestor process behaves approximately as a system of coalescing random walks on .because of this , our proofs of propositions [ prop : fdd ] and [ prop : tight ] are inspired in arguments that apply to coalescing random walks and the voter model , an interacting particle system whose dual is ( exactly ) equal to coalescing random walks . in particular , a key estimate for the proof of proposition [ prop : tight ] ( see lemma [ cla4onevstun ] ) was inspired in an argument by rongfeng sun for coalescing random walks ( ) .given a set , we denote by its cardinality and by its indicator function .we will reserve the letter to denote elements of , as well as the one - type contact process , and the letter for elements of and the multitype process .we denote by the configuration in which every vertex is in state 0 .we write ( and similarly for ) . given , `` on '' means that for all ( and similarly for ) . throughout the paper, we fix the parameters and .all the processes we will consider will be defined from these two parameters . we will now briefly survey some background material on the ( one - type ) contact process . 
a _graphical construction _ or _ harris system _ is a family of independent poisson processes on , we view each of these processes as a random discrete subset of .an arrival at time of the process is called a _ recovery mark _ at at time , and an arrival at time of the process is called an _ arrow _ or _ transmission _ from to at time .this terminology is based on the usual interpretation that is given to the contact process , namely : vertices are individuals , individuals in state 1 are _ infected _ and individuals in state 0 are _although we will focus mostly on the multitype contact process , which we see as a model for competition rather than the spread of an infection , we will still use some infection - related terminology that comes from the study of the classical process .we will sometimes need to consider restrictions of to time intervals , and also translations of .we hence introduce the following notation , for and : } = d^x \cap [ 0,t],\quad d^x \circ\theta(z , t ) = \{s - t : s\in d^{x - z},\;s\geq t\},\\[.2 cm ] & d^{x , y}_{[0,t ] } = d^{x , y } \cap[ 0,t],\quad d^{x , y } \circ\theta(z , t ) = \{s - t : s\in d^{x - z , y - z},\;s\geq t\},\\[.2 cm ] & h_{[0,t ] } = \left((d^x_{[0,t]})_{x\in\z},\;(d^{(x , y)}_{[0,t]})_{\substack{x , y\in\z^d,\;0 < |x - y|\leq r}}\right),\\[.2 cm ] & h \circ \theta(z , t ) = \left((d^x\circ \theta(z , t))_{x\in\z},\;(d^{(x , y)}\circ \theta(z , t))_{\substack{x , y\in\z^d,\;0 < |x - y|\leq r}}\right ) .\label{eq : harris_sub_interval } \end{split}\ ] ] given a ( deterministic or random ) initial configuration and a harris system , it is possible to construct the contact process started from by applying the following rules to the arrivals of the poisson processes in : where is defined as in . that this can be done in a consistent manner , and that it yields a markov process with the desired infinitesimal generator , is a non - trivial result which ( as the other statements in this section ) the reader can find in .given , and a harris system , an _ infection path _ in from to is a path \to \mathbb{z} ] , every vertex that is reachable by an infection path from is reachable by an infection path from . 
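the graphical construction lends itself to a direct simulation in which the poisson marks are generated first and then applied in time order; the same harris system can then drive the process from two initial configurations at once, which is the coupling used repeatedly below. the following sketch (one-type process, finite window, illustrative parameters) also checks the resulting monotonicity:

```python
import numpy as np

def harris_system(L, lam, T, R, rng):
    """poisson marks of the graphical construction on {0,...,L-1} x [0,T]:
    recovery marks at rate 1 per site and arrows at rate lam per ordered
    pair of sites at distance at most R."""
    events = []
    for x in range(L):
        for t in rng.uniform(0, T, rng.poisson(T)):
            events.append((t, "recover", x, x))
        for y in range(max(0, x - R), min(L, x + R + 1)):
            if y != x:
                for t in rng.uniform(0, T, rng.poisson(lam * T)):
                    events.append((t, "arrow", x, y))
    return sorted(events)

def run(initial, events):
    xi = list(initial)
    for t, kind, x, y in events:
        if kind == "recover":
            xi[x] = 0
        elif xi[x] == 1 and xi[y] == 0:   # arrow: infection spreads x -> y
            xi[y] = 1
    return xi

rng = np.random.default_rng(8)
L = 100
ev = harris_system(L, lam=2.0, T=20.0, R=1, rng=rng)
a = run([1] * L, ev)                                      # all occupied
b = run([1 if x == L // 2 else 0 for x in range(L)], ev)  # single seed
# the same harris system couples the two processes monotonically
assert all(bb <= aa for aa, bb in zip(a, b))
```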
for the statement of the present lemma , it would suffice to argue that , if is large , with high probability one can find and ] , then with probability larger than , \text { such that } \zeta_t(x ) = 1.\ ] ] by lemma [ lem : couple_ones ] , it suffices to prove that , given , there exists so that , for , ] and (x)= [ \zeta^a_t(h)](x ) = 1 ] .we can also characterize as the unique infection path in satisfying now fix and assume is the multitype contact process started from and constructed with a harris system .we claim that indeed , if the right - hand side is zero , then the indicator function is zero ( as the other term is non - zero by construction ) , so holds by .if the right - hand side of is non - zero , then the definition of infection paths together with , and imply that for every }\\\text{such that } \xi_0(\upgamma(0 ) ) = i,\ ; \upgamma(t ) = x \text { and } \\\xi_{s-}(\upgamma(s ) ) = 0 \text { whenever } \upgamma(s ) \neq \upgamma(s-)\end{array},\quad i = 1,2.\ ] ] moreover , there exists at most one infection path satisfying the stated properties .* ancestry process .* we now define an auxiliary process that is key in making the graphical construction of the multitype contact process more tractable .again fix a harris system and let .given , by arguing similarly to how we did in the previous paragraphs , it can be shown that \to \z\text { such that } \uppsi(r ) = x \text { and}\\ ( \uppsi(s-),s)\nleftrightarrow \z\times \{t\ } \text { whenever }\uppsi(s- ) \neq \uppsi(s).\end{array}\ ] ] in case it exists , we denote this path by , or when we want to make the dependence on the harris system explicit .note that only depends on ] by ;\\[.2cm]\uppsi^*_{y , t , t'}(s ) & \text{if } s \in [ t , t'],\end{array}\right.\ ] ] we have that * if ] and , then by the definition of , we have that , so that , by the uniqueness of , we get , so follows .we define , for and , \triangle & \text{otherwise,}\end{array}\right.\ ] ] where is interpreted as a `` cemetery '' state . the process is called the _ ancestor process _ of . in case , we write instead of , and in case , we omit the superscript and write .naturally , now can be rewritten as in particular , we get * joint construction of primal and dual processes . * we now explain the relationship between the multitype contact process and the ancestor process . given a harris system and , we recall the notation introduced in and define the _ reversed harris system _ } ] is the harris system on the time interval ] by reversing time and reversing the direction of the arrows .assume we are given and construct started from using the harris system .fix and assume that we use }]-infection path , when ran backwards and with arrows reversed , corresponds exactly to the -infection path . as a consequence of these considerations, we have if for all , then , with the convention that , we now recall the renewal structure from which we are able to decompose the ancestor process into pieces that are independent and identically distributed .this then allows us to find an embedded random walk in and argue that the whole of the trajectory of remains close to this embedded random walk .most of the results of this subsection are not new ( they appear in or or both ) ; in an effort to balance the self - sufficiency of this paper with shortness of exposition , we will include a few key proofs and omit others . 
[lem : no_renewals_ab ] there exists such that , for any , we have \right ] < e^{-c(b - a)}.\ ] ] the proof is a simple consequence of ; see proposition 1 , page 474 , of . given , we write = \p[\;\cdot\;|\;(y,0)\leftrightarrow \infty \text { for all } y \in a].\ ] ] in case , we write instead of and in case , we omit the superscript .[ lem_x_goes_to_y_first ] let and for any and events on harris systems , } \in e,\;\eta_\uptau = y\text { and } h \circ\theta(0,\uptau ) \in f \right ] \\[.2 cm ] & = \p\left[\uptau < \infty,\ ; h_{[0,\uptau ] } \in e,\;\eta_\uptau = y\right ] \cdot \tilde \p^{y } \left[h \in f\right ] .\end{split}\ ] ] we let and , for , define as follows : \sigma_k&\text{if } \sigma_k < \infty \text { and } \eta_{\sigma_k } = \triangle;\\[.2 cm ] \infty & \text{if } \sigma_k = \infty . \end{cases}\ ] ] is thus an increasing sequence of stopping times with respect to the sigma - algebra of harris systems .we note that , in case we have , then gives for all .so we have as a consequence , we obtain using , the left - hand side of becomes } \in e;\;\eta_{\sigma_k } = y \text { and } ( y,\sigma_k ) \leftrightarrow \infty;\ ; h\circ \theta(0,\sigma_k ) \in f \right]\\ & = \tilde \p^{y}\left [ h \in f\right ] \cdot \sum_{k=0}^\infty \p\left[\sigma_k < \infty;\;h_{[0,\sigma_k ] } \in e;\;\eta_{\sigma_k } = y \text { and } ( y,\sigma_k)\leftrightarrow \infty \right]\\ & = \tilde \p^{y}\left [ h \in f\right ] \cdot \p\left [ \uptau < \infty;\ ; h_{[0,\uptau ] } \in e;\;\eta_\uptau = y \right].\end{aligned}\ ] ] given , on the event we define the times we write instead of and instead of .we now state three simple facts about these random times .first , it follows from that second , from it is easy to obtain = 1.\ ] ] third , by putting and together , it is easy to show that \leq e^{-ct},\qquad t > 0.\ ] ] our main tool in dealing with the ancestor process is the following result .[ prop : rmn ] 1 . under , in particular, is a random walk on with increment distribution = \tilde \p\left[\eta_{\uptau_1 } = w - z\right].\ ] ] 2 .there exist such that , for any , and , }|\eta_s - \eta_t| > x\right ] \leq ce^{-cx^2/r } + cre^{-c|x|}.\ ] ] 3 . under , a proof of part can be found in , but we give another one here .let be measurable subsets of ] as .next , \right ] & \leq \tilde \p\left[\max_{\frac{t}{\mu } \leq i \leq \frac{t}{\mu } + \delta t } |\eta_{\uptau_i } - \eta_{\uptau_{\lfloor t/\mu \rfloor}}| > \varepsilon\sqrt{t}\right ] \\[.2cm]&\leq \delta t \frac{\text{var}(\eta_{\uptau_1})}{\varepsilon^2 t } = \delta \frac{\text{var}(\eta_{\uptau_1})}{\varepsilon^2},\end{aligned}\ ] ] where the last inequality is an application of kolmogorov s inequality .the above can be made arbitrarily small by taking small ( depending on ) .the other term in is then treated similarly , and the proof of is now complete . 
In the references just cited, results are obtained about the joint behavior of two or more ancestor processes. The method used to obtain such results involved studying renewal times that are more complicated than the ones defined above. We will not present the details here. Rather, let us just mention that, while a single ancestor behaves much like a random walk (as outlined above), a larger number of ancestors, when considered jointly, behaves much like a system of coalescing random walks (that is, a system of random walkers that move independently, with the added rule that two walkers occupying the same position merge into a single walker). Taking advantage of this comparison, one can then obtain for ancestor processes several estimates that hold for coalescing random walks. In particular, in Lemma 3.2 of the cited work, it is shown that the probability that two ancestors have neither died nor coalesced satisfies
$$\mathbb{P}\left[\eta^x_t \neq \eta^y_t,\;\eta^x_t \neq \triangle,\;\eta^y_t \neq \triangle\right] \leq \frac{C|x-y|}{\sqrt{t}}, \qquad x, y \in \mathbb{Z},\; t > 0.$$
Using this result, it is then possible to show that the density of the set of _all_ ancestors at time $t$ goes to zero as $t \to \infty$ (see Proposition 3.5 of the same work), so that the corresponding probability tends to 0 as $t \to \infty$. Finally, we will need a bound of the form
$$\mathbb{P}\left[\,\cdot\,\right] < \frac{C}{\sqrt{t}}$$
on a related coalescence event. For coalescing random walks having a symmetric jump distribution with finite third moments, this estimate is given by Lemma 2.0.4 of the cited thesis. As $\eta$ and $\eta'$ are not exactly coalescing random walks, the proof of the mentioned lemma has to be adapted to the present context. Given the method of proof of Theorem 6.1 of the cited work, this adaptation does not involve anything new, so we do not include it here.

Given $\xi \in \{0,1,2\}^{\mathbb{Z}}$, define the class of configurations
$$\left\{\xi : \#\{x<0 : \xi(x) = 2\} < \infty,\; \#\{x>0 : \xi(x) = 1\} < \infty\right\};$$
in particular, the Heaviside configuration belongs to this class. As mentioned in the introduction, $\xi^h$ denotes the contact process started from the Heaviside configuration, the interval delimited by $\ell(\xi_t)$ and $r(\xi_t)$ is the _interface_, and $i_t$ is the _interface position_, at time $t$. It is easy to show that, almost surely, the process stays in the above class at all times, so that the interface is well defined. It will be useful to have the following rough bound on the displacement of $\ell(\xi_t)$ and $r(\xi_t)$.

[lem:no_faster] For any $\varepsilon, \sigma > 0$ there exists $s_0$ such that, if $s \geq s_0$ and $\xi_0$ satisfies the stated agreement condition on the relevant interval, then with probability larger than $1-\varepsilon$ the displacement bound holds.

It is sufficient to prove the result for $\sigma$ determined by the constants $\sigma'$, $\sigma''$ that appear in Lemmas [lem:couple_ones] and [lem:desc_bar_sides]. We fix such constants. Using the joint construction of the multitype contact process and the ancestor processes (as described in Subsection [ss:mcp], and in particular the duality equation there), together with the assumption on $\xi_0$ and Claim [eq:attract_multi], we have
$$\mathbb{P}\left[\xi_t(x) = 1\right] \leq \mathbb{P}\left[\xi^h_t(x) = 1\right] \leq \mathbb{P}\left[\eta^x_t \neq \triangle,\; \eta^x_t \leq 0\right].$$
If $x > 0$, the right-hand side is smaller than or equal to
$$\mathbb{P}\left[\eta_t \neq \triangle,\;|\eta_t| \geq x\right] \stackrel{\eqref{eq:bondrw}}{\leq} Ce^{-cx^2/t} + Cte^{-cx}.$$
Combining this with a union bound, we get
$$\mathbb{P}\left[\xi_t(x) = 1 \text{ for some } t \in \mathbb{N} \text{ and } x \geq \tfrac{s}{3} + \sigma' t\right] < \frac{\varepsilon}{3} \label{eq:bound_before_intervals}$$
if $s$ is large enough.
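The coalescing random walk heuristic, and in particular the $C|x-y|/\sqrt{t}$ decay of the non-coalescence probability, can be checked in a toy simulation (a Python sketch; simple lazy nearest-neighbour walks stand in for the ancestor processes, which is an illustrative simplification):

```python
import numpy as np

rng = np.random.default_rng(8)

def not_coalesced_prob(x0, y0, t, trials=20000):
    """Estimate P(two coalescing lazy walks from x0, y0 have not met by time t)."""
    x = np.full(trials, x0)
    y = np.full(trials, y0)
    alive = np.ones(trials, dtype=bool)          # pairs that have not yet merged
    for _ in range(t):
        x[alive] += rng.choice([-1, 0, 1], size=alive.sum())
        y[alive] += rng.choice([-1, 0, 1], size=alive.sum())
        alive &= (x != y)                        # walkers merge on meeting
    return alive.mean()

for t in (100, 400, 1600):
    p = not_coalesced_prob(0, 2, t)
    print(t, p, p * np.sqrt(t))   # p * sqrt(t) stays roughly constant, cf. C|x-y|/sqrt(t)
```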
we then bound \right]\\[.2 cm ] & \leq \p\left[(-\infty,\;s/3 + \sigma ' t ) \times \{t\ } \leftrightarrow[ 2s/3 + \sigma '' t,\infty ) \times [ t , t+1 ] \right ] <e^{-c(s + \sigma t/2)}\end{aligned}\ ] ] for some , by a comparison with a poisson random variable ( describing the number of arrivals in a certain space - time region ; we omit the details ) .together with , this shows that , if is large enough , < \varepsilon/2.\ ] ] by lemma [ lem : desc_bar_sides ] and , increasing if necessary we have \text { for some } t \geq 0 \right ] < \varepsilon/2.\ ] ] to conclude , \\&\geq \p\left[\text{for all } t \geq 0,\;r(\xi_t ) < \frac{2s}{3 } + \sigma''t \text { and } \xi_t(x ) \neq 0 \text { for some } x \in \left[\frac{2s}{3}+\sigma''t , s + \sigma t\right ] \right ] > 1-\varepsilon .\end{aligned}\ ] ] given a harris system and , we define the regenerated interface process as follows : let us explain this definition in words . using the harris system , we construct the contact process started from the heaviside configuration and evolve it up to time , obtaining the configuration with corresponding interface position .then , we artificially put 1 s on ] , we continue evolving the process ; the resulting interface position at time is . note in particular that in section [ ss : proof_reg ] , we will prove : [ thm : interface_regeneration ] for any there exists such that , for any , > 1-\varepsilon.\ ] ] as a consequence we obtain [ cor : interface_motion_tight ] for any and there exists such that <\varepsilon \quad \text { for all } s \ge 0.\ ] ] for any , by , } |i_t - i_s| > k \right ] \leq\p\left[\sup_{t \in [ s,\infty ) } |i^s_t - i_t| > k/2\right ] + \p \left [ \sup_{t \in [ 0,r ] } |i_t| > ( k-1)/2\right].\end{aligned}\ ] ] now , for fixed , the second term vanishes as , and the first term does so as well by theorem [ thm : interface_regeneration ] .[ lem : interface_nowhere ] for any there exists such that < \varepsilon \text { for any } x \in \frac{1}{2}\z,\ ; t \geq t_0.\ ] ] let . by , we can obtain such that for all , < \varepsilon /2 ] , , and , we have \right ] \leq \frac{c}{t^{3/2}}.\ ] ] fix .choose large enough that fix ,\ ; x_1 \in i_1,\ ; x_2 \in i_2,\ ; x_3 \in i_3 t ] , ] , so that , if is large , we can hope that the configuration in the outside never has any effect on the evolution of the interface .our second class of configurations will depend on a preliminary definition .given , let } + 2 \cdot \mathds{1}_{(\lfloor i(\xi_0)\rfloor,\infty)}.\ ] ] also let and be contact processes started from and , respectively ( constructed with the same harris system ) .we now let > 1-\varepsilon \right\}.\label{eq : def_omega_eps_k}\ ] ] we will separately prove the following two propositions : * ( large isolation segments allow for regeneration).*[prop : omega_eps_k ] for any there exists such that the following holds . 
for any thereexists such that ._ theorem _ [ thm : interface_regeneration ] .fix .choose as in proposition [ prop : omega_eps_k ] , then choose as in proposition [ prop : gamma_sl ] , and finally choose as in proposition [ prop : omega_eps_k ] .now , for any we have \stackrel{}{\geq } \p \left[\xi^h_t \in \gamma_{s , l}\right ] > 1-\varepsilon.\ ] ] now , for any we have \leq \p\left[\xi^h_s \notin \omega_{\varepsilon , k } \right ] + \p\left[\left.\sup_{t \geq s } |i^s_t - i_t| > k\;\right|\ ; \xi^h_s \in \omega_{\varepsilon , k}\right ] < 2\varepsilon.\end{aligned}\ ] ] [ prop : almost_all_couples_all ] for any and there exists such that the following holds .if is an interval of length at most and are such that for all , then > 1- \varepsilon.\ ] ] since and are constructed from the same harris system , it suffices to find such that > 1- \varepsilon.\ ] ] for a fixed , consider the system of first ancestor processes constructed from the time - reversed harris system }};\\ \label{eq : no_2s}&r(\xi_0 ) < 0.\end{aligned}\ ] ] let be the process started from then , with probability larger than we have , \\\label{eq : xi_coinc_gamma}&\ell(\xi_t ) = \ell(\xi'_t),\ ; r(\xi_t ) = r(\xi'_t)\text { and } \\ & \label{eq : xi_where_2s}\ell(\xi_t),\;r(\xi_t ) < s/2 + \beta t.\end{aligned}\ ] ] given , we write fix . by lemmas[ lem : couple_ones ] , [ lem : desc_bar_sides ] and [ lem : no_faster ] , if is large enough , then with probability larger than all the following three events occur : = \{\xi'_t = 0\ } \cap ( -\infty , f^{(2)}_t ] \text { for all } t \geq 0\right\},\\[.2 cm ] & e_2 = \left\{\text{there exists } x \in ( f^{(1)}_t,\;f^{(2)}_t):\;\xi'_t(x ) \neq 0 \text { for all } t \geq 0 \right\},\\[.2 cm ] & e_3 = \left\{r(\xi'_t ) <f^{(1)}_t \;\text{for all } t \geq 0 \right\}.\end{aligned}\ ] ] we will also assume that .we will now state and prove two auxiliary claims .+ _ claim 1 . _ on , for all .+ to see that this holds , first note that so applying we get we now fix with and will show that using , it follows from that there exists an infection path \to\z ] such that , and by the definition of we have , so that , thus , thus .+ we are now ready to conclude . from claim 1 and the definition of , we have that from claim 1 and the definition of , from claim 1 and claim 2 , [ cor : all_tog ] for any there exists such that the following holds .assume and satisfies , for some with : ;\\\label{eq_cor_as2}&a < r(\xi_0),\;\ell(\xi_0 ) < b;\\ \label{eq_cor_as3}&\xi_0(x ) \equiv 2 \text { on } [ b , b+s].\end{aligned}\ ] ] let be the process started from }(x ) + \mathds{1}_{(a , b)}(x ) \cdot \xi_0(x ) + 2\cdot\mathds{1}_{[b,\infty)}(x).\ ] ] then , with probability larger than , & a -\frac{s}{2 } - \beta t <r(\xi_t),\ell(\xi_t ) < b + \frac{s}{2 } + \beta t.\end{aligned}\ ] ] we will also need , the process started from given , by lemma [ lem : couple_left ] , can be chosen so that , if and hold , then &\text{and } \ell(\xi_t ) , r(\xi_t ) < b + \frac{s}{2 } + \beta t \end{array}\right]>1-\varepsilon/2.\end{aligned}\ ] ] now , note that and the definition of imply ,\\ & r(\xi'_0 ) > a,\end{aligned}\ ] ] so that we can again use lemma [ lem : couple_left ] ( and symmetry ) to obtain that & \text{and } \ell(\xi'_t ) , r(\xi'_t ) > a - \frac{s}{2 } - \beta t\end{array}\right ] > 1-\varepsilon/2.\label{eq : eff_coup_inf}\end{aligned}\ ] ] putting and together , we obtain the desired result ._ proposition _ [ prop : omega_eps_k ] . 
given , we choose large enough corresponding to in corollary [ cor : all_tog ] .increasing if necessary , by lemma [ lem : no_faster ] , we can also assume the following ( recall that and , where is the process started from the heaviside configuration ) . \text { for all } t \geq 0\right ] > 1 - \varepsilon.\ ] ] then , given , we choose corresponding to and in lemma [ prop : almost_all_couples_all ]. now assume .then , there exist as prescribed in ; note in particular that , so that .let }(x ) + \mathds{1}_{(a , b)}(x ) \cdot \xi_0(x ) + 2 \cdot \mathds{1}_{[b,\infty)}(x),\\[.2 cm ] & \tilde \xi_0(x ) = \mathds{1}_{(-\infty , \lfloor i(\xi_0 ) \rfloor ] } + 2\cdot \mathds{1}_{(\lfloor i(\xi_0)\rfloor,\infty)}\end{aligned}\ ] ] and , be the processes started from these configurations . by our choice of and , with probability largerthan the following three events occur : ;\\[.2 cm ] & \text{for all } t \ge 0,\;i(\tilde \xi_t ) \in [ \lfloor i(\xi_0 ) \rfloor - s - \bar\beta t,\;\lfloor i(\xi_0 ) \rfloor + s + \bar \beta t ] \subset [ a - s - \bar \beta t,\ ; b+s + \bar \beta t];\\[.2 cm ] & \text{for all } t\ge t_0,\;i(\hat \xi_t ) = i(\tilde \xi_t).\end{aligned}\ ] ] if these events all occur , we have & |i(\xi_t ) - i(\tilde \xi_t)|=0 \text { if } t > t_0.\end{aligned}\ ] ] the desired result now holds for . if the statement is false , then one can find , and a sequence of times such that > \delta \text { for all } n \in \n . \nonumber\ ] ] by tightness of the size of the interface ( as given by ) , we can then find such that > \delta/2\text { for all } n \in \n .\label{eq : negation_more}\ ] ] let us denote by the event inside the above probability . note that \subset [ m_{t_n},x^{(k)}_{t_n}]\}.\ ] ] for , define the event , \text { all vertices in } \\[0.2 cm ] [ m_{t_n } - r,\;m_{t_n } ) \cup \{x^{(0)}_{t_n},\ldots , x^{(k)}_{t_n}\ } \cup ( x^{(k)}_{t_n},\;x^{(k)}_{t_n } + r ] \\[.2 cm ] \text{have a death mark and do not originate any arrow . }\end{array } \right\}\ ] ] since the set of vertices that appears in the definition of contains vertices , we have = e^{(-1 - 2r\lambda)(2r + k ) } , $ ] so that , by , \geq \frac{\delta}{2}\cdot e^{(-1 - 2r\lambda)(2r + k)}.\ ] ] additionally , by , now , and together imply > \frac{\delta}{2}\cdot e^{(-1 - 2r\lambda)(2r + k)}.\ ] ] this contradicts tightness of the interface size , .we will need one extra subset of , defined for and by \#\{x \in ( m(\xi),\ ; m(\xi ) +l ] : \xi(x ) = 2\ } \geq k , \\[.2cm]m(\xi ) - m(\xi ) \leq l\end{array}\right\}.\ ] ] fix and . by, we can choose so that < \varepsilon/2 \text { for all } t \geq 0.\ ] ] let . we now choose corresponding to and in lemma [ lem : order_many ] ; we get : \xi_t(x ) = 2\ } < k\right ] \\[.2cm]&\leq \p\left[\#\{x \in ( m_t , m_t + l ] : \xi_t(x ) \neq 0 \ } < k ' \right ] \\[.2cm]&\leq \p\left[x^{(k')}_t >m_t + l\right ] \leq \p\left[x^{(k')}_t > l\right ] < \varepsilon/4 . 
\end{split } \label{eq : llprime}\ ] ] by symmetry we also get < \varepsilon/4.\ ] ] the desired statement now follows from putting together ( with the observation that ) , and .given and , define the event d^{z , y}_{[0,1 ] } = \varnothing \text { for all } y , z \text { with } |x - y|\leq s,\ ; |x - z| > s \end{array } \right\}.\ ] ] by prescribing the position of a finite number of arrows and the absence of recovery marks and arrows at certain positions , it is easy to show that there exists such that , for any , > \delta_s.\ ] ] we note that ,\ ; \ell(\xi_{1 } ) > x+s\};\\[.2 cm ] \label{eq : f_good_prop2}&f_s(x ) \cap \{\xi_0(x ) = 1,\;r(\xi_0 ) < x+s\ } \subset \{\xi_{1 } \equiv 2 \text { on } [ x - s , x],\ ; r(\xi_{1 } ) < x+s\}\end{aligned}\ ] ] and that now , given and , we choose so that ( note in particular that ) .assume that and . by the definition of in, we can find \text { with } x_{i+1 } >x_i + 2(s+r ) \text { for all } i;\\[.2 cm ] y_1,\ldots , y_{k ' } \in [ m(\xi_0 ) + 2(s+r ) , m(\xi_0 ) +l ] \text { with } y_{i+1 } >y_i + 2(s+r ) \text { for all } i\label{eq : we_can_find_y}\end{aligned}\ ] ] we then have & \stackrel{\eqref{eq : def_gamma},\eqref{eq : f_good_prop1},\eqref{eq : f_good_prop2}}{\geq } \p\left[\left(\cup_{i=1}^k f_s(x_i)\right ) \cap \left(\cup_{i=1}^k f_s(y_i)\right)\right ] \\[.2cm]&\stackrel{\eqref{eq : fcan_happen},\eqref{eq : f_is_indep},\eqref{eq : we_can_find_x},\eqref{eq : we_can_find_y}}{\geq } 1 - 2(1-\delta_s)^k > 1 -\varepsilon.\end{aligned}\ ] ] _ _ p__roposition [ prop : gamma_sl ] . fix and .we first choose corresponding to and in lemma [ lem : after_one_second ] , and then choose corresponding to and in lemma [ lem : gamma_pi_good ] . we then have , for any , \geq \p\left[\xi^h_t\in \gamma_{s , l}\;|\;\xi^h_{t-1 } \in \pi_{k , l}\right ] \cdot \p\left[\xi^h_{t-1 } \in \pi_{k , l } \right]\geq ( 1-\varepsilon/2)^2 > 1-\varepsilon.\ ] ] it is then easy to show that we can increase if necessary so that the result also holds for .
We study the interface of the multitype contact process on $\mathbb{Z}$. In this process, each site of $\mathbb{Z}$ is either empty or occupied by an individual of one of two species. Each individual dies with rate 1 and attempts to give birth with rate $\lambda$; the position for the possible new individual is chosen uniformly at random within distance $R$ of the parent, and the birth is suppressed if this position is already occupied. We consider the process started from the configuration in which all sites to the left of the origin are occupied by one of the species and all sites to the right of the origin by the other species, and study the evolution of the region of interface between the two species. We prove that, under diffusive scaling, the position of the interface converges to Brownian motion.
Recently, much attention has been paid to the properties of S/N/S and N/N/N microbridge structures, and noise measurement (shot noise as well as thermal noise) has been used as a technique complementary to transport measurements. Measuring noise at cryogenic temperatures is difficult, however, since the sample noise is usually much smaller than the thermal noise of the components in the test circuit. Noise thermometry for electrons at low temperature was previously performed by current noise measurement with a SQUID, which is very sensitive and has a very low noise level. However, it is limited to small resistances, is not suitable for voltage noise measurement, and cannot be used when a magnetic field is applied. The cross-correlation technique provides an alternative method for measuring sample noise at low temperature. One bottleneck of this technique is that the cross-correlation requires a long time to converge to the required sensitivity, and the trade-off between sensitivity and convergence time makes it difficult to use for low-level noise experiments. In this article we present an improved cross-correlation algorithm, as well as a test instrument set-up, for the measurement of thermal noise. The algorithm performs a vector average over both time and a specific frequency range. It is worth noting that commercial spectrum analyzers usually average over time only; limitations such as memory size and computation speed make it impossible to realize this algorithm within the instrument. As shown in the following sections, much faster convergence can be achieved with the new algorithm.

It is well known that a 4-probe resistance measurement eliminates contact resistance by measuring the voltage signal across the sample stimulated by the current through the sample. It can also be viewed as measuring the "in phase" signal between the current and the voltage across the sample. Cross-correlation is similar, in the sense that it eliminates each channel's own noise by measuring the "similarity", or "in phase" signal, between two different voltage channels. To illustrate this, let us consider the voltage signals from two channels. The Fourier components at a particular frequency $f$ are
$$v_1(f) = a\,e^{i\phi_a} + b\,e^{i\phi_b}, \qquad v_2(f) = a\,e^{i\phi_a} + c\,e^{i\phi_c}.$$
Here $a$ is the amplitude of the noise signal from our sample at frequency $f$, and $b$, $c$ are the amplitudes of the unwanted noise at $f$ generated in the two channels, for example the thermal noise generated by the 20 Ω lead resistors shown in fig. [fig1].

Schematic of the set-up: the sample resistor is 10 Ω or 1.5 Ω. The thermal noise of the 20 Ω resistor along each channel is much bigger than the sample thermal noise.

To do the cross-correlation we calculate the product of the two vectors:
$$v_1(f)\,v_2(f)^* = a^2 + ac\,e^{i(\phi_a-\phi_c)} + ab\,e^{i(\phi_b-\phi_a)} + bc\,e^{i(\phi_b-\phi_c)}. \label{eqn2}$$
Consider the last three terms in eq. [eqn2]: since the phases $\phi_a$, $\phi_b$, $\phi_c$ are random for white noise, while $a$, $b$ and $c$ do not change over time, averaging over a sufficiently long time makes the random-phase terms cancel each other, giving a negligible contribution to the total amplitude. So the real part of the product converges at $a^2$, and the imaginary part converges at 0. Ideally, averaging over infinite time makes the amplitude converge at $a^2$. In practice, however, the measurement time must be limited to some reasonable extent.
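The phase-cancellation argument can be checked numerically with a minimal sketch (Python with NumPy; the Gaussian channel model and the amplitudes are illustrative assumptions, not the actual instrument): two channels share a common "sample" component buried in much larger independent channel noise, and the time-averaged cross-spectrum settles at the common power $a^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fft, n_steps = 4096, 200
a, b = 1.0, 10.0                 # sample amplitude a, channel noise b >> a

acc = np.zeros(n_fft // 2, dtype=complex)
for _ in range(n_steps):
    s = a * rng.standard_normal(n_fft)       # common sample noise
    v1 = s + b * rng.standard_normal(n_fft)  # channel 1 adds its own noise
    v2 = s + b * rng.standard_normal(n_fft)  # channel 2 adds its own noise
    acc += np.fft.rfft(v1)[1:] * np.conj(np.fft.rfft(v2)[1:]) / n_fft
acc /= n_steps

# The real part converges toward a^2 as the random-phase cross terms decay;
# the imaginary part converges toward 0.
print("mean Re:", acc.real.mean(), "  expected ~", a**2)
print("mean Im:", acc.imag.mean(), "  expected ~ 0")
```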
To estimate the time needed to approach the convergence limit, we need to find when the deviation of the random-phase terms is much smaller than the $a^2$ term. First, in order to identify the signal at frequency $f$, the sampling time should be much longer than $1/f$, so as to get an accurate Fourier component. Then the Fourier components at $n$ different times are acquired and vector-averaged to eliminate the random-phase terms. If $b, c \gg a$, which happens when the sample is at low temperature and the sample signal amplitude is very small, it will take a very long time to approach the convergence limit: a large $n$ is required before the averaged random-phase terms become negligible compared with $a^2$. From standard textbooks we know that, for an i.i.d. (independent, identically distributed) random sequence, averaging $n$ times gives an $n$ times smaller variance. Assuming our channel noises are i.i.d., we can expect the variance of the averaged random-phase terms to scale as $1/n$, and similarly for the variance of the averaged correlation. Since our goal is to find the noise magnitude, we need the variance of the square root of the real part, and finally the standard deviation to compare with $a^2$. The standard deviation should be proportional to $n^{-1/2}$. This exponent is indeed observed in our experiment, as shown in sec. [exponent]. The exponent might also be used as a criterion to decide whether the channel noise at different time steps can be treated as an i.i.d. random sequence, i.e., whether there is some correlation in the time domain.

As indicated above by the exponent, increasing the number of measurement time steps is not a very effective way of eliminating the channel noise. To accelerate the convergence, a new algorithm is described below. Consider the cross-correlation results at two different frequencies $f_1$ and $f_2$:
$$v_1(f_k)\,v_2(f_k)^* = a^2 + \vec{\varepsilon}(f_k), \qquad k = 1, 2,$$
where the vector term $\vec{\varepsilon}(f_k)$ represents the vector sum of the last three terms in eq. [eqn2]. For "white" noise the amplitude is the same at different frequencies, but the phases $\phi_b$, $\phi_c$ are random. This means a vector average over the frequency domain is equivalent to the vector average over the time domain. Since in practice we usually acquire a series of data points during one sampling time and then perform an FFT, we can compute all the points in the frequency domain and use them for the vector average. For example, a commercial spectrum analyzer usually takes 1024 scalar voltage points per step (it cannot take more because of limited memory), giving 512 vector points in the frequency domain. If we vector-average these 512 points, then according to the above statement the result is equivalent to averaging over 512 time steps at one particular frequency, which means convergence at $a^2$ can be achieved 512 times faster. To test this, we built a simple experimental set-up, as described in the following section.

A schematic plot of the test set-up is shown in fig. [fig1]. Since resistive stainless-steel coax cables are usually used to connect the sample in the low-temperature stage, a 20 Ω resistor is placed along each channel here to simulate the channel resistance. The sample resistors used are 10 Ω and 1.5 Ω, to simulate the low noise level of a real sample.
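A minimal sketch of the frequency-and-time vector average described above (Python/NumPy; the amplitudes and sizes are illustrative): the per-bin cross-spectra are vector-averaged across all frequency bins as well as across time steps, and the residual after a fixed number of steps is compared with the conventional time-only average.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, a, b = 4096, 1.0, 10.0        # common signal amplitude a, channel noise b

def cross_spectrum():
    s = a * rng.standard_normal(n_fft)             # common "sample" noise
    v1 = s + b * rng.standard_normal(n_fft)
    v2 = s + b * rng.standard_normal(n_fft)
    return np.fft.rfft(v1)[1:] * np.conj(np.fft.rfft(v2)[1:]) / n_fft

time_avg = np.zeros(n_fft // 2, dtype=complex)
both_avg = 0j
for step in range(1, 101):                         # 100 time steps
    c = cross_spectrum()
    time_avg += (c - time_avg) / step              # vector average over time only
    both_avg += (c.mean() - both_avg) / step       # ... over frequency AND time

print("time-only, per-bin magnitude:", np.abs(time_avg).mean())
print("frequency+time magnitude    :", abs(both_avg), "  target a^2 =", a**2)
# With ~2000 bins contributing, the frequency+time average sits near a^2 after
# 100 steps, while the per-bin time average is still dominated by its residual.
```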
In this case the amplitude of the sample noise is much smaller than that of the channel noise, so it can only be retrieved by the cross-correlation technique. The outputs of the two channels feed separately into two PAR116 preamplifiers (transformer mode) and PAR124 lock-in amplifiers (used only as additional cascade amplifiers; the monitor output is used). The transformers are used to match the impedance. The two transformers need to be similar, because otherwise they may change the phase and amplitude and affect the convergence; for example, if there is a fixed phase difference $\theta$ between the two transformers, the measured amplitude will be reduced by a factor of $\cos\theta$. After amplification, the signals are fed to the left/right channel inputs of a standard PCI sound card installed in a PII PC. The sound card digitizes the signals with 16-bit resolution at a 44.1 kHz sampling rate. The data are then acquired into the computer memory by a program written in LabVIEW. The program calculates the Fourier spectra of the two channels, performs the correlation, vector averaging, etc., and shows all results on the screen in real time. A PC is much better than a commercial analyzer in terms of memory size and computation speed.

In fig. [fig2], three curves are shown for the 10 Ω sample resistor to demonstrate the result of the conventional cross-correlation algorithm and to compare it with the new algorithm. Curve A shows that the standard deviation of the average over time decreases as time elapses; as shown in sec. [cross], it is proportional to $n^{-1/2}$, which can easily be read off the log-log plot. Curve B shows that the average over time approaches $a^2$, corresponding to around 0.3 nV/√Hz, only slowly, after more than 100 averages. Curve C shows that with the new algorithm, the average over frequency _and_ time converges at $a^2$ almost from the first point! In fact, curve A is proportional to the difference between curve B and curve C. The measured amplitude is close to the expected thermal noise amplitude $\sqrt{4k_BTR}$, which is 0.4 nV/√Hz for the 10 Ω sample resistor. This result is not bad considering that there may be effects from the non-ideal phase and amplitude properties of the transformers and amplifiers, as well as uncertain pre-factors introduced by the data processing, such as the use of windows in the FFT.

For a 10 Ω sample resistor with 20 Ω channel resistors, the cross-correlation results are shown. Curve A shows the standard deviation of the average over time; the slope is proportional to $n^{-1/2}$. Curve B shows the vector average over time. Curve C shows the vector average over both the frequency _and_ time domains. With the new algorithm the convergence is achieved almost from the first point, much faster than the conventional cross-correlation result shown by curve B.

The FFT spectrum after 100 averages is shown in fig. [fig3]. Curve B shows the result of the conventional cross-correlation vector average over 100 time steps. For comparison, curves C and D show the noise spectra of the left and right channels, respectively. The vector average over both the frequency and time domains is just a single number, so it cannot be shown in this frequency-spectrum figure. From curve B, despite the pick-up noise peaks, we can still "see" the real noise level of around 0.3 nV/√Hz, which was also found by the program and shown as the last point of curve B in fig. [fig2]. The program actually averages the spectrum amplitude _scalarly_ from 1 kHz to 2 kHz, where the spectrum is almost flat and the effect of the power-line noise peaks is smaller.
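As a quick check on the thermal noise levels quoted in this section, the Johnson noise density $\sqrt{4k_BTR}$ can be evaluated directly (a small sketch; room temperature is assumed to be 300 K):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # K, assumed room temperature

def johnson_noise_density(R):
    """Thermal noise voltage spectral density sqrt(4 k_B T R) in V/sqrt(Hz)."""
    return math.sqrt(4 * k_B * T * R)

for R in (10.0, 1.5, 20.0):
    print(f"R = {R:5.1f} ohm -> {johnson_noise_density(R) * 1e9:.3f} nV/sqrt(Hz)")
# 10 ohm  -> ~0.407 nV/sqrt(Hz)   (the text quotes 0.4)
# 1.5 ohm -> ~0.158 nV/sqrt(Hz)   (the text quotes 0.158)
# 20 ohm  -> ~0.576 nV/sqrt(Hz)   (each channel lead resistor)
```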
Points too close to the power-line noise peaks were also discarded from this average. It is worth noting that this _scalar_ average of the amplitude over frequency is different from the vector average over frequency.

FFT spectrum after 100 averages. Curve B shows the cross-correlation vector average over 100 time steps; curves C, D show the noise spectra of the left/right channels. The noise floor of curve B is close to 0.3 nV/√Hz, as shown in fig. [fig2]. The decrease in amplitude below 60 Hz and the dip near 8 kHz are due to the amplitude and phase properties of the transformers. The cut-off near 22 kHz is due to the Shannon limit, i.e., half of the 44.1 kHz sampling frequency.

With the presence of huge noise peaks, as shown in fig. [fig3], observing the sample noise level requires that the spectral leakage and sidelobe background of the unwanted power-line noise peaks not mask the real white-noise floor. This is usually achieved by using a special window function in the FFT and by increasing the frequency resolution. Since the Hann window has rapidly decreasing sidelobe magnitudes, it is preferred in this situation over the uniform window that is conventionally used for flat noise-spectrum measurements. By increasing the frequency resolution, the mainlobes of the peaks can be narrowed and their sidelobes attenuated. In our case the sampling rate is 44.1 kHz and the number of samples is chosen to be 32768 ($2^{15}$) points for each step, so the sampling time is about 0.74 s per step and the frequency resolution is about 1.36 Hz. It is possible to increase the number of samples so as to increase the sampling time and decrease the resolution bandwidth; this requires only more PC memory and a faster CPU, which are inexpensive. A simple algorithm is used here to eliminate 3 points on each side of the power-line peak frequencies when doing the average. This is already good enough to find the real noise floor of curve B in fig. [fig3]. More sophisticated approaches, such as an adaptive-filter program that removes the noise peaks and extracts the floor level, are also possible.

As shown in sec. [cross], the number of points used for the vector average over the frequency domain determines how many times faster the new algorithm is compared with the conventional algorithm. Here, since we used the range from 1 kHz to 2 kHz with resolution 1.364 Hz, we get 733 points. After subtracting the points that are too close to the noise peaks, around 600 points remain, so in principle we should converge 600 times faster. To test this, we measured the room-temperature noise of the 1.5 Ω sample resistor with the same 20 Ω channel resistors. The result is shown in fig. [fig4]. The starting points of curves A and B are mostly determined by the 20 Ω resistors, and there is a ratio of about 3 between the two curves around the starting point. To detect the noise level of the 1.5 Ω sample resistor, we require the standard deviation to be 5 times smaller than the convergence limit; that is, the standard deviation must be reduced to the few tens of pV/√Hz level. The number of time steps needed for the conventional cross-correlation method can then be estimated from the $n^{-1/2}$ scaling of the standard deviation; for the new algorithm, we expect only a few steps.

Two cross-correlation test results for the 1.5 Ω sample with 20 Ω channel resistors, one stopped after 100 time steps, the other stopped after 1000 time steps. Curve B shows that the conventional average-over-time method approaches the convergence limit only after 1000 averages. Curve C shows that with the vector average over both frequency and time, the convergence limit is approached within a few time steps.
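A sketch of the windowing and peak-exclusion step just described (Python/NumPy; the 50 Hz mains frequency, the white test data, and the omission of window power normalization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 44100, 32768                  # sampling rate (Hz), samples per step
freqs = np.fft.rfftfreq(n, d=1/fs)    # bin frequencies, resolution fs/n
window = np.hanning(n)                # Hann window: fast-decaying sidelobes

def masked_band_average(cross_spec, f_lo=1e3, f_hi=2e3, mains=50.0, guard=3):
    """Vector-average cross-spectrum bins in [f_lo, f_hi], skipping bins within
    `guard` bins of any power-line harmonic (mains assumed to be 50 Hz)."""
    df = fs / n
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    for k in range(1, int(f_hi / mains) + 1):
        keep &= np.abs(freqs - k * mains) > guard * df
    return cross_spec[keep].mean(), keep.sum()

# usage: window both channels before the FFT, then average the product
v1, v2 = rng.standard_normal(n), rng.standard_normal(n)
c = np.fft.rfft(window * v1) * np.conj(np.fft.rfft(window * v2)) / n
avg, n_bins = masked_band_average(c)   # window power normalization omitted here
print("bins used:", n_bins)            # around 600, cf. the count in the text
```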
As shown in fig. [fig4], the convergence limit is about 0.126 nV/√Hz. This is close to the estimated thermal noise level of a 1.5 Ω resistor at room temperature, which is 0.158 nV/√Hz, and is consistent with the 10 Ω case. At the 1000th time step, the last point of curve A in fig. [fig4] has the value 25.6 pV/√Hz. The ratio between the convergence limit and the standard deviation is therefore about 4.9, which is close to our estimate of 5 for 1372 steps. As for curve C, it approaches the convergence limit from the first point in the 100-time-step case; in the 1000-time-step case, despite some fluctuations that may be caused by broadband noise or the data processing, it also approaches the convergence limit within the first few points. So it is demonstrated that, with the algorithm of vector averaging over frequency and time, the convergence limit can be approached hundreds of times faster than with the conventional cross-correlation algorithm, even with this simple set-up. If there were fewer noise peaks and a larger frequency band were available, this algorithm could give even faster results.

For white-noise measurement, an improved cross-correlation algorithm using a vector average over both the frequency and time domains has been presented. With low-temperature noise measurement in mind, a simple test set-up using a PC and a sound card was built and tested. It was shown that this algorithm can achieve convergence hundreds of times faster than the conventional cross-correlation algorithm. Even with much bigger channel noise and huge pick-up noise, a 100 pV/√Hz noise level and 25 pV/√Hz sensitivity can be achieved in seconds. With a broader frequency bandwidth, a better A/D card and larger PC memory, convergence could be reached even faster. In principle, this algorithm could be used for other types of noise, as long as the shape of the spectrum is known and the phase in the frequency domain is random (for example, 1/f noise).

Dimitris G. Manolakis, Vinay K. Ingle and Stephen M. Kogon, _Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing_ (McGraw-Hill, Boston, 2000).

Curves A and B are smooth because they are in fact the result of a _scalar_ average of the amplitude over a frequency range at every time step, followed by a _vector_ average over time; this is further explained in the corresponding figure.
White noise measurement can provide very useful information in addition to normal transport measurements. For example, thermal noise measurement can be used at sub-kelvin temperatures to determine the absolute electron temperature without applying any heating current, and shot noise measurements have helped in understanding the properties of nano- and mesoscopic normal-metal/superconductor structures. But at low temperature, and for relatively small resistances, it is difficult to measure the sample's noise magnitude, because the background thermal noise can be much larger and there are usually other pick-up noises. The cross-correlation technique is one way to solve this problem. This article describes an improved cross-correlation algorithm that averages in both the frequency and time domains, and its realization in a simple instrument set-up with a PC and a sound card. With this set-up it is shown that, even with much larger background noise and pick-up noises, a 100 pV/√Hz white-noise level can easily be measured in seconds. Compared to the normally used cross-correlation methods, this is several orders of magnitude faster.
Random walk Metropolis (RWM) algorithms are widely used generic Markov chain Monte Carlo (MCMC) algorithms. The ease with which RWM algorithms can be constructed has no doubt played a pivotal role in their popularity. The efficiency of a RWM algorithm depends fundamentally upon the scaling of the proposal density: choose the variance of the proposal too small, and the RWM will converge slowly, since all its increments are small; conversely, choose the variance of the proposal too large, and too high a proportion of proposed moves will be rejected. Of particular interest is how the scaling of the proposal variance depends upon the dimensionality of the target distribution. The target distribution is the distribution of interest, and the MCMC algorithm is constructed such that the stationary distribution of the Markov chain is the target distribution.

The introduction is structured as follows. We outline known results for continuous independent and identically distributed product densities from the seminal work of Roberts, Gelman and Gilks and subsequent work. We highlight the scope and limitations of those results before introducing the discontinuous target densities to be studied in this paper. While the statements of the key results (Theorem [main]) in this paper are similar to those given for continuous target densities, the proofs are markedly different, and a discussion of why a new method of proof is required for discontinuous target densities is given. Finally, we give an outline of the remainder of the paper.

The results of this paper have quite general consequences for the implementation of Metropolis algorithms on discontinuous densities (as are commonly applied in many Bayesian statistics problems), namely: full- (high-) dimensional update rules can be an order of magnitude slower than strategies involving smaller-dimensional updates (see Theorem [thmprop] below), and for target densities with bounded support, Metropolis algorithms can be an order of magnitude slower than algorithms which first transform the target support to an unbounded domain.

In the work of Roberts, Gelman and Gilks, a sequence of target densities of the product form $\pi_d(\mathbf{x}^d) = \prod_{j=1}^d f(x_j)$ was considered as $d \to \infty$, where $\log f$ is twice differentiable and satisfies certain mild moment conditions; see their conditions (A1) and (A2). The following random walk Metropolis algorithm was used to obtain a sample from $\pi_d$. Draw $\mathbf{x}_0^d$ from $\pi_d$. For $i \geq 1$, let the components of $\mathbf{z}_i^d$ be independent and identically distributed (i.i.d.), and at time $i$ propose $\mathbf{y}^d = \mathbf{x}_{i-1}^d + \sigma_d \mathbf{z}_i^d$, where $\sigma_d$ is the proposal standard deviation, to be discussed shortly. Set $\mathbf{x}_i^d = \mathbf{y}^d$ with probability $1 \wedge \{\pi_d(\mathbf{y}^d)/\pi_d(\mathbf{x}_{i-1}^d)\}$; otherwise set $\mathbf{x}_i^d = \mathbf{x}_{i-1}^d$. It is straightforward to check that $(\mathbf{x}_i^d)$ has stationary distribution $\pi_d$, and hence, for all $i$, $\mathbf{x}_i^d \sim \pi_d$. The key question addressed there was: starting from the stationary distribution, how should $\sigma_d$ be chosen to optimize the rate at which the RWM algorithm explores the stationary distribution? Since the components of $\mathbf{x}_i^d$ are i.i.d., it suffices to study the marginal behavior of the first component, $x_{i,1}^d$.
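The algorithm just described can be written down in a few lines (a Python/NumPy sketch; the uniform increments, the uniform target on the cube anticipating the class of densities studied below, and the scaling $\sigma_d = l/d$ are illustrative choices, not the paper's own code):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f(x):
    """g(x) = 0 on [0,1]: the uniform density, the simplest member of the class."""
    return np.zeros_like(x)

def rwm(d, sigma, n_iter):
    """Random walk Metropolis for pi_d(x) = prod_j f(x_j) supported on [0,1]^d."""
    x = rng.uniform(size=d)                          # start in stationarity
    lp, accepted = log_f(x).sum(), 0
    for _ in range(n_iter):
        y = x + sigma * rng.uniform(-1, 1, size=d)   # uniform increments
        if np.all((y > 0) & (y < 1)):                # density is zero off the cube
            lp_y = log_f(y).sum()
            if np.log(rng.uniform()) < lp_y - lp:
                x, lp, accepted = y, lp_y, accepted + 1
    return accepted / n_iter

l = 1.0
for d in (10, 100, 1000):
    print(d, round(rwm(d, sigma=l / d, n_iter=20000), 3))
# With sigma_d = l/d the acceptance rate settles to a nonzero constant as d
# grows, in contrast to the l/sqrt(d) scaling familiar from smooth targets.
```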
in , it was shown that if and ,1}^d ] .note that the `` speed measure '' of the diffusion only depends upon through .the diffusion limit for is unsurprising in that for a time interval of length , moves are made each of size .therefore the movements in the first component ( appropriately normalized ) converge to those of a langevin diffusion with the `` most efficient '' asymptotic diffusion having the largest speed measure .since the diffusion limit involves speeding up time by a factor of , we say that the mixing of the algorithm is .the optimal value of is , which leads to an average optimal acceptance rate ( aoar ) of 0.234 .this has major practical implications for practitioners , in that , to monitor the ( asymptotic ) efficiency of the rwm algorithm it is sufficient to study the proportion of proposed moves accepted .there are three key assumptions made in .first , , that is , the algorithm starts in the stationary distribution and is chosen to optimize exploration of the stationary distribution .this assumption has been made in virtually all subsequent optimal scaling work ; see , for example , and .the one exception is , where is started from the mode of with explicit calculations given for a standard multivariate normal distribution . in , it is shown that is optimal for maximizing the rate of convergence to the stationary distribution .since convergence is shown to occur within iterations , the time taken to explore the stationary distribution dominates the time taken to converge to the stationary distribution , and thus overall it is optimal to choose .it is difficult to prove generic results for .however , the findings of suggest that even when , it is best to scale the proposal distribution based upon .it is worth noting that in it was found that for the metropolis adjusted langevin algorithm ( mala ) , the optimal scaling of for started at the mode of a multivariate normal is compared to for .second , is an i.i.d .product density .this assumption has been relaxed by a number of authors with and an aoar of 0.234 still being the case , for example , independent , scaled product densities ( and ) , gibbs random fields , exchangeable normals and elliptical densities .thus the simple rule of thumb of tuning such that one in four proposed moves are accepted holds quite generally . in and , examples where the aoar is strictly less than 0.234are given .these correspond to different orders of magnitude being appropriate for the scaling of the proposed moves in different components .third , the results are asymptotic as .however , simulations have shown that for i.i.d .product densities an acceptance rate of 0.234 is close to optimal for ; see , for example , .departures from the i.i.d .product density require larger for the asymptotic results to be optimal , but is often seen in practical mcmc problems . in and , optimal acceptance rates are obtained for finite for some special cases . with the exceptions of and , in the above works assumed to have a continuous ( and suitably differentiable ) probability density function ( p.d.f . ) .the aim of the current work is to investigate the situation where the target distribution has a discontinuous p.d.f . , and specifically , target distributions confined to the -dimensional hypercube ^d ] with we then use the following random walk metropolis algorithm to obtain a sample from .draw from .for and let be independent and identically distributed ( i.i.d . 
) according to ] converges weakly to an appropriate langevin diffusion with speed measure as , where .this gives a clear indication of how the markov chain explores the stationary distribution .by contrast the esjd only gives a measure of average behavior and does not take account of the possibility of the markov chain becoming `` stuck . ''if ] for discontinuous target densities is mathematical convenience .the results proved in this paper hold with gaussian rather than uniform proposal distributions , but some elements of the proof are less straightforward .for discussion of the esjd for densities ( [ eq1a ] ) for general subject to < \infty ] as and - 2\phi\biggl ( -\frac{l \sqrt{i}}{2 } \biggr ) \biggr| \leq\varepsilon_d,\ ] ] where as . while ( [ eqrevb7 ] ) is not explicitly stated in ,it is the essence of the requirements of the sets , stating that for large , with high probability over the first iterations the acceptance probability of the markov chain is approximately constant , being within of .( note rather than is used for dimensionality in . )thus in the limit as the effect of the other components on movements in the first component converges to a deterministic acceptance probability .the situation is more complex for of the form given by ( [ eq1a ] ) and ( [ eq1b ] ) as the acceptance rate in the limit as is inherently stochastic .for example , suppose is the uniform distribution on the -dimensional hypercube so that ^d \}} ] steps ; cf .in particular , we show that the acceptance probability converges very rapidly to its stationary measure , so that over ] proposed moves are accepted . by comparison , ,1}^d - x_{0,1}^d | \leq[d^\delta ] \sigma_d ] iterations .that is , we show that there exists such that , for any , } \{\mathbf{x}_t^d \notin\tilde{f}_d \ } ) \rightarrow0 ] iterations the markov chain stays in , where the average number of accepted proposed moves in the following ] , ^d \ } } \,d \mathbf{z}^d \nonumber\\ & \geq&\int h_d ( \mathbf{z}^d ) \ { 1 \wedge\exp(- dg^\ast \sigma_d ) \ } 1_{\ { \mathbf{x}^d + \sigma_d \mathbf { z}^d \in [ 0,1]^d \ } } \,d \mathbf{z}^d \nonumber\\[-8pt]\\[-8pt ] & = & \exp(-l g^\ast ) \int h_d ( \mathbf{z}^d ) 1_{\ { \mathbf { x}^d + \sigma_d \mathbf{z}^d \in[0,1]^d \ } } \,d \mathbf{z}^d \nonumber\\ & \geq & \exp(-l g^\ast ) \biggl ( \frac{1}{2 } \biggr)^{b_d^l ( \mathbf{x}^d)}.\nonumber\end{aligned}\ ] ] this lower bound for will be used repeatedly .the pseudo - rwm process moves at each iteration , which is the key difference to the rwm process .furthermore , the moves in the pseudo - rwm process are identical to those of the rwm process , conditioned upon a move in the rwm process being accepted , that is , its jump chain . for ,let denote the successive states of the pseudo - rwm process , where .the pseudo - rwm process is a markov process , where for , and given that , has p.d.f . note that for . since , we can couple the two processes to have the same starting value .a continued coupling of the two processes is outlined below .suppose that .then for any , that is , the number of iterations the rwm algorithm stays at before moving follows a geometric distribution with `` success '' probability .therefore for , let denote independent geometric random variables , where for , denotes a geometric random variable with `` success '' probability . for ,let and for , let where the sum is zero if vacuous . 
for , attach to .thus denotes the total number of iterations the rwm process spends at before moving to .hence , the rwm process can be constructed from by setting and for all , .obviously the above process can be reversed by setting equal to the accepted move in the rwm process . for each ,the components of are independent and identically distributed. therefore we focus attention on the first component as this is indicative of the behavior of the whole process . for and , let ,1}^d ] .[ main ] fix .for all , let .then , as , in the skorokhod topology on , where satisfies the ( reflected ) langevin sde on ] iterations , for .fix and let be a sequence of positive integers satisfying \leq k_d \leq[d^\delta] ] and for , let ,1}^d ] .hence , for all , \sigma_d.\ ] ] therefore by , theorem 4.1 , as , if as .hence we proceed by showing that let be the ( discrete - time ) generator of and let be an arbitrary test function of the first component only .thus } { \mathbb{e } } [ h ( \tilde{\mathbf{x}}_1^d ) - h ( \tilde{\mathbf{x}}_0^d ) |\tilde{\mathbf{x}}_0^d = \mathbf{x}^d].\ ] ] the generator of the ( limiting ) one - dimensional diffusion for an arbitrary test function is given by for all ] is the set of bounded continuous functions upon ] and ) ] and . in appendix[ secsets ] , we prove ( [ eqn114 ] ) for the sets given in ( [ emainx1 ] ) .note that ( [ eqn114 ] ) follows immediately from theorem [ lem321 ] , ( [ eqss46 ] ) since .an outline of the roles played by each is given below .for ( ) the total number of components in ( _ close to _ ) the rejection region are controlled .for after iterations the total number and position of the points in are approximately from the stationary distribution of .finally , for , ] iterations , where the sum is set equal to zero if vacuous . then }^d = \hat{\mathbf{x}}_{[p_d d^\delta]}^d ] , in particular in lemma [ lem40 ] . in appendixes [ secsets ] and [ secpd ]we make no such assumption .however , =0 ] , which is a measure of the `` roughness '' of . for discontinuous densities of the form ( [ eq1b ] ) , depends upon , the ( mean of the ) limit of the density at the boundaries ( discontinuities ) .discussion of the role of the density in the behavior of the rwm algorithm is given in section [ secextsim ] .the most important consequence of theorem [ main ] is the following result .[ mc1 ] let .then \rightarrow a(l ) \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] is maximized by with clearly , if is known , can be calculated explicitly . however , where mcmc is used , will often only be known up to the constant of proportionality .this is where corollary [ mc1 ] has major practical implications , in that , to maximize the speed of the limiting diffusion , and hence , the efficiency of the rwm algorithm , it is sufficient to monitor the average acceptance rate , and to choose such that the average acceptance rate is approximately . therefore there is no need to explicitly calculate or estimate the constant of proportionality .in this section , we discuss the extent to which the conclusions of theorem [ main ] extend beyond being an i.i.d . product density upon the -dimensional hypercube and .first we present two extensions of theorem [ main ] .the second extension , theorem [ thmprop ] , is an important practical result concerning lower - dimensional updating schema .suppose that is nonzero on the positive half - line .that is , and otherwise .[ thmhalf ] fix . 
for all , let , given by ( [ eqtri1 ] ) , with latexmath:[ ] where is standard brownian motion , and .let denote the average acceptance rate of the rwm algorithm in dimensions where a proportion of the components are updated at each iteration .let we then have the following result which mirrors corollaries[ mc1 ] and [ mc2 ] . [ mc3 ] let as .then for fixed , is maximized by and also corollary [ mc3 ] is of fundamental importance from a practical point of view , in that it shows that the optimal speed of the limiting diffusion is inversely proportional to .therefore the optimal action is to choose as close to 0 as possible .furthermore , we have shown that not only is full - dimensional rwm bad for discontinuous target densities but it is the worst algorithm of all the metropolis - within - gibbs rwm algorithms .we now go beyond i.i.d .product densities with a discontinuity at the boundary and .we consider general densities on the unit hypercube , discontinuities not at the boundary and . as mentioned in section [ secint ] , for i.i.d .product densities , the speed measure of the limiting one - dimensional diffusion , , is equal to the limit , as , of the esjd times .therefore we consider the esjd for the above - mentioned extensions as being indicative of the behavior of the limiting langevin diffusion .we also highlight an extra criterion which is likely to be required in moving from an esjd to a langevin diffusion limit . using the proof of theorem [ main ] , it is straightforward to show that \lim_{{d \rightarrow\infty } } { \mathbb{e}}\bigl [ 1 _ { \ { \mathbf{x}_0^d + \sigma_d \mathbf{z}_1^d \in[0,1]^d \ } } \bigr ] \\ & = & \frac{l^2}{3 } \lim_{{d \rightarrow\infty } } { \mathbb{e}}\biggl [ \biggl ( \frac{3}{4 } \biggr)^{b_d^l ( \mathbf{x}_0^d ) } \biggr].\nonumber\end{aligned}\ ] ] the first equality in ( [ eqextd1 ] ) can be proved using lemma [ lem33 ] , ( [ eq33b ] ) , where for , = 1/3 ] .that is , the acceptance probability of a proposed move is dominated by whether or not the proposed move lies inside the -dimensional unit hypercube . proposed moves inside the hypercubeare accepted with probability for any ; see lemma [ lem35 ] .thus it is the number and behavior of the components at the boundary of the hypercube ( the discontinuity ) which determine the behavior of the rwm algorithm .this is also seen in theorems [ thmhalf ] and [ thmprop ] . first , we consider discontinuities not at the boundary .suppose that , where \ } } \exp(g ( x ) ) \qquad(x \in\mathbb{r})\ ] ] for some .further suppose that is continuous ( twice differentiable ) upon ] , where is assumed to be continuous and twice differentiable .let and assuming that we have that times the esjd satisfies \rightarrow\frac{l^2}{3 } \lim_{{d \rightarrow\infty } } { \mathbb{e}}\biggl [ \biggl ( \frac{3}{4 } \biggr)^{b_d^l ( \mathbf{x}_0^d ) } \biggr ] \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] note that ( [ eqextd2 ] ) is a weak condition and should be straightforward to check using a taylor series expansion of . 
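Two of the practical messages above can be checked empirically. First, Corollary [mc1]: since the optimal acceptance rate does not involve the normalizing constant, one can tune by scanning $l$ and watching the acceptance rate against an efficiency proxy. A sketch (Python/NumPy; the uniform target, the grid of $l$ values, and the ESJD of the first component as proxy are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_iter = 200, 40000

def scan(l):
    """Estimate acceptance rate and scaled first-component ESJD for sigma_d = l/d."""
    sigma = l / d
    x = rng.uniform(size=d)                  # uniform target on [0,1]^d
    acc, esjd = 0, 0.0
    for _ in range(n_iter):
        y = x + sigma * rng.uniform(-1, 1, size=d)
        if np.all((y > 0) & (y < 1)):        # uniform target: accept iff in cube
            esjd += (y[0] - x[0]) ** 2
            x, acc = y, acc + 1
    return acc / n_iter, d * d * esjd / n_iter

for l in (0.5, 1.0, 2.0, 4.0, 6.0, 8.0):
    a, e = scan(l)
    print(f"l = {l:3.1f}   acceptance = {a:.3f}   scaled ESJD = {e:.2f}")
# The l maximizing the scaled ESJD pins down the optimal acceptance rate,
# with no knowledge of the target's normalizing constant required.
```

Second, Theorem [thmprop] and Corollary [mc3]: updating only a proportion $c$ of the components permits larger per-component steps. Continuing the sketch above (the random-subset update rule is one natural implementation, assumed here rather than taken from the source):

```python
def rwm_within_gibbs(d, c, l, n_iter=40000):
    """Update a uniformly chosen subset of ceil(c*d) components per iteration,
    with per-updated-component step size l/(c*d); uniform target on [0,1]^d."""
    k = max(1, int(np.ceil(c * d)))
    sigma = l / k
    x = rng.uniform(size=d)
    esjd = 0.0
    for _ in range(n_iter):
        idx = rng.choice(d, size=k, replace=False)
        y = x.copy()
        y[idx] += sigma * rng.uniform(-1, 1, size=k)
        if np.all((y[idx] > 0) & (y[idx] < 1)):
            esjd += (y[0] - x[0]) ** 2
            x = y
    return d * d * esjd / n_iter

for c in (1.0, 0.5, 0.1):
    print(f"c = {c:3.1f}   scaled ESJD = {rwm_within_gibbs(200, c, 4.0):.2f}")
# Smaller update proportions c permit larger per-component steps, and the
# per-iteration efficiency of a single component grows roughly like 1/c.
```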
for i.i.d .product densities , as .more generally , the limiting distribution of will determine the limit of the right - hand side of ( [ eqextd3 ] ) .in particular , so long as there exist and such that , the right - hand side of ( [ eqextd3 ] ) will be nonzero for .it is informative to consider what conditions upon are likely to be necessary for a diffusion limit , whether it be one - dimensional or infinite - dimensional as in .suppose that as .for a diffusion limit we will require moment conditions on , probably requiring that there exists such that < \infty ] , so that is chosen uniformly at random over the hypercube .note that , if is the uniform distribution , with the right - hand side maximized by taking compared with for .we expect to see similar behavior to , in that the optimal ( in terms of the esjd ) will vary as the algorithm converges to the stationary distribution but will be of the form throughout .the rwm algorithm is unlikely to get `` stuck '' with it conjectured that for any and , } \{ b_d^l ( \mathbf{x}_t^d ) \geq \gamma\log d \ } \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] simulations with and and suggest that convergence occurs in . for convergence ,we monitor the mean of for and the variance of for .the sets consist of the intersection of four sets . for , we will define and discuss the role that it plays in the proof of theorem [ main ] , one at a time .furthermore , we show that in stationarity it is highly unlikely that does not belong to .since we rely upon a homogenization argument , it is necessary to go further than the sets to the sets . in particular , if , then it is highly unlikely that any of }^d ] iterations . to study andlater we require the following lemmas .[ lem31a ] for a random variable , suppose that there exist such that and for all , . then first note that \\[-8pt ] & = & { \mathbb{p}}(x \in a | x \in d^c , x \in b ) { \mathbb{p}}(x \notin d ( [ eq31e ] ) and using ( [ eq31c ] ) and . [ lem34 ] suppose that a sequence of sets is such that there exists such that fix and let } \ { \hat{\mathbf{x}}_i^d \notin f_d^\star\cap f_d^1 \ } | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-\varepsilon } \biggr\}.\ ] ] then since , } \ { \mathbf{x}_i^d \notin f_d^\star\cap f_d^1 \ } \biggr ) \leq d^{2 + \delta+ \gamma } { \mathbb{p}}(\mathbf{x}_0^d \notin f_d^\ast\cap f_d^1).\ ] ] therefore for all sufficiently large , by bayes s theorem , . therefore taking } \{ \mathbf{x}_i^d \notin f_d^\ast\cap f_d^1 \} ] and } \theta_i^d \geq d^\delta ] .thus } \theta_i^d < d^\delta\biggr),\ ] ] and ( [ eq34c ] ) follows from ( [ eq34h ] ) , ( [ eq34j ] ) and ( [ eq34k ] ) .as noted in section [ secalg ] , we follow by considering the behavior of the random walk metropolis algorithm over steps of size ] iterations , while over ] of the proposed moves are accepted . however , we need to control the number of components which are _ close to _ the rejection region ( ) and the distribution of the position of the components in the rejection region after ] , consequently , for any , as . fix . by stationarity and markov s inequality , for all , | \geq\sqrt{k_d}\bigr )\nonumber\\[-8pt]\\[-8pt ] & \leq & \frac{d^\kappa}{k_d^m } { \mathbb{e}}\bigl [ \bigl ( b_d^{k_d^{3/4 } } ( \mathbf{x}^d_0 ) - { \mathbb{e } } [ b_d^{k_d^{3/4 } } ( \mathbf{x}^d_0)]\bigr)^{2 m } \bigr].\nonumber\end{aligned}\ ] ] however , , so by lemma [ lemz32 ] for any , for all sufficiently large , \bigr)^{2 m } \bigr ]\leq k_m k_d^{3m/4},\ ] ] where . 
since ] iterations , we introduce a simple random walk on the hypercube ( rwh ) . the biggest problem in analyzing the rwm orpseudo - rwm algorithm is the dependence between the components .however , the dependence is weak and whether or not a proposed move is accepted is dominated by whether or not the proposed moves lies inside or outside the hypercube .therefore we couple the rwm algorithm to the simpler rwh algorithm . for ,define the rwh algorithm as follows .let denote the position of the rwh algorithm after iterations .then ^d ] .for our purposes it will suffice to consider the coupling of the pseudo - rwm and pseudo - rwh algorithms over ] iterations .note that the rwh algorithm coincides with the rwm algorithm with a uniform target density over the -dimensional cube , so in this case the coupling is exact .the components of the pseudo - rwh algorithm behave independently . for ,let and for , let .then is the probability that a proposed move from is accepted in the rwh algorithm .[ lem33 ] for any and ^d ] ; then we can couple and using and as follows .let ^d \mathbf{x}^d + \sigma_d \mathbf{z}_1^d \in[0,1]^d \displaystyle u \leq1 \wedge \exp\biggl(\sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr) ] and . thus ^d,\nonumber\\ & & \hspace*{59pt }u > 1 \wedge\exp\biggl(\sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ }\biggr ) \biggr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad= d^\alpha{\mathbb{e}}\biggl [ \prod_{j=1}^d 1 _ { \ { 0< x_j + \sigma_d z_{1,j } < 1 \ } } \nonumber\\ & & \hspace*{73pt}{}\times\biggl\{1 - 1 \wedge\exp\biggl ( \sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr ) \biggr\ } \biggr ] \nonumber\\ & & \qquad\leq d^\alpha{\mathbb{e}}\biggl [ \biggl| \sum_{j=1}^d \{g ( x_j + \sigma_d z_{1,j } ) - g(x_j ) \ } \biggr| \biggr],\nonumber\end{aligned}\ ] ] since for all , . by taylor s theorem , for , there exists lying between 0 and such that since is continuously twice differentiable on , there exists such that since the components of are independent , by jensen s inequality , ( [ eq33d ] ) and \leq2 { \mathbb{e}}[x^2 ] + 2 c^2 ] , there exists a coupling such that } \{\mathbf{x}_j^d \neq\mathbf{w}_j^d \}| \mathbf{x}_0^d \equiv\mathbf{w}_0^d = \mathbf{x}^d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] moreover , if and , there exists a coupling such that } \{\hat{\mathbf{x}}_j^d \neq \hat{\mathbf{w}}_j^d \ }| \mathbf{x}_0^d \equiv\mathbf{w}_0^d = \mathbf{x}^d\biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ] for and let let ] . for any sequence of such that , also for all , fix and set . to prove ( [ eq38aa ] ) and ( [ eq38ab ] ) we couple the components of to a simple reflected random walk process .set for some .let be i.i.d . 
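The coupling used in Lemma [lem33] is easy to state in code: both chains share the increment $\mathbf{z}_1^d$ and the uniform acceptance variable $u$, and they can only disagree when the density part of the acceptance test fails. A minimal sketch (Python/NumPy; the smooth $g$ below and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
d, l, n = 200, 1.0, 1000
sigma = l / d

def g(u):                 # an illustrative smooth log-density on [0,1]
    return 0.1 * u

x_rwm = x_rwh = rng.uniform(size=d)          # common starting point
diverged_at = None
for t in range(n):
    z = sigma * rng.uniform(-1, 1, size=d)   # shared proposal increment
    u = rng.uniform()                        # shared acceptance variable
    if np.all((x_rwh + z > 0) & (x_rwh + z < 1)):
        x_rwh = x_rwh + z                    # RWH: accept iff inside the cube
    if (np.all((x_rwm + z > 0) & (x_rwm + z < 1))
            and u <= min(1.0, np.exp(np.sum(g(x_rwm + z) - g(x_rwm))))):
        x_rwm = x_rwm + z                    # RWM: additional density check
    if diverged_at is None and not np.array_equal(x_rwm, x_rwh):
        diverged_at = t
# The sum of g-increments is O(l/sqrt(d)), so given an in-cube proposal the
# RWM acceptance probability is close to 1 and the chains agree for a long time.
print("first divergence at step:", diverged_at)
```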
according to ] , there exists , such that for all , \leq k d^{- ( 1 + \beta m/8)}.\ ] ] fix .note that & \leq & { \mathbb{e } } [ q^d ( x_{0,1}^d;l ; k_d)^m ] \nonumber\\ & = & \int_0 ^ 1 q^d ( x ; l ; k_d)^m f ( x ) \,dx \nonumber\\[-8pt]\\[-8pt ] & = & \int_{r_d^{k_d^{3/4 } } } q^d ( x ; l ; k_d)^m f ( x ) \,dx\nonumber\\ & & { } + \int_{(r_d^{k_d^{3/4 } } ) ^c } q^d ( x ; l ; k_d)^m f ( x ) \,dx.\nonumber\end{aligned}\ ] ] the two terms on the right - hand side of ( [ eq37aa ] ) are bounded using ( [ eq38ag ] ) and ( [ eq38ai ] ) , respectively .thus it follows from the proof of lemma [ lem38a ] that there exist constants such that , for all , & \leq & \int_{r_d^{k_d^{3/4 } } } \biggl ( \frac{k_1}{\sqrt{k_d } } \biggr)^m f ( x ) \,dx\nonumber\\ & & { } + \int_{(r_d^{k_d^{3/4 } } ) ^c } \biggl\ { 2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr ) \biggr\}^m f ( x ) \,dx \nonumber\\ & \leq & { \mathbb{p } } ( x_{0,1}^d \in r_d^{k_d^{3/4 } } ) \biggl ( \frac{k_1}{\sqrt{k_d } } \biggr)^m\\ & & { } + { \mathbb{p } } ( x_{0,1}^d \notin r_d^{k_d^{3/4 } } ) \times2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr ) \nonumber\\ & \leq & k_2 \frac{k_d^{3/4}}{d } k_d^{-m/2 } + 2 \exp\biggl ( - \frac{\sqrt{k_d}}{8l^2 } \biggr).\nonumber\end{aligned}\ ] ] the corollary follows from ( [ eq37b ] ) since and ] and , by the triangle inequality , | > d^{-\gamma}/16\bigr ) \\ & & \qquad\quad { } + d^\kappa{\mathbb{p}}\bigl ( | { \mathbb{e } } [ \lambda_d ( \mathbf{x}_0^d ; r_d ; k_d ) ] - \lambda(r_d)| > d^{-\gamma}/16\bigr).\nonumber\end{aligned}\ ] ] in turn we show that the two terms on the right - hand side of ( [ eq36b ] ) converge to 0 as . by markov s inequality, we have that for any , | > d^{-\gamma}/16\bigr ) \nonumber\\ & & \qquad\leq16^{m } d^{\kappa+m \gamma } { \mathbb{e}}\biggl [ \biggl ( \sum_{j = 1}^d \ { q^d ( x_{0,j};r_d ; k_d ) - { \mathbb{e } } [ q^d ( x_{0,j};r_d ; k_d ) ] \ } \biggr)^m \biggr ] \nonumber\\[-8pt]\\[-8pt ] & & \qquad= 16^m d^{\kappa+m \gamma } \sum_{i_1 = 1}^d \cdots\sum_{i_m = 1}^d { \mathbb{e}}\biggl [ \prod_{j=1}^m \ { q^d ( x_{0,i_j};r_d ; k_d)\nonumber\\ & & \hspace*{167pt } { } - { \mathbb{e } } [ q^d ( x_{0,i_j};r_d ; k_d ) ] \ } \biggr].\nonumber\end{aligned}\ ] ] since the components of are independent and identically distributed , we have for any , there exists and with such that \ } \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad = \prod_{j=1}^j { \mathbb{e}}\bigl [ \ { q^d ( x_{0,1};r_d ; k_d ) - { \mathbb{e } } [ q^d ( x_{0,1};r_d ; k_d ) ] \}^{l_j } \bigr].\nonumber\end{aligned}\ ] ] note that if any , then the right - hand side of ( [ eq36d ] ) is equal to 0 . by corollary [ lem37 ] ,if , there exists such that the right - hand side of ( [ eq36d ] ) is less than or equal to . 
furthermore , there exists such that for any and , there are at most configurations of such that for , of the components are the same .therefore there exists such that \ } \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad \leq k d^{-m \beta/8}.\nonumber\end{aligned}\ ] ] taking , it follows from ( [ eq36e ] ) that the right - hand side of ( [ eq36c ] ) converges to 0 as .the lemma follows by showing that for all sufficiently large , - \lambda(r_d)| \leq d^{-\gamma}/16.\ ] ] note that & = & d { \mathbb{e } } [ q^d ( x_{0,1};r;k_d ) ] \nonumber\\ & = & d \int_0^{k_d^{3/4}/d } q^d ( x;r_d ; k_d ) f ( x ) \,dx\nonumber\\[-8pt]\\[-8pt ] & & { } + d \int_{k_d^{3/4}/d}^{1-k_d^{3/4}/d } q^d ( x;r_d ; k_d ) f ( x ) \,dx\nonumber\\ & & { } + d \int_{1-k_d^{3/4}/d}^1 q^d ( x;r_d ; k_d ) f ( x ) \,dx.\nonumber\end{aligned}\ ] ] by ( [ eq38ai ] ) , the second integral on the right - hand side of ( [ eq36 g ] ) is bounded above by as .let . then by taylor s theorem , for , thus \\[-8pt ] & & \qquad\leq d \times f_\star\frac{k_d^{3/4}}{d } \times \int_0^{k_d^{3/4}/d } q^d ( x;r_d ; k_d ) \,dx.\nonumber\end{aligned}\ ] ] similarly , we have that \\[-8pt ] & & \qquad\leq d \times f_\star\frac{k_d^{3/4}}{d } \times \int_{1-k_d^{3/4}/d}^1 q^d ( x;r_d ; k_d ) \,dx.\nonumber\end{aligned}\ ] ] by symmetry , , so - 2 f^\ast d \int_0 ^ 1 q^d ( x;r_d;k_d ) \,dx\biggr| \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\hspace*{-35pt}\ ] ] since , using lemma [ lem38a ] , ( [ eq38ab ] ) , we have that , for all sufficiently large , \\[-8pt ] & & \qquad\quad { } + d \int_{\sigma_d}^{1-\sigma_d } \biggl\ { \frac{1 } { \int_0 ^ 1 \omega_d ( y ) \,dy } - 1 \biggr\ } q^d ( x;r_d;k_d ) \,dx \nonumber\\ & & \qquad\leq4 d^{1 + \gamma } \sigma_d d^{-2 \gamma } + d^{1 + \gamma } \int_0 ^ 1 \frac{2 \sigma_d}{\int_0 ^ 1 \omega_d ( y ) \,dy } q^d ( x;r_d;k_d ) \,dx.\nonumber\end{aligned}\ ] ] let be defined as in lemma [ lem38a ] . note that ] . fix and let d^{- \theta},l \} ] , the lemma follows since \leq k \leq[d^\delta ] } \sup_{0 \leqr \leq l } | \lambda_d ( \mathbf{x}_0^d;r ; k ) - \lambda(r ) | > d^{-\gamma }\bigr ) \nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq d^ { \kappa } \sum_{k=[d^\beta]}^{[d^\delta ] } { \mathbb{p}}\bigl ( \sup_{0 \leqr \leq l } | \lambda_d ( \mathbf{x}_0^d;r ; k ) - \lambda(r ) | > d^{-\gamma } \bigr).\nonumber\end{aligned}\ ] ] finally , we consider \biggr| < d^{-{1}/{8 } } \biggr\}.\ ] ] the sets mirror the sets in and are used when considering and but play no role in analyzing .[ lem320 ] for any , let and fix . 
then by hoeffding s inequality , \biggr| > d^{7/8 }\biggr ) \nonumber\\[-8pt]\\[-8pt ] & \leq & d^\kappa\times2 \exp\biggl ( - \frac{2 d^{7/4}}{d ( g^\ast)^4 } \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] finally we are in position to consider and .recall that , for , and } \hat{\mathbf{x}}_j^d \notin f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ) \leq d^{-3 } \biggr\}.\ ] ] combining lemmas [ lem31 ] , [ lem32 ] , [ lema311 ] and [ lem320 ] , we have the following theorem .[ lem321 ] for any , hence , by lemma [ lem34 ] , for any , also using the couplings outlined above , we have that } \ { \hat{\mathbf{w}}_j^d \notin f_d \ } |\hat{\mathbf{w}}_0^d \in \tilde{f}_d \biggr ) \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\ ] ]we show that for any sequence such that , the key result is lemma [ lem311 ] which states that after iterations , the configuration of the components in the rejection region resemble the configuration of the points of a poisson point process with rate on the interval ] and , fix and . let where for , are independent poisson random variables with means the lemma is proved by showing that by , theorem 1 , by lemma [ lem38a ] , ( [ eq38aa ] ) the right - hand side of ( [ eq311c ] ) converges to 0 as .for the second term on the right - hand side of ( [ eq311b ] ) , it suffices to show that ( for discrete random variables convergence in distribution and convergence in total variation distance are equivalent ; see , page 254 . )the components of and are independent , and therefore it is sufficient to show that , for all , for all , ( [ eq311d ] ) holds , if therefore the lemma follows from ( [ eq311e ] ) since \leq k_d \leq[d^\delta] ] iterations the distribution of the components in the rejection region are approximately given by .we show that studying the pseudo - rwh algorithm over ] .note that satisfies ).\ ] ] let } \sum_{j=0}^{[\pi d^\delta -1 ] } m_j ( \omega_d ( \hat{\mathbf{w}}_j^d)) ] , . then using lemma [ lem310 ] , ( [ eq310b ] ) , and can be coupled such that since , , the right - hand side of ( [ eq312d ] ) is less than .note that so by lemma [ lem33 ] for any , times the right - hand side of ( [ eq312d ] ) converges to 0 as .taking such that , } { \mathbb{p}}\bigl(m_j ( j_d ( \hat{\mathbf{x}}_j^d ) ) \neq m_j ( \omega_d ( \hat{\mathbf{w}}_j^d))| \hat{\mathbf{w}}_j^d = \hat{\mathbf{x}}_j^d \in f_d^1\bigr)\nonumber\\[-8pt]\\[-8pt ] & & \qquad\rightarrow0 \qquad \mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] the lemma then follows from ( [ eq312b ] ) and ( [ eq312e ] ) .we show that it suffices to study } \sum_{j=0}^{[\pi d^\delta-1 ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1} ] by the mean of the means of the geometric random variables .[ lem313 ] for any and for any sequence of such that , if as . 
let } \{\hat{\mathbf{w}}_j^d \notin f_d \} ] has the same limit as ( should one exist ) as } \biggl ( 1 + \frac{i \tau}{[d^\delta ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } \biggr ) \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr],\ ] ] which in turn has the same limit as as } \exp \biggl ( \frac{i \tau}{[d^\delta ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1 } \biggr ) \big| a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad = { \mathbb{e}}[\exp(i \tau \tilde{t}_d ( \pi ) ) | a_d^c , \hat{\mathbf{w}}_0^d = \mathbf{x}^d].\nonumber\end{aligned}\ ] ] the lemma follows since as .we shall show that as using chebyshev s inequality in lemma [ lem318 ] .we require preliminary results concerning , with the key results given in lemma [ lem317 ] .first , however , we introduce useful upper and lower bounds for which allow us to exploit lemma [ lem311 ] and prove uniform integrability . for , and ,let with .for and , let then for all , [ lem314 ] for any , any sequence of such that and any sequence of positive integers satisfying \leq k_d \leq[d^\delta] ] , we have that & \rightarrow&{\mathbb{e } } [ \check{\nu}_n ( \mathbf{s}_n)^m ] \qquad\mbox{as } { d \rightarrow\infty } , \\ { \mathbb{e } } [ \hat{\nu}_n ( \tilde{\mathbf{s}}_n^d ( \mathbf{x}^d ; k_d))^m ] & \rightarrow&{\mathbb{e } } [ \hat{\nu}_n^m ( \mathbf{s}_n)^m ] \qquad\mbox{as } { d \rightarrow\infty}.\end{aligned}\ ] ] by , theorem 29.2 , and lemma [ lem311 ] the lemma follows since ( [ eqnu3 ] ) and lemma [ lem314 ] ensure the uniform integrability of the left - hand sides of ( [ eq315a ] ) and ( [ eq315b ] ). [ lem316 ] for any sequence such that and sequence of positive integers satisfying \leq k_d \leq[d^\delta] ] and ] , and using ( [ eqnu3 ] ) , lemma [ lem314 ] and markov s inequality , it is straightforward to show that for any , there exists such that therefore it follows from lemma [ lem316 ] that , for any sequence such that , \nonumber\\[-2pt ] & & \hspace*{47pt}\qquad{}- { \mathbb{e } } [ \omega_d ( \hat{\mathbf{w}}_{j_d + k_d}^d)^{-1 } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d ] \ } | \hat{\mathbf{w}}_0^d = \mathbf{x}^d \\[-2pt ] & & \qquad { \stackrel{p}{\longrightarrow}}0\qquad \mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] the uniform integrability of the left - hand side of ( [ eq317d ] ) follows from ( [ eqnu3 ] ) and lemma [ lem314 ] .hence ( [ eq317a ] ) follows .it is straightforward to show that , { \mathbb{e } } [ \hat{\nu}_n ( \mathbf{s}_n)^2 ] \rightarrow \exp(f^\ast l ( 4 \log2 - 3/2)) ] and let } \sum_{j= [ d^\beta]}^{[\pi d^\delta-1 ] } \omega_d ( \hat{\mathbf{w}}_j^d)^{-1} ] . by theorem [ lem321 ] , ( [ eqss47 ] ) , as and conditional upon , d^\gamma}{[d^\delta]} ] , for any , for all sufficiently large .the lemma follows , since ( [ eq319b ] ) ensures that the right - hand side of ( [ eq319c ] ) converges to 0 as .from appendix [ secpd ] , we have that for any sequence , such that , latexmath:[ ] and let ] , so )\qquad \mbox{as }.\ ] ] therefore it follows that with the lemma following from ( [ eq40b ] ) and ( [ eq40e ] ) by the triangle inequality .[ lem41 ] for and , where as .for , for , fix and suppose that . 
then \nonumber\\[-4pt]\\[-12pt ] & = & \frac{d^2}{j_d ( \mathbf{x}^d ) } { \mathbb{e}}\biggl [ \bigl(h ( \mathbf{x}^d + \sigma_d \mathbf{z}^d ) - h ( \mathbf{x}^d)\bigr ) \biggl\ { 1 \wedge \frac{\pi_d ( \mathbf{x}^d + \sigma_d \mathbf{z}^d)}{\pi_d ( \mathbf{x}^d ) } \biggr\ } \biggr].\nonumber\end{aligned}\ ] ] the right - hand side of ( [ eq41ex ] ) is familiar in that it is the generator of the rwm - algorithm divided by the acceptance probability ; see , for example , , page 113 .first , note that using ( [ eqn4b ] ) , ( [ eqn4c ] ) and noting that , we have that \nonumber\\ & = & \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \sigma_d { \mathbb{e}}[z_1 ] h^\prime(x_1)\nonumber\\ & & { } + \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \frac{\sigma_d^2}{2 } { \mathbb{e}}[z_1 ^ 2 ] h^ { \prime\prime } ( x_1 ) \nonumber\\ & & { } + \frac{d^2 j_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \frac{\sigma_d^2}{2 } { \mathbb{e}}[z_1 ^ 2 \{h^ { \prime\prime } ( x_1 + \psi_1^d ) - h''(x_1 ) \ } ] \\ & & { } + \frac{d^2 \tilde{j}_d^0 ( \mathbf{x}^d)}{j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } \sigma_d^2 g^\prime(x_1 ) h^\prime(x_1 ) { \mathbb{e}}[z_1 ^ 2]\nonumber\\ & & { } + \frac{d^2 } { j_d^0 ( \mathbf{x}^d ) + o ( \sigma_d^2 ) } o ( \sigma_d^3 ) .\nonumber\end{aligned}\ ] ] the first term on the right - hand side of ( [ eq41eb ] ) is 0 . since , by the continuous mapping theorem , as and then since is bounded the third term on the right - hand side of ( [ eq41eb ] ) converges to 0 as . for , , andso , the right - hand side of ( [ eq41eb ] ) equals where as .thus ( [ eq41fx ] ) is proved .the proof of ( [ eq41fa ] ) follows straightforwardly using taylor series expansions since . since , an immediate consequence of lemma [ lem41 ] is that , there exists such that [ lem42 ] for any sequence of positive integers satisfying \leq k_d \leq[d^\delta] ] , . by ( [ emainx1 ] ) , as .thus the latter term on the right - hand side of ( [ eq42ab ] ) converges to 0 as .now \nonumber\\ & & \qquad= { \mathbb{p}}(\hat{x}_{k_d,1}^d \notin r_d^l | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d ) \nonumber\\ & & \qquad\quad{}\times{\mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l ] \\ & & \qquad\quad { } + { \mathbb{p}}(\hat{x}_{k_d,1}^d \in r_d^l | \hat{\mathbf { x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d )\nonumber\\ & & \qquad\quad\hspace*{11pt}{}\times{\mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \in r_d^l ] .\nonumber\end{aligned}\ ] ] consider first the latter term on the right - hand side of ( [ eq42ac ] ) . 
by lemma [ lem41 ] , ( [ eq41fa ] ) , \leq\tfrac{3}{2 } l^2 h^\ast_2.\ ] ] note that \\[-8pt ] & \leq & \frac{{\mathbb{p}}(\hat{x}_{k_d,1}^d \in r_d^l | \hat{\mathbf{x}}_0^d = \mathbf{x}^d)}{{\mathbb{p}}({\hat{\mathbf{x}}}_{k_d}^d \in f_d |\hat{\mathbf{x}}_0^d = \mathbf{x}^d)}.\nonumber\end{aligned}\ ] ] by ( [ emainx1 ] ) , for , as .use corollary [ lem35 ] and lemma [ lem38a ] to show that as .hence , the right - hand side of ( [ eq42ad ] ) converges to 0 as and consequently the latter term on the right - hand side of ( [ eq42ac ] ) converges to 0 as .it follows from the above arguments that also it follows from ( [ eqn4d ] ) that there exists such that \leq k.\ ] ] therefore , it is straightforward using ( [ eq42ab ] ) , ( [ eq42ac ] ) and the triangle inequality to show that \nonumber\\ & & \qquad\quad\hspace*{-8.6pt}{}- { \mathbb{e } } [ \hat{g}_d h ( { \hat{\mathbf{x}}}_{k_d}^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l ] \bigr|\\ & & \qquad \rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] by lemma [ lem41 ] , ( [ eq41fx ] ) , there exists as , such that \bigr| \nonumber\\[-1pt ] & & \qquad\leq\frac{l^2}{3 } \sup_{0 \leq y \leq1 } | g^\prime(y ) h^\prime(y)| \nonumber\\[-8.5pt]\\[-8.5pt ] & & \qquad\quad{}\times\sup_{\mathbf{x}^d \in\tilde{f}_d } { \mathbb{e}}\biggl [ \biggl| \frac{\tilde{j}_d^0 ( { \hat{\mathbf{x}}}_{k_d}^d)}{j_d^0 ( { \hat{\mathbf{x}}}_{k_d}^d ) } - \frac{1}{2 } \biggr| \big| \hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_{k_d}^d \in f_d , \hat{x}_{k_d,1}^d \notin r_d^l \biggr ] + \varepsilon_d^1 \nonumber\\[-1pt ] & & \qquad\leq\frac{l^2}{3 } g^\ast h^\ast_1 \sup_{\mathbf{y}^d \in f_d } \biggl| \frac{\tilde{j}_d^0 ( \mathbf{y}^d)}{j_d^0 ( \mathbf{y}^d ) } - \frac{1}{2 } \biggr| + \varepsilon_d^1.\nonumber\end{aligned}\ ] ] by lemma [ lem40 ] , the right - hand side of ( [ eq42ah ] ) converges to 0 as . using the triangle inequality , the lemma follows by showing that - \hat{g } h ( x_1 ) \bigr|\nonumber\\[-8.5pt]\\[-8.5pt ] & & \qquad\rightarrow0 \qquad\mbox{as } { d \rightarrow\infty}.\nonumber\end{aligned}\ ] ] note that , and so , ( [ eq42ai ] ) follows since is continuous .we are in position to prove ( [ eq41e ] ) .[ lem43 ] for any , since ( [ eq43a ] ) trivially holds for , we assume that . 
for all sufficiently large , by the triangle inequality , & & \qquad= \biggl| \frac{1}{[d^\delta ] } \sum_{j=0}^{[\pi d^\delta-1 ] } { \mathbb{e } }[ \hat{g}_d h ( { \hat{\mathbf{x}}}_j^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \pi\hat{g } h ( x_1 ) \biggr| \nonumber\\[-1pt ] & & \qquad\leq\biggl| \frac{1}{[d^\delta ] } \sum_{j=0}^{[d^\beta ] -1 } { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d ] \biggr| \\[-1pt ] & & \qquad\quad { } + \frac{1}{[d^\delta ] } \sum_{j=[d^\beta]}^{[\pi d^\delta-1 ] } \bigl| { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \hat{g } h ( x_1 ) \bigr|\nonumber\\[-1pt ] & & \qquad\quad { } + \biggl ( \pi- \frac{[\pi d^\delta ] - [ d^\beta]}{[d^\delta ] } \biggr ) \hat{g } h ( x_1).\nonumber\end{aligned}\ ] ] since \nonumber\\ & & \qquad = { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_j^d \in f_d ] { \mathbb{p}}({\hat{\mathbf{x}}}_j^d \in f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ) \\ & & \qquad\quad { } + { \mathbb{e } } [ \hat{g}_d h ( \hat{\mathbf{x}}_j^d ) |\hat{\mathbf{x}}_0^d = \mathbf{x}^d , { \hat{\mathbf{x}}}_j^d \notin f_d ] { \mathbb{p}}({\hat{\mathbf{x}}}_j^d \notin f_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d),\nonumber\end{aligned}\ ] ] it is straightforward , following a similar argument to the proof of lemma [ lem42 ] , ( [ eq42af ] ) , to show that there exists such that , for all ] . by lemma [ lem42 ] the supremum over of the second term on the right - hand side of ( [ eq43b ] ) converges to 0 as and the lemma follows .[ lem44 ] fix and let \varepsilon , 1\} ] , it follows from ( [ eq43d ] ) that the right - hand side of ( [ eq44d ] ) is bounded by , where is defined in lemma [ lem43 ] .let .note that since , we have that . therefore it follows from ( [ eq44c ] ) that for all sufficiently large , latexmath:[\[\label{eq44e } \sup_{\mathbf{x}^d \in\tilde{f}_d } and , the lemma follows .finally we are in position to prove ( [ eqn115 ] ) , and hence complete the proof of theorem [ main ] .[ lem45 ] note that is given by ( [ eqn113 ] ) and . 
therefore by the triangle inequality , } { \mathbb{e}}\bigl [ h \bigl(\hat{\mathbf{x}}_{[p_d d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \bigr ] - \exp(- l f^\ast/2 ) \hat{g } h ( x_1 ) \biggr| \nonumber\\ & & \qquad\leq\sup_{\mathbf{x}^d \in\tilde{f}_d } \biggl| { \mathbb{e}}\biggl [ \frac{d^2}{[d^\delta ] } \bigl ( h \bigl(\hat{\mathbf{x}}_{[p_d d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) \bigr ) - p_d \hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ] \biggr| \nonumber\\ & & \qquad\quad { } + \sup_{\mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d \hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp ( - l f^\ast/2 ) \hat{g } h ( x_1 ) \bigr| \nonumber\\[-8pt]\\[-8pt ] & & \qquad\leq\sup_{0 \leq\pi\leq1 } \sup_{\mathbf{x}^d \in \tilde{f}_d }\biggl| { \mathbb{e}}\biggl [ \frac{d^2}{[d^\delta ] } \bigl ( h \bigl(\hat{\mathbf{x}}_{[\pi d^\delta]}^d\bigr ) - h ( \hat{\mathbf{x}}_0^d ) \bigr ) - \pi\hat{g } h ( x_1 ) | \hat{\mathbf{x}}_0^d = \mathbf{x}^d \biggr ] \biggr| \nonumber\\ & & \qquad\quad{}+ \sup_{\mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp(- l f^\ast/2 ) \bigr| \sup_{0 \leqy \leq1 } |\hat{g } h ( y)| \nonumber\\ & & \qquad\leq\sup_{0 \leq\pi\leq1 } \sup_{\mathbf{x}^d \in \tilde{f}_d } | \hat{g}_d^{\delta , \pi } h ( \mathbf{x}^d ) - \pi \hat{g } h ( x_1 ) | \nonumber\\ & & \qquad\quad { } + \sup _ { \mathbf{x}^d \in\tilde{f}_d } \bigl| { \mathbb{e } } [ p_d | \hat{\mathbf{x}}_0^d = \mathbf{x}^d ] - \exp(- l f^\ast/2 ) \bigr|\sup_{0 \leq y \leq1 } |\hat{g } h ( y)|.\nonumber\end{aligned}\ ] ] by corollary [ lem44 ] , the first term on the right - hand side of ( [ eq45ba ] ) converges to 0 as . by theorem [ lem319 ] , for any sequence such that , latexmath:[$p_d as .hence the latter term on the right - hand side of ( [ eq45ba ] ) converges to 0 as , since implies that .we thank the anonymous referees for their helpful comments which have improved the presentation of the paper .
we consider the optimal scaling problem for high - dimensional random walk metropolis ( rwm ) algorithms where the target distribution has a discontinuous probability density function . almost all previous analysis has focused upon continuous target densities . the main result is a weak convergence result as the dimensionality of the target densities converges to $\infty$ . in particular , when the proposal variance is scaled by $d^{-2}$ , the sequence of stochastic processes formed by the first component of each markov chain converges to an appropriate langevin diffusion process . therefore optimizing the efficiency of the rwm algorithm is equivalent to maximizing the speed of the limiting diffusion . this leads to an asymptotically optimal acceptance rate of $e^{-2}$ under quite general conditions . the results have major practical implications for the implementation of rwm algorithms by highlighting the detrimental effect of choosing rwm algorithms over metropolis - within - gibbs algorithms .
the massive quantities of data being generated every day , and the ease of collaborative data analysis and data science , have led to severe issues in management and retrieval of datasets .we motivate our work with two concrete example scenarios . * [ intermediate result datasets ] for most organizations dealing with large volumes of diverse datasets , a common scenario is that many datasets are repeatedly analyzed in slightly different ways , with the intermediate results stored for future use .often , we find that the intermediate results are the same across many pipelines ( e.g. , a _ pagerank _ computation on the web graph is often part of a multi - step workflow ) .oftentimes , the datasets being analyzed might be slightly different ( e.g. , results of simple transformations or cleaning operations , or small updates ) , but are still stored in their entirety .there is currently no way of reducing the amount of stored data in such a scenario : there is massive redundancy and duplication ( this was corroborated by our discussions with a large software company ) , and often the computation required to recompute a given version from another one is small enough to not merit storing a new version . * [ data science dataset versions ] in our conversations with a computational biology group , we found that every time a data scientist wishes to work on a dataset , they make a private copy , perform modifications via cleansing , normalization , adding new fields or rows , and then store these modified versions back to a folder shared across the entire group .once again there is massive redundancy and duplication across these copies , and there is a need to minimize these storage costs while keeping these versions easily retrievable . in such scenarios and many others , it is essential to keep track of versions of datasets and be able to recreate them on demand ; and at the same time , it is essential to minimize the storage costs by reducing redundancy and duplication . the ability to manage a large number of datasets , their versions , and derived datasets , is a key foundational piece of a system we are building for facilitating collaborative data science , called datahub . datahub enables users to keep track of datasets and their versions , represented in the form of a directed _ version graph _ that encodes derivation relationships , and to retrieve one or more of the versions for analysis . in this paper , we focus on the problem of trading off storage costs and recreation costs in a principled fashion . specifically , the problem we address in this paper is : given a collection of datasets as well as ( possibly ) a directed version graph connecting them , minimize the overall storage for storing the datasets and the recreation costs for retrieving them .the two goals conflict with each other : minimizing storage cost typically leads to increased recreation costs and vice versa .we illustrate this trade - off via an example .
[ figure [ fig : version_graph ] : ( i ) a version graph , where each annotation indicates a storage cost and a recreation cost ; ( ii ) [ fig : extreme1 ] , ( iii ) [ fig : extreme2 ] and ( iv ) [ fig : storage_graph ] show three possible storage graphs . ] figure [ fig : version_graph ] ( i ) displays a version graph , indicating the derivation relationships among 5 versions . let be the original dataset . say there are two teams collaborating on this dataset : team 1 modifies to derive , while team 2 modifies to derive . then , and are merged and give . as presented in figure [ fig : version_graph ] , is associated with , indicating that its storage cost and recreation cost are both when stored in its entirety ( we note that these two are typically measured in different units ; see the second challenge below ) ; the edge is annotated with , where is the storage cost for when stored as the modification from ( we call this the _ delta _ of from ) and is the recreation cost for given , i.e. , the time taken to recreate given that has already been recreated . one naive solution to store these datasets would be to store all of them in their entirety ( figure [ fig : version_graph ] ( ii ) ) . in this case , each version can be retrieved directly but the total storage cost is rather large . at the other extreme , only one version is stored in its entirety while other versions are stored as modifications or deltas to that version , as shown in figure [ fig : version_graph ] ( iii ) .the total storage cost here is much smaller , but the recreation cost is large for and . for instance , the path needs to be accessed in order to retrieve , and the recreation cost is the sum of the costs along that path .figure [ fig : version_graph ] ( iv ) shows an intermediate solution that trades off increased storage for reduced recreation costs for some versions .here we store two versions in their entirety and store modifications to other versions .this solution exhibits higher storage cost than solution ( iii ) but lower than ( ii ) , and still results in significantly reduced retrieval costs for several versions over ( iii ) . despite the fundamental nature of the storage - retrieval problem , there is surprisingly little prior work on formally analyzing this trade - off and on designing techniques for identifying effective storage solutions for a given collection of datasets .version control systems ( vcs ) like git , svn , or mercurial , despite their popularity , use fairly simple algorithms underneath , and are known to have significant limitations when managing large datasets .much of the prior work in the literature focuses on a linear chain of versions , or on minimizing the storage cost while ignoring the recreation cost ( we discuss the related work in more detail in section [ sec : related ] ) . in this paper , we initiate a formal study of the problem of deciding how to jointly store a collection of dataset versions , provided along with a version or derivation graph .
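to make the storage - versus - recreation accounting concrete , here is a minimal sketch of how a storage graph can be represented and costed ; the version names mirror the example , but all numeric costs are hypothetical stand - ins , since the figure's concrete annotations are not reproduced here :

....
# version -> (storage cost, recreation cost) for fully materialized versions,
# child -> (parent, storage cost, recreation cost) for delta-compressed ones.
# all numbers are hypothetical stand-ins for the annotations in the figure.
materialized = {"v1": (10000, 10000)}
deltas = {
    "v2": ("v1", 200, 250),
    "v3": ("v1", 180, 230),
    "v4": ("v2", 120, 150),
    "v5": ("v4", 90, 110),
}

def total_storage():
    # everything stored, whether materialized or as a delta
    return (sum(s for s, _ in materialized.values())
            + sum(s for _, s, _ in deltas.values()))

def recreation_cost(version):
    # walk the delta chain back to a materialized ancestor, summing the
    # per-edge recreation costs along the way
    cost = 0
    while version not in materialized:
        parent, _, r = deltas[version]
        cost += r
        version = parent
    return cost + materialized[version][1]

print(total_storage())          # storage for this particular storage graph
print(recreation_cost("v5"))    # cost of the chain v1 -> v2 -> v4 -> v5
....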
aside from being able to handle the scale , both in terms of dataset sizes and the number of versions , there are several other considerations that make this problem challenging . * different application scenarios and constraints lead to many variations on the basic theme of balancing storage and recreation cost ( see table [ table : prob ] ) .the variations arise both out of different ways to reconcile the conflicting optimization goals , as well as because of the variations in how the differences between versions are stored and how versions are reconstructed .for example , some mechanisms for constructing differences between versions lead to symmetric differences ( either version can be recreated from the other version ) ; we call this the _ undirected _ case . the scenario with asymmetric , one - way differences is referred to as the _ directed _ case .* similarly , the relationship between storage and recreation costs leads to significant variations across different settings . in some cases the recreation cost is proportional to the storage cost ( e.g. , if the system bottleneck lies in the i / o cost or network communication ) , but that may not be true when the system bottleneck is cpu computation .this is especially true for sophisticated differencing mechanisms where a compact derivation procedure might be known to generate one dataset from another . * another critical issue is that computing deltas for all pairs of versions is typically not feasible .relying purely on the version graph may not be sufficient and significant redundancies across datasets may be missed . * further , in many cases , we may have information about relative _ access frequencies _ indicating the relative likelihood of retrieving different datasets . several baseline algorithms for solving this problem can not be easily adapted to incorporate such access frequencies .we note that the problem described thus far is inherently `` online '' in that new datasets and versions are typically being created continuously and are being added to the system . in this paper , we focus on the static , off - line version of this problem and focus on formally and completely understanding that version .we plan to address the online version of the problem in the future .the key contributions of this work are as follows .* we formally define and analyze the dataset versioning problem and consider several variations of the problem that trade off storage cost and recreation cost in different manners , under different assumptions about the differencing mechanisms and recreation costs ( section [ sec : proboverview ] ) .table [ table : prob ] summarizes the problems and our results .we show that most of the variations of this problem are np - hard ( section [ sec : complexity ] ) .* we provide two light - weight heuristics : one when there is a constraint on average recreation cost , and one when there is a constraint on maximum recreation cost ; we also show how we can adapt a prior solution for balancing minimum - spanning trees and shortest path trees for undirected graphs ( section [ sec : algorithms ] ) ; a sketch of the two extreme solutions that such a balancing interpolates between appears after this list .* we have built a prototype system where we implement the proposed algorithms .we present an extensive experimental evaluation of these algorithms over several synthetic and real - world workloads , demonstrating the effectiveness of our algorithms at handling large problem sizes ( section [ sec : experiments ] ) .
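as a concrete illustration of the two extremes of this trade - off in the undirected case , the following sketch computes a minimum spanning tree ( minimizing total storage ) and a shortest - path tree ( minimizing recreation costs from a materialized root ) over the same toy graph ; it assumes recreation cost equals the delta's storage cost , and it is not the balancing algorithm evaluated in the paper :

....
import heapq

def prim_mst(adj, root):
    # minimum spanning tree: minimizes total storage when each undirected
    # edge weight is the size of the delta between two versions
    parent, seen = {root: None}, {root}
    pq = [(w, root, v) for v, w in adj[root]]
    heapq.heapify(pq)
    while pq:
        w, u, v = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v)
        parent[v] = u
        for x, wx in adj[v]:
            if x not in seen:
                heapq.heappush(pq, (wx, v, x))
    return parent

def dijkstra_spt(adj, root):
    # shortest-path tree: minimizes every version's recreation cost from the
    # materialized root, typically at a higher total storage cost
    dist, parent = {root: 0}, {root: None}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return parent, dist

# toy undirected graph: version -> [(neighbor, delta cost)]; hypothetical costs
adj = {
    "v1": [("v2", 200), ("v3", 180)],
    "v2": [("v1", 200), ("v3", 400), ("v4", 120)],
    "v3": [("v1", 180), ("v2", 400), ("v4", 500)],
    "v4": [("v2", 120), ("v3", 500)],
}
print(prim_mst(adj, "v1"))       # cheapest to store
print(dijkstra_spt(adj, "v1"))   # cheapest to retrieve
....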
[ table [ table : ilp ] omitted . ] perhaps the most closely related prior work is source code version systems like git , mercurial , svn , and others , that are widely used for managing source code repositories . despite their popularity , these systems largely use fairly simple algorithms underneath that are optimized to work with modest - sized source code files , and their on - disk structures are optimized to work with line - based diffs .these systems are known to have significant limitations when handling large files and large numbers of versions . as a result , a variety of extensions like git - annex , git - bigfiles , etc ., have been developed to make them work reasonably well with large files .there is much prior work in the temporal databases literature on managing a linear chain of versions , and retrieving a version as of a specific time point ( called _ snapshot _ queries ) . proposed an archiving technique where all versions of the data are merged into one hierarchy .an element appearing in multiple versions is stored only once along with a timestamp .this technique of storing versions is in contrast with techniques where retrieval of certain versions may require undoing the changes ( unrolling the deltas ) .the hierarchical data and the resulting archive is represented in xml format , which enables use of xml tools such as an xml compressor for compressing the archive .it was not , however , a full - fledged version control system representing an arbitrary graph of versions ; rather it focused on algorithms for compactly encoding a linear chain of versions .snapshot queries have recently also been studied in the context of array databases and graph databases . seering et al . considered the problem of storing an arbitrary tree of versions in the context of scientific databases ; their proposed techniques are based on finding a minimum spanning tree ( as we discussed earlier , that solution represents one extreme in the spectrum of solutions that needs to be considered ) .they also proposed several heuristics for choosing which versions to materialize given the distribution of access frequencies to historical versions .several databases support `` time travel '' features ( e.g. , oracle flashback , postgres ) .however , those do not allow for branching trees of versions . articulates a similar vision to our overall datahub vision ; however , they do not propose formalisms or algorithms to solve the underlying data management challenges .in addition , the schema of tables encoded with flashback can not change .there is also much prior work on compactly encoding differences between two files or strings in order to reduce communication or storage costs .in addition to standard utilities like unix diff , many sophisticated techniques have been proposed for computing differences or edit sequences between two files ( e.g. , xdelta , vdelta , vcdiff , zdelta ) . that work is largely orthogonal and complementary to our work .many prior efforts have looked at the problem of minimizing the total storage cost for storing a collection of related files ( i.e. , problem 1 ) .these works do not typically consider the recreation cost or the tradeoffs between the two .quinlan et al . propose an archival `` deduplication '' storage system that identifies duplicate blocks across files and only stores them once for reducing storage requirements .zhu et al . present several optimizations on the basic theme .douglis et al .
present several techniques to identify pairs of files that could be efficiently stored using delta compression even if there is no explicit derivation information known about the two files ; similar techniques could be used to better identify which entries of the matrices to reveal in our scenario .ouyang et al . studied the problem of compressing a large collection of related files by performing a sequence of pairwise delta compressions .they proposed a suite of text clustering techniques to prune the graph of all pairwise delta encodings and find the optimal branching ( i.e. , mca ) that minimizes the total weight .burns and long present a technique for in - place re - construction of delta - compressed files using a graph - theoretic approach . that work could be incorporated into our overall framework to reduce the memory requirements during reconstruction .similar dictionary - based reference encoding techniques have been used by to efficiently represent a target web page in terms of additions / modifications to a small number of reference web pages .kulkarni et al . present a more general technique that combines several different techniques to identify similar blocks among a collection of files , and uses delta compression to reduce the total storage cost ( ignoring the recreation costs ) .we refer the reader to a recent survey for a more comprehensive coverage of this line of work .large datasets and collaborative and iterative analysis are becoming the norm in many application domains ; however , we lack the data management infrastructure to efficiently manage such datasets , their versions over time , and derived data products . given the high overlap and duplication among the datasets , it is attractive to consider using delta compression to store the datasets in a compact manner , where some datasets or versions are stored as modifications from other datasets ; such delta compression , however , leads to higher latencies while retrieving specific datasets .
in this paper , we studied the trade - off between the storage and recreation costs in a principled manner , by formulating several optimization problems that trade off these two in different ways and showing that most variations are np - hard .we also presented several efficient algorithms that are effective at exploring this trade - off , and we presented an extensive experimental evaluation using a prototype version management system that we have built .there are many interesting and rich avenues for future work that we are planning to pursue .in particular , we plan to develop online algorithms for making the optimization decisions as new datasets or versions are being created , and also adaptive algorithms that reevaluate the optimization decisions based on changing workload information .we also plan to explore the challenges in extending our work to a distributed and decentralized setting .git uses delta compression to reduce the amount of storage required to store a large number of files ( objects ) that contain duplicated information .however , git's algorithm for doing so is not clearly described anywhere .an old discussion with linus has a sketch of the algorithm .however , there have been several changes to the heuristics used that don't appear to be documented anywhere .here we focus on `` repack '' , where the decisions are made for a large group of objects .however , the same algorithm appears to be used for normal commits as well .most of the algorithm code is in file : ` builtin / pack - objects.c ` . note the name hash is not a true hash ; the ` pack_name_hash ( ) ` function ( ` pack - objects.h ` ) simply creates a number from the last 16 non - white space characters , with the last characters counting the most ( so all files with the same suffix , e.g. , ` .c ` , will sort together ) .* step 2 : * the next key function is ` ll_find_deltas ( ) ` , which goes over the files in the sorted order .it maintains a list of objects ( = window size , default 10 ) at all times . for the next object , say , it finds the delta between and each of the objects , say , in the window ; it chooses the object with the minimum value of : ` delta(b , o ) / ( max_depth - depth of b ) ` where ` max_depth ` is a parameter ( default 50 ) , and depth of b refers to the length of the delta chain between a root and b. the original algorithm appears to have only used ` delta(b , o ) ` to make the decision , but the `` depth bias '' ( denominator ) was added at a later point to prefer slightly larger deltas with smaller delta chains .the key lines for the above part : * line 1812 ( check each object in the window ) : + .... ret = try_delta(n , m , max_depth , & mem_usage ) ; .... * lines 1617 - 1618 ( depth bias ) : + .... max_size = ( uint64_t)max_size * ( max_depth - src->depth ) / ( max_depth - ref_depth + 1 ) ; .... * line 1678 ( compute delta and compare size ) : + .... delta_buf = create_delta(src->index , trg->data , trg_size , & delta_size , max_size ) ; .... ` create_delta ( ) ` returns non - null only if the new delta being tried is smaller than the current delta ( modulo depth bias ) , specifically , only if the size of the new delta is less than the ` max_size ` argument .note : lines 1682 - 1688 appear redundant given the depth bias calculations .* step 3 : *
originally the window was just the last objects ( up to the window size ) before the object under consideration . however , the current algorithm shuffles the objects in the window based on the choices made . specifically , let b_1 , ... , b_10 be the current objects in the window , and let the object chosen to delta against for the new object o be b_i . then b_i would be moved to the end of the list , so the new list would be b_1 , ... , b_{i-1} , b_{i+1} , ... , b_10 , b_i . small detail : the list is actually maintained as a circular buffer so the list doesn't have to be physically `` shifted '' ( moving to the end does involve a shift though ) . relevant code here is lines 1854 - 1861 . finally we note that git never considers / computes / stores a delta between two objects of different types , and it does the above in a multi - threaded fashion , by partitioning the work among a given number of threads . each of the threads operates independently of the others .
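the following is a simplified python sketch of the window heuristic described in steps 2 and 3 ; ` delta_size ( ) ` is a hypothetical stand - in for git's ` create_delta ( ) ` , and type checks , size limits and threading are omitted :

....
from collections import deque
from dataclasses import dataclass

MAX_DEPTH, WINDOW = 50, 10

@dataclass
class Obj:
    data: bytes
    depth: int = 0

def delta_size(base, target):
    # hypothetical placeholder: the size (in bytes) of a delta encoding
    # `target` against `base`; real git computes an actual delta here
    return abs(len(target.data) - len(base.data)) + 16

def pick_base(window, obj):
    # step 2: score every candidate base in the window with the depth bias
    # and keep the one minimizing delta_size / (MAX_DEPTH - depth)
    best, best_score = None, float("inf")
    for base in window:
        if base.depth >= MAX_DEPTH - 1:
            continue                  # delta chain would get too deep
        score = delta_size(base, obj) / (MAX_DEPTH - base.depth)
        if score < best_score:
            best, best_score = base, score
    if best is not None:
        obj.depth = best.depth + 1
        window.remove(best)
        window.append(best)           # step 3: chosen base moves to the end
    window.append(obj)                # the new object enters the window
    while len(window) > WINDOW:
        window.popleft()
    return best

window = deque()
for o in [Obj(b"a" * n) for n in (100, 110, 400, 120)]:  # assume sorted order
    pick_base(window, o)
....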
the relative ease of collaborative data science and analysis has led to a proliferation of many thousands or millions of _ versions _ of the same datasets in many scientific and commercial domains , acquired or constructed at various stages of data analysis across many users , and often over long periods of time . managing , storing , and recreating these dataset versions is a non - trivial task . the fundamental challenge here is the _ storage - recreation trade - off _ : the more storage we use , the faster it is to recreate or retrieve versions , while the less storage we use , the slower it is to recreate or retrieve versions . despite the fundamental nature of this problem , there has been a surprisingly little amount of work on it . in this paper , we study this trade - off in a principled manner : we formulate six problems under various settings , trading off these quantities in various ways , demonstrate that most of the problems are intractable , and propose a suite of inexpensive heuristics drawing from techniques in the delay - constrained scheduling and spanning tree literature to solve these problems . we have built a prototype version management system that aims to serve as a foundation for our datahub system for facilitating collaborative data science . we demonstrate , via extensive experiments , that our proposed heuristics provide efficient solutions in practical dataset versioning scenarios .
entanglement is a fundamental property of quantum systems and a basic resource for quantum information and computation . however , its detection and quantification are solved only in some simple cases , like that of two - qubit systems . discriminating separability from entanglement for higher dimensional non - pure two - particle states seems to be np - hard . finding good measures of entanglement is a related nontrivial problem , even for the next simplest possibilities : non - pure two - particle states in arbitrary dimensions or three - qubit non - pure states . for three - qubit pure states some authors divide the infinite set of equivalence classes under local unitary ( lu ) transformations into subsets characterized by particular values of some lu invariants : these invariants are not directly related with entanglement measures , and therefore this is not exactly a classification of entanglement types . other authors classify three - qubit states into equivalence classes under stochastic local operations and classical communication ( slocc ) ; some of the types considered in other classifications , like the _ star - shaped states _ introduced in , do not have their own class in this scheme . several entanglement measures for pure states of three - particle systems have been proposed ; as we will discuss in [ sec:3 ] , these measures do not distinguish adequately between fully entangled and separable states . extensions to mixed states are even more problematic ; we will discuss a measure that improves on these results . the paper is organized as follows . in [ sec:1 ] we summarize some known results for the entanglement of two and three - qubit systems , including definitions and notation that we will use in the following sections . in [ sec:2 ] we introduce a classification for arbitrary three - qubit states in terms of their three - qubit and reduced two - qubit entanglements , with a graphic representation of the different types for pure states . in [ sec:3 ] we comment on several previous measures of tripartite entanglement , and introduce a multiplicative generalization of two - particle _ negativity _ . in [ sec:4 ] we apply a generalized schmidt decomposition ( gsd ) to pure three - qubit states and specify the form of the states in each of our classes ; this could be used to give a physical interpretation to the abstract classes of . in [ sec:5 ] , we calculate the tripartite negativity of some families of mixed states and compare our results with previously published ones . we conclude in [ sec:6 ] with a summary of our results . arbitrary states $|\psi\rangle = \sum_{i , j = 0}^{1 } c_{ij } |i\rangle_a |j\rangle_b$ of pure two - qubit systems , $\{ |i\rangle_a |j\rangle_b \}$ being a basis of the hilbert space for qubits a and b , can always be converted into $\lambda_0 |0\rangle_a |0\rangle_b + \lambda_1 |1\rangle_a |1\rangle_b$ , $\lambda_i \geq 0$ , by means of lu transformations $u_a \otimes u_b$ , $u_a$ and $u_b$ being unitary operators acting on qubits a and b respectively . each pair $( \lambda_0 , \lambda_1 )$ of this schmidt decomposition defines a lu equivalence class ; this infinite set can be divided in two big subsets : _ separable states _ if one of the $\lambda_i$ is zero , and _ entangled states _ otherwise ; generalization to pure states in arbitrary dimensions is easy . for pure states , separability equals factorizability .
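as a small illustration of the schmidt decomposition just described , the schmidt coefficients of a two - qubit pure state can be obtained as the singular values of its coefficient matrix ; a minimal sketch ( the example states are our own choices ) :

....
import numpy as np

def schmidt_coefficients(psi):
    # the schmidt coefficients of a two-qubit pure state are the singular
    # values of its 2x2 coefficient matrix c[i, j] = <i_a, j_b | psi>
    return np.linalg.svd(psi.reshape(2, 2), compute_uv=False)

prod = np.kron([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)   # |0>(|0>+|1>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00>+|11>)/sqrt(2)
print(schmidt_coefficients(prod))   # one nonzero coefficient: separable
print(schmidt_coefficients(bell))   # two equal coefficients: entangled
....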
for the non - pure case , a state $\rho$ is separable if it can be written as $\rho = \sum_i w_i\, \rho_i^a \otimes \rho_i^b$ , where $\rho_i^a$ and $\rho_i^b$ are state operators of subsystems a and b respectively , $w_i \geq 0$ , $\sum_i w_i = 1$ ; otherwise $\rho$ is entangled . a list of criteria for separability in two - particle systems can be found in . sometimes , separable but not factorizable mixed states are qualified as _ classically correlated _ ; we will not use this distinction , because we are interested only in quantum correlated states . we shall cite only three of the several entanglement measures of pure two - qubit systems : _ von neumann s entropy _ of reduced states , _ wootters concurrence _ , and _ negativity _ . the von neumann entropy of a state $\rho$ is defined in information theory as $s ( \rho ) = - \sum_i \lambda_i \log_2 \lambda_i$ , $\lambda_i$ being the eigenvalues of $\rho$ . the reduced states are $\rho^a = \mathrm{tr}_b\, \rho$ and $\rho^b = \mathrm{tr}_a\, \rho$ . von neumann s entropy of reduced states is the simplest measure of two - particle entanglement for pure states , but its extension to non - pure states discriminates between separable and entangled states only if the _ mutual entropy _ or _ correlation index _ is zero . therefore , it is not a good measure of entanglement for general mixed states . concurrence is defined as $c ( \rho ) = \max \{ 0 , \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4 \}$ , where $\lambda_i$ are the square roots of the eigenvalues of $\rho \tilde{\rho}$ in decreasing order , with $\tilde{\rho } = ( \sigma_y \otimes \sigma_y )\, \rho^{\ast}\, ( \sigma_y \otimes \sigma_y )$ , $\rho^{\ast}$ being the complex conjugate of the state operator . in arbitrary dimensions , simple computable generalizations of concurrence are known only for pure states ; there is no efficient way known to calculate sophisticated generalizations like biconcurrence . the negativity of a bipartite state was introduced in ; we will use the convention of , that is twice the value of the original definition : $n ( \rho ) = - 2 \sum_i \sigma_i ( \rho^{t_a } )$ ( 1 ) , where $\sigma_i ( \rho^{t_a } )$ are the negative eigenvalues of the partial transpose $\rho^{t_a}$ of the total state with respect to the subsystem a , defined as $\langle i_a j_b | \rho^{t_a } | k_a l_b \rangle = \langle k_a j_b | \rho | i_a l_b \rangle$ , a and b denoting the two subsystems . for pure bipartite states of arbitrary dimensions the negativity ( 1 ) is equal to the concurrence . the negativity is not additive and some authors prefer to use the logarithmic negativity , which is additive but not convex . the negativity can be evaluated in the same way for pure and non - pure states in arbitrary dimensions , although there are entangled mixed states with zero negativity in all dimensions except $2 \times 2$ and $2 \times 3$ . no measure discriminating separable from entangled states in the general non - pure case is known . however , non - zero negativity is a sufficient condition for entanglement . for three qubits , there is no ternary schmidt decomposition of pure states in a strict sense , but there is a generalized schmidt decomposition ( gsd ) : arbitrary states can always be converted to states that contain at most five of the eight terms of the hilbert basis , by means of a very simple algorithm . each set of coefficients defines a lu equivalence class ; in [ sec:2 ] and [ sec:4 ] we deal with the problem of dividing the infinite set of lu equivalence classes in big subsets corresponding to the different types of entanglement in our classification . for three - particle systems , a pure state is _ fully separable _ if it can be written as $| \psi \rangle = | \phi^a \rangle \otimes | \phi^b \rangle \otimes | \phi^c \rangle$ , _ biseparable _ if it is not fully separable but can be written as $| \psi \rangle = | \phi^a \rangle \otimes | \phi^{bc} \rangle$ with $| \phi^{bc} \rangle$ entangled ( the index a can denote any of the three subsystems ) , and _ fully inseparable _ otherwise ; in this last case we will say that the state has _ full tripartite entanglement _ . for three - particle systems a non - pure state operator may be fully separable , biseparable or fully inseparable ; biseparable states can be _ simply biseparable _ or _ generalized biseparable _ .
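before the three - qubit case is developed below , here is a minimal numerical sketch of the negativity of equation ( 1 ) , computed from the partial transpose ; the helper assumes the factor - 2 convention used here , and the bell - state check is our own example :

....
import numpy as np

def partial_transpose(rho, dims, sys):
    # transpose the indices of subsystem `sys` of a density matrix acting
    # on a tensor product space with local dimensions `dims`
    n = len(dims)
    r = rho.reshape(list(dims) * 2)
    axes = list(range(2 * n))
    axes[sys], axes[n + sys] = axes[n + sys], axes[sys]
    d = int(np.prod(dims))
    return r.transpose(axes).reshape(d, d)

def negativity(rho, dims, sys=0):
    # n = -2 * (sum of negative eigenvalues of the partial transpose),
    # the factor-2 convention of equation (1)
    eig = np.linalg.eigvalsh(partial_transpose(rho, dims, sys))
    return -2.0 * float(eig[eig < 0].sum())

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(negativity(np.outer(bell, bell), [2, 2]))   # ~1.0 for a bell state
....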
fully separable states can be written as $\rho = \sum_i w_i\, \rho_i^a \otimes \rho_i^b \otimes \rho_i^c$ , with $w_i \geq 0$ and $\sum_i w_i = 1$ . biseparable states are not fully separable but can be written as $\rho = \sum_{\kappa} \sum_i w_{\kappa i}\, \rho_i^{\kappa} \otimes \rho_i^{\bar{\kappa}}$ , where $\kappa$ runs from a to c and $\bar{\kappa}$ denotes the corresponding pair of remaining qubits , at least one $\rho_i^{\bar{\kappa}}$ is entangled , $w_{\kappa i } \geq 0$ , $\sum_{\kappa , i } w_{\kappa i } = 1$ ; simply biseparable states have $w_{\kappa i } \neq 0$ only for a single value of $\kappa$ ( one single qubit is separable from the other two , that are entangled ) ; generalized biseparable states are convex sums of states of the previous kind ( non vanishing coefficients for more than one $\kappa$ ) . fully inseparable states are those not fully separable nor biseparable . not all authors agree with this classification of mixed three - qubit states : for some of them , biseparable states were only those that we call simply biseparable , and generalized biseparable states were included in the fully inseparable class . fully separable states contain no quantum entanglement . simply biseparable states have bipartite entanglement in a single pair of qubits ; they have _ partial bipartite entanglement _ . generalized biseparable states contain bipartite entanglement in more than one pair of qubits ; they have _ distributed bipartite entanglement _ . fully inseparable states have full tripartite entanglement ( true nonclassical 3-particle correlations , in the notation of ) . the characterization of separability or biseparability for non - pure three - particle states , and the measure of their full tripartite entanglement , are nontrivial , even in the simplest case of three qubits . an ideal measure of tripartite entanglement would discriminate fully entangled from fully separable or biseparable states ; no such measure is known yet , even for three qubits . finally , entanglement of the reduced two - particle states will be called the _ reduced entanglement _ of the pair . they show how much two - qubit entanglement remains when the third qubit is not observed . as we will show in the next section , for non - pure states there are unexpected results ; for instance , some simply biseparable mixed states have no reduced entanglement when the separable qubit is traced over . we propose a classification of three - qubit states based on the presence of no entanglement ( full separability ) , bipartite entanglements only ( simple or generalized biseparability ) , or full tripartite entanglement ( true 3-qubit entanglement , full inseparability ) , both for pure and non - pure states , with subtypes based on the number of entangled reduced two - qubit states . reduced , bipartite and full tripartite entanglements have direct physical meaning and are lu invariants : local operations can not create entanglement , and therefore all the invertible local operations ( unitary ones in particular ) must leave these entanglements invariant . thus , a state in any of our subtypes can not be transformed by a lu to a state in any other subtype . this classification is equally valid for pure and non - pure states , although there are several subtypes that exist only for non - pure ones . the practical implementation of the classification is also more difficult for mixed states , since the detection of tripartite entanglement is not completely solved in this case . another difference is that for pure states we can relate these subtypes with the coefficients of a gsd , as we will show in [ sec:4 ] , while no similar decomposition exists for non - pure states . fig.1 summarizes the different types and subtypes in this classification for pure states ; with the same conventions , we could draw a similar graphic for the subtypes that exist only for mixed states ; we omit it for simplicity . [ fig:1 ] we will
briefly comment now on the different subtypes for general states ( pure or not ) . * type 0 : fully separable states , no quantum entanglement . * type 1 : biseparable states , bipartite entanglements only . * subtype 1.0 : simply biseparable states with no reduced entanglement . two of the qubits are entangled when the total three - qubit state is considered , but when the other qubit is traced over , the reduced two - particle state is separable . this subtype exists only for non - pure states : for instance , a biseparable state formed as an equal - weight mixture of two different bell states of the same pair of qubits has a reduced two - qubit state that is separable ; its graphic representation is ( fig.1 ) without the segment connecting the lower two qubits . * subtype 1.1 : the three - qubit state is simply biseparable , and when the separable qubit is traced over , the other two remain entangled . all pure biseparable states are of this subtype : any pure state , with entangled , gives an entangled reduced state ; there are also non - pure states of this subtype : for instance , the family of biseparable states , with negativities ; therefore these reduced states are entangled for , and the three - qubit state is in subtype 1.1 ; for , it is the state previously considered as an example of subtype 1.0 ; note that this state is a continuous limit of states of subtype 1.1 . * subtype 1.2 : generalized biseparable states with bipartite entanglements in two pairs of qubits , for instance , with and at least one entangled for each kl . according to the possible entanglements of their reduced states , several subtypes are possible in principle ; we omit the graphic representation of these subtypes . * subtype 1.3 : generalized biseparable states with bipartite entanglements in the three pairs of qubits , for instance : , with and the three entangled . according to the possible entanglements of their reduced states , several subtypes are possible in principle ; we omit also the graphic representation of these subtypes . generalized biseparable states of subtypes 1.2 and 1.3 exist only in the non - pure case . * type 2 : fully inseparable states , non - zero full tripartite entanglement . subtypes 2.0 to 2.3 exist for pure and non - pure states . * subtype 2.0 : their three reduced entanglements are zero . we will call them _ ghz - like states _ , because the well known ghz states ( see [ sec:4 ] ) belong to this subtype . the entanglement of ghz - like states disappears if any of the three qubits is traced over ; their entanglement is _ fragile _ . * subtype 2.1 : one reduced entanglement is non - zero , and the other two are zero . * subtype 2.2 : two reduced entanglements are non - zero . for pure states they have been called _ star shaped states _ . * subtype 2.3 : their three reduced entanglements are non - zero . we will call them _ w - like states _ because the well known w states ( see also [ sec:4 ] ) belong to this subtype . the entanglement in these states survives the loss ( tracing over ) of any of the three qubits ; it is _ robust _ . for pure states our classification is very easy to implement . a qubit is factorizable if and only if the reduced state of the other two qubits is pure ( see for instance chapter 8 of ) .
in the affirmative case , this pure two - qubit state is factorizable if and only if its reduced one - qubit states are pure ; the total state is then fully separable ( type 0 ) ; otherwise the reduced two - qubit state is entangled and the total state is biseparable ( type 1 ) . if no qubit is factorizable , the total state is fully entangled ( type 2 ) ; the subtypes 2.0 , 2.1 , 2.2 or 2.3 can be ascertained by calculating the negativities or concurrences of the reduced two - qubit states to determine the number of entangled pairs ( a small code sketch of this procedure appears below ) . no ambiguity remains in the qualitative application of our classification to pure states . in [ sec:4 ] we will show the results for all possible gsds of pure three - qubit states , obtaining the explicit form of the vectors in each one of our classes , and the quantitative values of their entanglements . for non - pure states the situation is more complicated . we do not know of any measure that would unambiguously identify entanglement in general non - pure three - qubit states . therefore , the discrimination between fully separable , biseparable and fully entangled states remains open in general , although it can be answered in some particular cases , like those of the non - pure examples of subtypes 1.0 and 1.1 given previously . in [ sec:3 ] we will propose a measure that , even if it is not a complete solution , in some cases improves on previous results , as we shall see in the examples of [ sec:5 ] . there are families of states depending on one or several parameters , such that by continuous variations of these the state goes from one type to another . or more generally , there are states in two different types or subtypes such that the distance between them is arbitrarily small ; we will show examples in section 5 . parts of this classification have antecedents in the literature . in , a classification of pure three - qubit states based only on the number of the entangled reduced states was proposed . but with only this criterion , fully separable states ( type 0 in our classification ) and ghz - like states ( type 2.0 ) are in the same class ; types ( i=1,2,3 ) and are also in another common class , and so on . this classification leaves out the more important property of the entanglement of three - qubit states : no entanglement , bipartite entanglements only , or full tripartite entanglement . in , the same authors introduced new classes for non - pure states , taking into account the existence of classical correlations ( separable but non - factorizable states ) ; in this paper we shall restrict ourselves to quantum correlations .
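the qualitative procedure for pure states described above is straightforward to code ; the sketch below uses the equivalent criterion that a qubit factorizes exactly when its one - qubit reduced state is pure , and then counts entangled reduced pairs via their negativities ( the ghz and w test states are standard examples ) :

....
import numpy as np

def ptrace(rho, keep, n=3):
    # partial trace over the qubits not in `keep` (all local dimensions 2)
    r = rho.reshape([2] * (2 * n))
    m = n
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        r = np.trace(r, axis1=q, axis2=q + m)
        m -= 1
    d = 2 ** len(keep)
    return r.reshape(d, d)

def negativity_2q(rho):
    # factor-2 negativity of a two-qubit state via the partial transpose
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    eig = np.linalg.eigvalsh(pt)
    return -2.0 * float(eig[eig < 0].sum())

def classify(psi, tol=1e-9):
    rho = np.outer(psi, psi.conj())
    # for a pure total state, qubit q factorizes iff its reduced state is pure
    pure = [abs(np.trace(ptrace(rho, [q]) @ ptrace(rho, [q])) - 1) < tol
            for q in range(3)]
    if all(pure):
        return "type 0: fully separable"
    if any(pure):
        return "type 1: biseparable"
    pairs = [(0, 1), (0, 2), (1, 2)]
    k = sum(negativity_2q(ptrace(rho, list(p))) > tol for p in pairs)
    return f"type 2, fully entangled, with {k} entangled reduced pair(s)"

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[[1, 2, 4]] = 1 / np.sqrt(3)
print(classify(ghz))   # 0 entangled reduced pairs: ghz-like, fragile
print(classify(w))     # 3 entangled reduced pairs: w-like, robust
....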
in , a classification of three-qubit entanglement into fully separable, biseparable (in the sense that we denote as simply biseparable) or fully inseparable states was given; generalized biseparable states were not considered as a class different from fully inseparable ones, and the existence or not of reduced binary entanglements was not considered. an ideal measure of the full tripartite entanglement of three qubits should have at least the following characteristics: i) to be zero for any fully separable or biseparable state and non-zero for any fully entangled state, ii) to be invariant under lu, iii) to be non-increasing under locc, that is, to be an entanglement monotone. condition i) seems self-evident, but some proposals for tripartite entanglement measures do not fulfill it, even for pure states, as we will see below; the proposal that we will make (tripartite negativity) satisfies both parts of i) for pure states. conditions ii) and iii) are the mathematical expression of the non-local character of entanglement. some authors include other desirable conditions, but we will restrict ourselves to the three conditions listed above. there are in the literature proposals for measures of tripartite entanglement of pure states, for instance those in , , , , ; besides the specific objections on which we will comment below, they cannot in general be extended to non-pure states. in , and , some particular results were given for non-pure states. in , a generalization of the bipartite concurrence, called the _3-tangle_, is proposed. the original w states have 3-tangle equal to zero; in fact, we found that the 3-tangle is zero for a large number of pure states in subtype (w-like), although none of their qubits is separable (the three reduced states for these w-like states are all non-pure).
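this claim is easy to check numerically. for pure states the 3-tangle can be obtained from the monogamy equality of coffman, kundu and wootters, tau = c^2_{a(bc)} - c^2_{ab} - c^2_{ac} with c^2_{a(bc)} = 4 det rho_a; the sketch below (python/numpy, our own illustration, not code from the cited works) evaluates it for the w and ghz states:

```python
import numpy as np

SY2 = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))

def concurrence(rho):
    """wootters concurrence of a (possibly mixed) two-qubit state."""
    rho_tilde = SY2 @ rho.conj() @ SY2
    ev = np.linalg.eigvals(rho @ rho_tilde)            # real, non-negative
    lam = np.sort(np.sqrt(np.abs(ev.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced(psi, keep):
    """reduced density matrix of a pure 3-qubit state, keeping qubits in keep."""
    t = psi.reshape(2, 2, 2)
    out = [q for q in range(3) if q not in keep]
    d = 2 ** len(keep)
    return np.tensordot(t, t.conj(), axes=(out, out)).reshape(d, d)

def three_tangle(psi):
    """ckw 3-tangle of a pure state via the monogamy equality
    tau = c^2_{a(bc)} - c^2_{ab} - c^2_{ac}, with c^2_{a(bc)} = 4 det rho_a."""
    psi = psi / np.linalg.norm(psi)
    t_a_bc = 4.0 * np.linalg.det(reduced(psi, [0])).real
    return t_a_bc - concurrence(reduced(psi, [0, 1])) ** 2 \
                  - concurrence(reduced(psi, [0, 2])) ** 2

w = np.zeros(8, complex); w[[1, 2, 4]] = 3 ** -0.5
ghz = np.zeros(8, complex); ghz[0] = ghz[7] = 2 ** -0.5
print(round(three_tangle(w), 10))    # 0.0: no 3-tangle, yet no qubit factors out
print(round(three_tangle(ghz), 10))  # 1.0: maximal for the ghz state
```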
therefore, the 3-tangle is not a good measure of full tripartite entanglement even for pure states; it is a measure of something that has been called _residual entanglement_ by some authors; in , it was used to characterize one of the slocc classes (called by the authors). yu and song showed that any good measure of two-particle entanglement could be extended to multiparticle systems by taking bipartite partitions of them; they would define the following measure of tripartite entanglement: . this is the idea underlying and , which generalize the concurrence, and , which generalize von neumann's entropy of reduced states. since the three terms in (2) separately verify conditions ii) and iii), so does the tripartite additive measure . this was used in , , and to prove that generalizations of the concurrence and of von neumann's entropy of reduced states verify conditions ii) and iii). but, with any of these choices, would be non-zero for pure biseparable states of subtype , violating the first part of condition i). the same would also happen if a probability density function were used, as in . it is possible to avoid this objection by using the geometric mean instead of the arithmetic one: . this idea was proposed, in a more general context, in . so, if is the tangle (the square of the concurrence), we have a multiplicative redefinition of the global entanglement (whose additive version was introduced in and ), and if is von neumann's entropy of reduced states, we have a redefinition of the additive measure that appears in . the same argument as in the previous paragraph proves that these redefined product versions of and verify conditions ii) and iii). von neumann's entropy of reduced states is an unambiguous measure of entanglement only for pure states, and the concurrence, although well defined for non-pure states of two qubits, has been extended in a practical way to higher dimensions only for pure states. therefore, in order to have a measure of tripartite entanglement valid also for non-pure states, we will use the negativity. we will define the tripartite negativity of a state as , where the bipartite negativities are defined as in (1), , being the negative eigenvalues of , the partial transpose of with respect to subsystem , with , and , respectively. the bipartite negativities verify conditions ii) and iii), and so our tripartite negativity verifies them too. for pure states, our multiplicative tripartite negativity fulfills the three conditions listed at the beginning of this section. although the qualitative classification of pure three-qubit states was easily done in [sec:2], the tripartite negativity adds a quantitative appraisal of the full tripartite entanglement of these states. for non-pure two-party states in dimensions there are entangled states with zero negativity , . therefore, violates the second part of i).
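as a minimal numerical illustration of this definition (python/numpy, a sketch of ours; the negativity is taken as the sum of the absolute values of the negative eigenvalues of the partial transpose, as in (1)):

```python
import numpy as np

def partial_transpose_qubit(rho, q):
    """partial transpose of an 8x8 three-qubit density matrix
    with respect to qubit q (0, 1 or 2)."""
    r = rho.reshape(2, 2, 2, 2, 2, 2)     # indices (a, b, c, a', b', c')
    axes = list(range(6))
    axes[q], axes[q + 3] = axes[q + 3], axes[q]
    return r.transpose(axes).reshape(8, 8)

def bipartite_negativity(rho, q):
    """negativity of the bipartition qubit q vs the other two."""
    ev = np.linalg.eigvalsh(partial_transpose_qubit(rho, q))
    return float(-ev[ev < 0].sum())

def tripartite_negativity(rho):
    """geometric mean of the three one-vs-two bipartite negativities."""
    n = [bipartite_negativity(rho, q) for q in range(3)]
    return (n[0] * n[1] * n[2]) ** (1.0 / 3.0)

ghz = np.zeros(8, complex); ghz[0] = ghz[7] = 2 ** -0.5
w = np.zeros(8, complex); w[[1, 2, 4]] = 3 ** -0.5
for name, psi in (("ghz", ghz), ("w", w)):
    rho = np.outer(psi, psi.conj())
    print(name, round(tripartite_negativity(rho), 4))   # ghz: 0.5, w: ~0.4714
```

for the ghz state each one-vs-two negativity equals 1/2, while its two-qubit reduced states are separable; this is the quantitative counterpart of the fragile entanglement of subtype in [sec:2].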
on the other hand, could be non-zero for generalized biseparable states, violating also the first part of i). we could fulfill desideratum i) if we were able to replace the negativity with some other measure that is non-zero for any entangled two-party state in dimension , and also found a way to discriminate unambiguously between generalized biseparable and fully entangled states of three qubits: these are difficult and still open problems. the set of three bipartite measures , , contains more information than the geometric mean . but even this set cannot completely discriminate between fully separable, biseparable and fully entangled general mixed states, and therefore does not essentially improve on the single tripartite negativity , at least for classification purposes. from the results in , it can be proved that is a sufficient condition for distillability to a ghz state (_ghz-distillability_), a property of central importance in quantum computation; therefore, the tripartite negativity is useful also for non-pure states, even if it does not solve the separability vs. entanglement problem. our objective in this section is to divide the infinite set of lu equivalence classes of three-qubit pure states into six large subsets corresponding to the six subtypes of fig. 1. since a lu equivalence class is defined by a set of gsd coefficients, we exhaustively analyzed the type of entanglement of all possible sets. the full entanglement of the three-qubit state can be checked by studying its factorizability, as explained in [sec:2]. the entanglement of the reduced states (non-pure in general) will be determined by calculating their negativity (for these mixed two-qubit states we could also have used the concurrence, but not von neumann's entropy). the results are listed below. in the following, we will denote the five coefficients of the gsd by ; an arbitrary three-qubit pure state can always be transformed to : . this decomposition is symmetric under the interchange of the last two qubits, but not under the exchange of any of them with the first. a more elegant, totally symmetric gsd with five coefficients is also possible, but the algorithm to obtain its coefficients from an arbitrary initial state is much more complicated; therefore it is not very useful for practical purposes. although the canonical form (5) contains five complex coefficients, it depends on only five independent real parameters; by suitably choosing the relative phases for each qubit and the global phase of the state, one can express it in terms of five positive coefficients, related by the normalization condition, and a unique relevant phase, as was done in .
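for reference, a canonical form of this kind is the five-term form of acín et al.; we transcribe it below (the qubit ordering is our assumption, and, unlike the decomposition (5) used here, this form is not manifestly symmetric under exchange of the last two qubits):

```latex
|\psi\rangle \;=\; \lambda_0\,|000\rangle
 \;+\; \lambda_1 e^{i\varphi}\,|100\rangle
 \;+\; \lambda_2\,|101\rangle
 \;+\; \lambda_3\,|110\rangle
 \;+\; \lambda_4\,|111\rangle ,
\qquad
\lambda_i \ge 0, \quad \sum_i \lambda_i^2 = 1, \quad \varphi \in [0,\pi].
```

five non-negative moduli constrained by normalization, plus the single relevant phase, give the five independent real parameters mentioned above.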
but starting from several generic vectors, different choices of phases will in general be needed to obtain a simplified form of this kind; therefore, in order to compare the canonical forms for different initial vectors, we will postpone any phase choices (and the normalization condition) until they are convenient; we will come back to this below when discussing ghz states. now we will show the canonical forms for our six entanglement subtypes:
* type 0: the conditions for full separability are listed for instance in , in terms of the eight coefficients of the hilbert space . the gsd forms can be obtained from those conditions, taking into account that three coefficients (in the notation of ) are always zero.
* subtype : they have one of the following gsd forms:
+ (where and or can be zero; if , the state is of type 0).
+ (where can be zero)
+ (where can be zero).
+ for instance, for state the bipartite reduced states , are separable, while is entangled, with negativity .
* subtype (ghz-like states):
+ the states with these properties are of the canonical form .
+ the three separable reduced states are of the form .
+ in particular, if the two coefficients are equal in modulus we will say that we have a ghz state, and denote it by . all the vectors of the form , with real, belong to the bidimensional space of ghz states. with the choice of canonical form made in , only the final vector would have been obtained; then two orthogonal initial vectors in the ghz class would have been reduced to the same canonical vector, while by postponing the choice of relative phases we have preserved the dimensionality of the ghz class and, in particular, the orthogonality relations in it.
+ fig. 2 shows several multiplicative measures of the full tripartite entanglement of ghz-like states, as functions of the coefficient (now we impose the normalization condition ).
+ [fig. 2 caption: (solid line), (dash line) and (dot line), as a function of , for ghz-like states; the graphic is symmetric and can be thought of as representing the simultaneous interchange of the values 0 and 1 for the three qubits. the three measures induce the same order for the full tripartite entanglement; ghz states ( ) are the ones with maximum full tripartite entanglement ( ); is more sensitive for small values of .]
+ the three measures induce the same order for the full tripartite entanglement; if a state has larger entanglement than another with one measure, it has larger entanglement with the others. the negativity is more sensitive for small values of .
+ we have performed a similar analysis for all the other classes, but the results depend on more than one parameter and are more difficult to represent; therefore we will not show them here. the results show in all cases that the three measures considered above give qualitatively similar behaviours, and that the maximum full tripartite entanglement of any three-qubit state corresponds to the lu equivalence class of ghz states. another explicit example will be given below when discussing subtype .
* subtype : states of this class are those with one of the three following gsd forms:
+ ,
+ ,
+ ,
+ with the three coefficients non-zero in each case.
+ for instance, for state the separable reduced states are \rho^{(ab)} = \rho^{(ac)} = (\alpha|0\rangle + \beta|1\rangle)(\alpha^{\star}\langle 0| + \beta^{\star}\langle 1|) \otimes , while is entangled, with negativity . note that if one of the three coefficients goes to zero, the state changes its classification; in the limit we obtain a state of subtype , in the limit a state of subtype , and in the limit a state of subtype .
* subtype , or star-shaped states: the gsd forms that have these properties are
+ ,
+ ,
+ with all the coefficients non-zero.
+ for instance, for state the separable reduced state is , while and are entangled. in this case, although the negativities are perfectly computable for any value of the coefficients, their analytic expressions in terms of generic coefficients are not very manageable. thus, we give here the concurrences: , and (for two qubits, concurrence and negativity coincide for pure states, but not for mixed states like , ).
* subtype , or w-like class: states with the most general gsd form (five coefficients , , , , different from zero) belong to this class, and also those that have and/or equal to zero.
+ states with the canonical form , with the three coefficients non-zero, are examples of this class. for these states, the bipartite reduced states are:
+ , , and , with negativities: ;
+ ;
+ .
+ in particular, if the three coefficients in are equal in modulus we will denote the state by and refer to it as a w state; the original symmetric w state , has a gsd of this form.
+ the maximum values of the tripartite entanglement measures in the w-like class are reached for the states : , , .
+ the negativities of the reduced bipartite states of the states (which are the same as the negativities for the symmetric state ) are .
our classification of pure three-qubit states in terms of full tripartite entanglement and reduced binary entanglements could be used to give a physical interpretation to the abstract gsd classes of and . we do not have a canonical form like the gsd of the pure case to simplify the general form of mixed states; therefore we will restrict ourselves to some concrete uniparametric families of non-pure states, to show some of the improvements allowed by the use of the tripartite negativity, the problems remaining, and the relations with previous works. as a first example we will consider a family of mixed ghz and w states: , where has been defined in (6) and . its tripartite negativity is ( ), which is different from zero for any value of the parameter (see fig. 3) and has an absolute maximum for pure ghz states and a secondary maximum for pure w states (fig. 3). this excludes full separability or simple biseparability for any state in the family, but cannot discriminate between distributed binary entanglement and full tripartite entanglement. nevertheless, this result improves on , where the 3-tangle for this family was found to be zero for a value of the parameter (generalized biseparability was not considered as distinct from full entanglement in ).
+ [fig. 3 caption: tripartite negativity of the mixed ghz / w states , , as a function of p.]
our second example is a family of mixed states with non-zero tripartite negativity only for some values of a parameter: , where 1 is the unit matrix and . we find that if , and if , reaching a maximum for (a pure ghz state). dür et al.
showed that this state is ghz-distillable if . therefore, for this family of mixed states our tripartite negativity quantifies ghz-distillability; it starts at and increases with , with an obvious maximum of 1 if the state is a ghz state.
+ finally, we will consider a family of three-qubit states that can be fully entangled or generalized biseparable and nevertheless have zero tripartite negativity. in , a family of mixed states of the form , with , was considered. these states have zero negativity for any value of the parameter b, although they were proved to be non-separable. we do not know of any practical measure of bipartite entanglement that will discriminate these bound entangled states from separable ones. biconcurrence could do this in principle; unfortunately its determination requires the calculation of the minimum of a certain function over all unitary operators in a hilbert space of dimension 64, and no efficient way to do this is known. we can convert these states in dimension to three-qubit states simply by taking the following basis in the 4-dimensional space: . the bipartite negativities are then , , , and therefore . according to , the states are not separable into a and bc subsystems. since the other two negativities are non-zero, we also know that they are not fully separable nor simply biseparable. thus, this family of states can be fully entangled or generalized biseparable.
we have proposed a classification of three-qubit states based on the existence of bipartite and tripartite entanglements and the diverse possibilities for the reduced binary entanglements, including a graphic representation for pure states that can be extended to non-pure ones, although we have not done so here for reasons of space. we have considered a measure (tripartite negativity) of the full tripartite entanglement that avoids some of the problems of previous proposals; for pure states this measure quantifies full tripartite entanglement and confirms the distinction between fully entangled states and biseparable or fully separable ones that was obtained in [sec:2] in a qualitative way. we have also given the explicit form of the pure states in each subtype of our classification, after performing an easily computable gsd to a simplified canonical form; in the simplest cases we have compared their tripartite negativity with other multiplicative generalizations of bipartite entanglement measures, concluding that they induce the same ordering of full tripartite entanglement. we have analyzed some non-pure states that have non-zero tripartite negativity (a sufficient condition for ghz-distillability) or that have zero tripartite negativity although they are known to be entangled, to show the problems that remain in the practical classification of mixed three-qubit states.
+ we thank g. álvarez, d. salgado, l. lamata and j. león for valuable discussions and help. we acknowledge financial support from the csic i3 program.
l. gurvits , journal of computer and system sciences * 69 * , 448 ( 2004 ) .
a. acín , a. andrianov , l. costa , e. jané , j.i. latorre , r. tarrach , phys . * 85 * , 7 ( 2000 ) .
a. acín , a. andrianov , e. jané , r. tarrach , j. phys . gen . * 34 * , 6725 ( 2001 ) .
w. dür , g. vidal and j. cirac , phys . rev . a * 62 * , 062314 ( 2000 ) .
l. lamata , j. león , d. salgado and e. solano , phys . a 74 , 052336 ( 2006 ) .
m. plesch , v. bužek , phys . a * 67 * , 012322 ( 2003 ) .
m. plesch , v. bužek , phys . a * 68 * , 012313 ( 2003 ) . v.
coffman , j.kundu , w.k .wootters , phys .rev a * 61 * , 0532306 ( 2000 ) .chang - shui yu , he - shan song , phys . lett a 330 , 377 ( 2004 ) .d. meyer , n.r .wallach , j. of math .phys . , * 43 * , pp.4273 ( 2002 ) .brennen , quantum information and computation , vol . 3 ( 6 ) , 619 - 626 ( 2003 ). f. pan , d. liu , g. lu , j.p .draayer , int.j.theor.phys .* 43 * , 1241 ( 2004 ) .p.facchi , g. florio and s. pascazio , phys .a * 74 * , 042331 ( 2006 ) .werner , phys .a * 40*,4277 ( 1989 ) .t. radtke , s. fritzsche , comput .. comm . * 175 * , 145 ( 2006 ) .sakurai , _ modern quantum mechanics _ , addison - wesley publishing company ( 1994 ) , 183 - 184 .wootters , s. hill , phys .lett * 78 * , 5022 - 5025 ( 1997 ) .g. vidal , r.f .werner , phys . rev . a * 65 * , 032314 ( 2002 ) . s. barn , s.j.d .phoenix , phys .a * 44 * , 535 ( 1991 ) .p. rungta , v. buzek , c. m. caves , m. hillery , g. j. millburn , phys . rev .a , * 64 * , 042315 ( 2001 ) .r. horodecki , m. horodecki , p. horodecki , k. horodecki , quant - ph/070225 .p.badziag , p. deuar , m. horodecki , p. horodecki , and r. horodecki , j. mod . opt . * 49 * , 1289 ( 2002 ) .a. miranowicz , a. grudka , j. opt b : quantum semiclass .optics 6 542 - 548 ( 2004 ) .a. miranowicz , a. grudka , phys .a * 70 * , 032326 ( 2004 ) .plenio , s. virmani , quant .inf . comp .* 7 * , 1 ( 2007 ) .plenio , phys .lett . * 95 * , 090503 ( 2005 ) .m. horodecki , p. horodecki , r. horodecki , phys .lett . * 80 * 5239 - 5242 ( 1998 ) .p. horodecki , phys .a , * 232 * , 333 ( 1997 ) .a. acn , d. bru , m.lewnstein , a. sanpera , phys .lett . * 87 * , 040401 ( 2001 ) .d.bru et al . phys .a * 72 * , 014301 ( 2005 ) .w. dr , j.i .cirac , r. tarrach , phys .lett . * 83 * 3562 - 3565 ( 1999 ) .t. eggeling and r. f. werner , phys .a * 63 * , 042111 ( 2001 ) .w. laskowki , m. zukowski , phys .a , * 72 * , 062112 ( 2005 ) .ballentine , _ quantum mechanics : a modern development _ , world scientific publishing , 1998 .g. vidal , j. mod . opt . * * 47**355 ( 2000 ) .chang - shui yu , he - shan song , phys .a * 73 * , 022325 ( 2006 ) .chang - shui yu , he - shan song , phys .a * 73 * , 032322 ( 2006 ) .tzu - chieh wei , paul m. goldbart , phys .a * 68 * , 024307 ( 2003 ) .r. lohmayer , a. osterloh , j. siewert , a. uhlmann , phys .lett . * 97 * , 260502 ( 2006 ) .love et al . , quantum information processing * 6 * , 187 ( 2007 ) .akhtarshenas , j. phys .a * 38 * , 6777 ( 2005 ) .h. a. carteret , a. higuchi , a. sudbery , j. math .phys . * 41 * , ( 2000 ) 7932 - 7939 .chang - shui yu , he - shan song , phys .a * 72 * , 022333 ( 2005 ) .
we present a classification of three-qubit states based on their three-qubit and reduced two-qubit entanglements. for pure states these criteria can be easily implemented, and the different types can be related to sets of equivalence classes under local unitary operations. for mixed states the characterization of full tripartite entanglement is not yet solved in general; some partial results will be presented here.
let us consider a portfolio of loans. let the notional of loan be equal to . then loan represents fraction of the notional of the whole portfolio. this means that if loan defaults and the entire notional of the loan is lost, the portfolio loses fraction , or , of its value. in practice, when a loan defaults, a fraction of its notional will be recovered by the creditors. thus the actual loss given default (lgd) of loan is fraction , or , of the notional of the entire portfolio. we now describe the gaussian m-factor model of portfolio losses from default. the model requires a number of input parameters. for each loan we are given a probability of its default. also, for each and each we are given a number such that . the number is the loading factor of loan with respect to factor . let and be independent standard normal random variables. let be the cdf of the standard normal distribution. in our model, loan defaults if ; this indeed happens with probability . the factors are usually interpreted as the state of the global economy, the state of the regional economy, the state of a particular industry and so on. thus they are the factors that affect the default behavior of all, or at least a large group of, loans in the portfolio. the factors are interpreted as the idiosyncratic risks of the loans in the portfolio. let be defined by . we define the random loss caused by the default of loan as , where is the recovery rate of loan . the total loss of the portfolio is . an important property of the gaussian factor model is that the s are not independent of each other. their mutual dependence is induced by the dependence of each on the common factors . historical data supports the conclusion that losses due to defaults on different loans are correlated with each other. historical data can also be used to calibrate the loadings . when the values of the factors are fixed, the probability of default of loan becomes . the random losses become conditionally independent bernoulli variables with mean given by and variance given by . by the central limit theorem, the conditional distribution of the portfolio loss , given the values of the factors, can be approximated by the normal distribution with mean and variance . we define the cdf of the unconditional portfolio loss by for all real numbers . since the conditional loss distribution is approximately normal, the cdf can be approximated by , where is the cdf of the normal distribution with mean and variance , while is the density of the standard normal distribution. in this section we apply the proposed algorithm to the single-factor gaussian model of a portfolio with names. we choose a 125-name portfolio because it is the size of the standard djcdx.na.ig portfolio. we choose a single-factor model because it is the one most frequently used in practice. the parameters of the portfolio are , where .
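to make the construction concrete, here is a small sketch (python with numpy/scipy; the homogeneous 125-name parameters are illustrative stand-ins of ours, since the actual portfolio parameters of ( [ port ] ) are not reproduced in this text) that evaluates the approximate loss cdf by integrating the conditional normal cdf over the single factor with gauss-hermite quadrature:

```python
import numpy as np
from scipy.stats import norm

# illustrative stand-ins for the 125-name portfolio parameters
n = 125
p = np.full(n, 0.01)            # default probabilities
a = np.full(n, 0.4)             # single-factor loadings, |a| < 1
f = np.full(n, 1.0 / n)         # notional fractions
rec = np.full(n, 0.40)          # recovery rates
lgd = f * (1.0 - rec)           # loss given default, fraction of notional
c = norm.ppf(p)                 # default thresholds

# gauss-hermite quadrature adapted to the standard normal factor y
y, wts = np.polynomial.hermite_e.hermegauss(64)
wts = wts / np.sqrt(2.0 * np.pi)

def loss_cdf(x):
    """approximate p(l <= x): the conditional clt approximation of the text,
    integrated against the density of the common factor."""
    total = 0.0
    for yk, wk in zip(y, wts):
        pk = norm.cdf((c - a * yk) / np.sqrt(1.0 - a ** 2))  # conditional pds
        mu = np.dot(lgd, pk)                                  # conditional mean
        var = np.dot(lgd ** 2, pk * (1.0 - pk))               # conditional var
        total += wk * norm.cdf((x - mu) / np.sqrt(var))
    return total

for x in (0.01, 0.02, 0.05):
    print(f"p(loss <= {x:.0%}) ~ {loss_cdf(x):.4f}")
```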
in figure [ resultsfig ]we compare the cdf computed using monte carlo samples with the cdf computed using formula ( [ theformula ] ) .the agreement between the two is good .we add that the quality of approximation will improve even further for a bigger portfolio and many bank portfolios have much more than 125 names .[ resultsfig ]we define the ( value at risk ) of a given portfolio as the level of loss ( expressed as fraction of portfolio notional ) , such that the probability of the portfolio loss being less or equal to is equal to a predefined confidence level .typically , is chosen to be between and .thus we have from the results of the previous section it follows that an accurate approximation to can be found by solving the equation this equation can be solved , for example , by bisection . to find the solution to ( [ equv ] ) with accuracy of 1 basis point ( 0.01% ) we would need to evaluate the left hand side of ( [ equv ] ) no more than 14 times .* example . *_ we want to calculate the of the portfolio ( [ port ] ) with the confidence level .we solve the equation ( [ equv ] ) and round the solution to the nearest basis point to arrive at .we now run monte carlo simulation with samples and compute the probability that the portfolio loss is less or equal to . rounded to the nearest basis point it turns out to be .thus the of the portfolio is indeed ._ if desired , the convergence can be sped - up by using newton s method .after the is calculated we can calculate the economic capital by subtracting the average portfolio loss from .we now compare the algorithm proposed here with other proposed alternatives .the fft based methods require the computation of a large number of fourier transforms . to determine the with an error of less than 1 basis point ( 0.01% ) it is necessary to compute approximately 10,000 fourier transforms .each fourier transform is as expensive to evaluate as the left hand side of ( [ equv ] ) .thus our algorithm is significantly faster than the fft based methods .it is well known that the fft methods are much faster than the direct monte carlo simulation .thus our algorithm is much faster than the monte carlo approach .finally , the recursive approach of hull - white is comparable in speed to the algorithm proposed here .however , it assumes that all the loans in the portfolio have equal notionals and recovery rates .this is a very restrictive assumption which is unrealistic for many portfolios encountered in practice .our algorithm makes no assumptions about homogeneity of the portfolio .additionally , it is easier to implement than the algorithm of hull - white .since satisfies equation ( [ equv ] ) we can use the implicit function theorem to find its partial derivatives with respect to the parameters of the model .these partial derivatives are traditionally called the greeks in finance .we arrive at the following expressions and these expressions can be easily evaluated by numerical integration using hermite - gauss quadrature .we proposed an algorithm for computing the and the economic capital of a loan portfolio in the gaussian factor model .the proposed method was tested on a portfolio of 125 names and gave high accuracy results. 
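as a companion to the discussion above, a self-contained sketch of the var bisection (python with numpy/scipy; the homogeneous portfolio is again an illustrative stand-in of ours, not the portfolio ( [ port ] )); note that halving an interval of length 1 down to 1 basis point indeed takes at most 14 evaluations, as stated in the text:

```python
import numpy as np
from scipy.stats import norm

Y, W = np.polynomial.hermite_e.hermegauss(64)
W = W / np.sqrt(2.0 * np.pi)

def loss_cdf(x, n=125, p=0.01, a=0.4, lgd=0.6 / 125):
    """one-factor loss cdf for a homogeneous illustrative portfolio."""
    pk = norm.cdf((norm.ppf(p) - a * Y) / np.sqrt(1.0 - a * a))
    mu, var = n * lgd * pk, n * lgd ** 2 * pk * (1.0 - pk)
    return float(np.sum(W * norm.cdf((x - mu) / np.sqrt(var))))

def value_at_risk(q=0.999, lo=0.0, hi=1.0, tol=1e-4):
    """solve loss_cdf(v) = q by bisection; tol = 1 bp takes <= 14 steps."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if loss_cdf(mid) < q else (lo, mid)
    return 0.5 * (lo + hi)

v = value_at_risk()
expected_loss = 125 * (0.6 / 125) * 0.01   # average loss of this portfolio
print(f"var(99.9%) ~ {v:.4f}, economic capital ~ {v - expected_loss:.4f}")
```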
the accuracy will be even higher for portfolios with more names, and many bank portfolios are much larger than 125 names. the proposed algorithm is much faster than the fft-based methods and the brute-force monte carlo approach. the speed of the hull-white algorithm is comparable to that of the algorithm proposed here, but the hull-white algorithm requires that all the loans in the portfolio have equal notionals and recovery rates. this is a very restrictive assumption which is unrealistic for many portfolios encountered in practice. our algorithm makes no assumptions about the homogeneity of the portfolio. also, it is easier to implement than the algorithm of hull-white. additionally, we obtained analytical expressions for the greeks using the implicit function theorem. we also note that the algorithm can be extended trivially to the case of stochastic recovery rates and of recovery rates correlated with the state of the factor variables. some of the ideas used in this paper were previously explored in and .
i thank my adviser a. chorin for his help and guidance during my time at uc berkeley. i thank mathilda regan and valerie heatlie for their help in preparing this article. last, but not least, i thank my family for their constant support.
p. okunev . using hermite expansions for fast and arbitrarily accurate computation of the expected loss of a loan portfolio tranche in the gaussian factor model . report lbnl-57835 , lawrence berkeley national laboratory , berkeley , ca , 2005 .
we propose a fast algorithm for computing the economic capital , value at risk and greeks in the gaussian factor model . the algorithm proposed here is much faster than brute force monte carlo simulations or fourier transform based methods . while the algorithm of hull - white is comparably fast , it assumes that all the loans in the portfolio have equal notionals and recovery rates . this is a very restrictive assumption which is unrealistic for many portfolios encountered in practice . our algorithm makes no assumptions about the homogeneity of the portfolio . additionally , it is easier to implement than the algorithm of hull - white . we use the implicit function theorem to derive analytic expressions for the greeks .
i suggest to define the ( general ) misr as this has the following advantages : * the name misr still works . * for , it is the standard misr . * for , we have in close analogy to the asymptotics at .essentially the is replaced by , while is replaced by . *the definition of the gain is unchanged , , it is still this should resolve items 7 and 8 below . + here are some simulations for nakagami- fading .so the efir does depend _ slightly _ on the fading parameter .one comment about this : we have and since , , with very good approximation as soon as is not too small , we have quickly . + and here are the resulting gains : the gains are relatively constant . increases slightly , so using for will give a conservative ( safe ) bound .as far as is concerned , is essentially the no - fading case ( which i think has been observed elsewhere ) . 1 .add prior work : xinchen , naoto miyoshi .also cite li et al .2 . simulation for lattices : + + so the efir is more sensitive to than the misr . in this range at least , it is roughly , but this obviously can not hold for smaller . + general observation : since for , a.s . , we have for all stationary point processes .( new . ) * for advanced transmission schemes ( , bs silencing , joint transmission , interference cancellation ) , how is the efir defined ?in particular , what is the relevant fading random variable and the relevant interference ? 4 .plots of the horizontal shift for the lattices ( together with and ) : + + let , , and . +* minimum gap . *generally , and , , is finite , and close to in these cases .so it appears that the gap is always first decreasing , then increasing .+ if this is the case , then the asappp approximation ( with ) would be slightly optimistic for . +* maximum gap . *it is possible that the maximum is always assumed at or , i.e. , so a shift by the maximum of and would always result in an upper bound : + + here are some simulation results for the gains on a linear scale : + + + for , and are essentially the same , and for all , there is essentially no difference in between the square and the triangular lattice . 5 . * ( this is a less pertinent task . ) * taking this one step further , we could propose a highly accurate approximation by interpolating between and as follows : this way should be extremely accurate , with exactly the right asymptotics on both ends .( i can try this out , say for the square lattice . )6 . about the moment : is it always proportional to , so that the product ( and thus efir ) does not depend on the fading , and is it always proportional to ? ( we can at least simulate some scenarios . ) 7 .gain with general fading : if , then \quad\rightarrow\quad g_0^{(m)}=\left(\frac{\e(\isr_{\rm ppp}^m)}{\e(\isr^m)}\right)^{1/m } \label{g0_a}\ ] ]so it seems that depends on , but how strongly ?simulations indicated that at least for modest .+ in this case , the asappp approximation is where is the success probability for the ppp with fading parameter ( which is not known , of course ) .+ at any rate , the second step here does _ not _ hold : 8 .related : is it possible to normalize " with respect to the fading parameter so that transmission techniques with different diversity can be compared ?i am thinking of comparing the ppp with rayleigh fading with another scheme with fading parameter as follows : this way , instead of comparing against the ppp with diversity , the comparison can be against the ppp with diversity : what is the more natural definition , or ? 
9 .introduce the _ relative distance process ( rdp ) _ and show how the misr can be calculated as what can be said about the rdp for general stationary point processes ? 1 .shall we also consider the relative vertical gap between the asappp approximations and the exact ( simulated ) curves ?for near 0 , the relevant quantity is the relative outage and for , it is the two curves look as follows : + + the blue curve is the one obtained using the interpolation in .+ the fact that the red and black solid curves in the right plot exceed for a short interval indicates that shifting by does not yield a lower bound valid for all .however , a shift by may yield an upper bound .2 . bounded path loss models ?the decay at the tail will be faster than any polynomial ( * ? ? ?* remark 6 ) .it should be given by the fading .3 . what is the gap in the no fading case ? 4 . a word on ?the distribution of the signal - to - interference ratio ( sir ) is a key quantity in the analysis and design of interference - limited wireless systems . herewe focus on general single - tier cellular networks where users are connected to the strongest ( nearest ) base station ( bs ) .let be a point process representing the locations of the bss and let be the serving bs of the typical user at the origin , , define .assuming all bss transmit at the same power level , the downlink sir is given by where are iid random variables representing the fading and is the path loss law .the complementary cumulative distribution ( ccdf ) of the sir is under the sir threshold model for reception , the ccdf of the sir can also be interpreted as the success probability of a transmission , , . in the case where is a homogeneous poisson point process ( ppp ) , rayleigh fading , and ,the success probability was determined in .it can be expressed in terms of the gaussian hypergeometric function as where . for ,remarkably , this simplifies to in , it is shown that the same expression holds for the homogeneous independent poisson ( hip ) model , where the different tiers in a heterogeneous cellular network form independent homogeneous ppps .for all other cases , the success probability is intractable or can at best be expressed using combinations of infinite sums and integrals .hence there is a critical need for techniques that yield good approximations of the sir distribution for non - poisson networks .it has recently been observed in that the sir ccdfs for different point processes and transmission techniques ( e.g. , bs cooperation or silencing ) _ appear to be merely horizontally shifted versions of each other _ ( in db ) , as long as their diversity gain is the same .consequently , the success probability of a network model can be accurately approximated by that of a reference network model by scaling the threshold by this sir gain factor ( or shift in db ) , , formally , the horizontal gap at target probability is defined as where is the inverse of the ccdf of the sir and is the success probability where the gap is measured .it is often convenient to consider the gap as a function of , defined as due to its tractability , the ppp is a sensible choice as the reference model .if the shift is indeed approximately a constant , , , then can be determined by evaluating for an arbitrary value of . 
as shown in ,the limit of as is relatively easy to calculate .here we focus in addition on the positive limit and compare the two asymptotic gains to demonstrate the effectiveness of the idea of horizontally shifting sir distributions by a constant .so the main focus of this paper are the asymptotic gains relative to the ppp , defined as follows . the asymptotic gains ( whenever the limits exist ) and are defined as where the ppp is used as the reference model .some insights on are available from prior work . in is shown that for rayleigh fading , is closely connected to the mean interference - to - signal ratio ( misr ) .the misr is the mean of the interference - to-(average)-signal ratio isr , defined as where is the mean received signal power averaged only over the fading .not unexpectedly , the calculation of the misr for the ppp is relatively straightforward and yields ( * ? ? ?( 8) ) in sec .iv in this paper . ] .since , the success probability can in general be expressed as where is the ccdf of the fading random variables .for rayleigh fading , and thus , , resulting in and so , asymptotically , shifting the ccdf of the sir distribution of the ppp is exact . and the lower bound ( which is asymptotically tight ) for the ppp ( dash - dotted ) . the horizontal gap between the sir distributions of the ppp and the triangular lattice is 3.4 db for a wide range of values . the shaded band indicates the region in which the sir distributions for all stationary point process fall that are more regular than the ppp] an example is shown in , where , which results in , while for the triangular lattice .hence the horizontal shift is db . for rayleigh fading, we also have the relationship by jensen s inequality , also shown in the figure . here is a lower bound with asymptotic equality . in ,the authors considered coherent and non - coherent joint transmission for the hip model and derived expressions for the sir distribution .the diversity gain and the asymptotic pre - constants as are also derived . in ,the benefits of bs silencing ( inter - cell interference coordination ) and re - transmissions ( intra - cell diversity ) in poisson networks with rayleigh fading are studied . for , it is shown that when the strongest interfering bss are silenced , while for intra - cell diversity with transmissions . for , and for bssilencing and retransmissions , respectively .the constants , , , and are also determined .lastly , ( * ?2 ) gives an expression for the limit for the ppp and the ginibre point process ( gpp ) with rayleigh fading .for the gpp , it consists of a double integral with an infinite product . in ,the authors consider a poisson model for the bss and define a new point process termed _ signal - to - total - interference - and - noise ratio ( stinr ) process_. they obtain the moment measures of the new process and use them to express the probability that the user is covered by bss . 
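to fix ideas, the following sketch (python with scipy; rayleigh fading and nearest-bs association are assumed, and the 3.4 db shift is the triangular-lattice gain quoted above) evaluates the ppp reference ccdf via the hypergeometric expression (written out explicitly in the derivation later in this paper as 1/2f1(1, -delta; 1-delta; -theta)) and applies the constant horizontal shift that underlies the asappp approximation:

```python
import numpy as np
from scipy.special import hyp2f1

def ps_ppp(theta, alpha=4.0):
    """sir ccdf of the typical user for ppp bss, nearest-bs association,
    rayleigh fading: p(sir > theta) = 1 / 2f1(1, -delta; 1-delta; -theta)."""
    delta = 2.0 / alpha
    return 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)

def ps_asappp(theta, gain_db, alpha=4.0):
    """asappp approximation for a non-poisson deployment: shift the ppp
    curve horizontally by a constant sir gain g (in db), i.e. use theta/g."""
    g = 10.0 ** (gain_db / 10.0)
    return ps_ppp(theta / g, alpha)

for theta_db in (-10, -5, 0, 5, 10, 15, 20):
    theta = 10.0 ** (theta_db / 10.0)
    print(f"{theta_db:>4} db: ppp {ps_ppp(theta):.4f}  "
          f"shifted by 3.4 db {ps_asappp(theta, 3.4):.4f}")
```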
in our work , we consider a different map of the original point process based on relative distances , which results in simplified moment measures for the ppp and permits generalizations to other point process models for the base stations .this paper makes the following contributions : * we define the _ relative distance process ( rdp ) _ , which is the relevant point process for cellular networks with nearest - bs association , and derive some of its pertinent properties , in particular the probability generating functional ( pgfl ) .* we introduce the _ generalized misr _ , defined as , which is applicable to general fading models , and give an explicit expression and tight bounds for the ppp .* we provide some evidence why the gain is insensitive to the path loss exponent and the fading statistics . *we show that for all stationary point process models and any type of fading , the tail of the sir distribution always scales as , , we have , , where the constant captures the effects of the network geometry and fading . the asymptotic gain follows as and we have * we introduce the _ expected fading - to - interference ratio ( efir ) _ and show that the constant is related to the efir by .consequently , is given by the ratio of the efir of the general point process under consideration and the efir of the ppp .the base station locations are modeled as a simple stationary point process . without loss of generality , we assume that the typical user is located at the origin .the path loss between the typical user and a bs at is given by , .let denote the ccdf of the iid fading random variables , which are assumed to have mean .we assume nearest - bs association , wherein a user is served by the closest bs .let denote the closest bs to the typical user at the origin and define and . with the nearest - bs association rule, the downlink sir of the typical user can be expressed as _ further notation : _ denotes the open disk of radius at , and is its complement .in this section , we introduce a new point process that is a transformation of the original point process and helps in the analysis of the interference - to - signal ratio . from , the misr is defined as the first expectation is taken over and , while the second one is only over since .since only depends on , it is apparent that the misr is determined by the relative distances of the interfering and serving bss .accordingly , we introduce a new point process on the unit interval that captures only these relative distances .[ def : rdp_def ] for a simple stationary point process , let . the _ relative distance process ( rdp ) _ is defined as using the rdp , the can be expressed as and , since , the misr is for the stationary ppp , the cdfs of the elements of are , ] such that the integral in the denominator of is finite .the pgfl ] .it is easily seen that this is not the case .let be a ppp on with the same intensity function as , , .if was a ppp , the success probability for rayleigh fading would follow from the pgfl of ( specializing to a ppp on ] . ] the factorial moment measures are defined as where indicates that the sum is taken over -tuples of _ distinct _ points .the moment measures are related to the pgfl as \label{mom_measures_pgfl}\ ] ] evaluated at . 
using lemma [ lem : pgfl_rdp ]we obtain = \\ \frac{1}{1+\sum_{i=1}^n s_i(t_i^{-2}-1)}.\ ] ] differentiating with respect to and setting , we have the moment densities follow from differentiation , noting that denotes the start of the interval , which causes a sign change since increasing decreases the measure .so the product densities are a factor larger than they would be if was a ppp .this implies , interestingly , that the pair correlation function ( * ? ? ?6.6 ) of the rdp of the ppp is , .the moment densities of the rdp provide an alternative way to obtain the success probability for the ppp : where . from the definition of the moment densities, we have ^n } \left(\prod_{i=1}^n\nu(\theta , t_i ) \right)\rho^{(n)}(t_1,t_2 , \hdots , t_n)\dd t_1\hdots \dd t_n \label{eq : pc_moment_measure}\end{aligned}\ ] ] using lemma [ lem : moment_densities_rdp ] , we have }\nu(\theta , t ) t^{-3 } \dd t \right)^n,\\ & = \sum_{n=0}^\infty { \theta^n(-1)^n}\left ( \frac { \delta \ , _ 2f_1\left(1,1-\delta;2-\delta;-\theta \right)}{1 -\delta}\right)^n\\ & = \sum_{n=0}^\infty { \theta^n(-1)^n}\left ( \,_2f_1(1,-\delta ; 1-\delta ; -\theta)-1\right)^n\\ & = \frac{1}{\,_2f_1(1,-\delta ; 1-\delta ; -\theta)},\end{aligned}\ ] ] which equals the success probability given in .we now characterize the pgfl of the rdp generated by a stationary point process .let be a positive function of the distance and the point process .the average ] , hence the asappp approximation follows as where is the success probability for the ppp with fading parameter , which is not known in closed - form . in , the sir ccdf for a poisson cellular networkwhen is gamma distributed is discussed .however , we have the exact from and the lower bound . transmissions in rayleigh fading channels over a single transmission in nakagami- fading channels , for poisson networks with . ] for nakagami- fading , the pre - constant is , and we have \\ & \lesssim 1-\frac{m^{m-1}}{\gamma(m)}\misr_1 m!\theta^m \\ & = 1-\misr_1(m\theta)^m , \end{aligned}\ ] ] where indicates an upper bound with asymptotic equality . adding the second term in the lower bound and noting that yields the slightly sharper result .\ ] ] the gain for general fading is applicable to arbitrary transmission techniques that provide the same amount of diversity , not just to compare different base station deployments . as an example , we determine the gain from selection combining of the signals from transmissions over rayleigh fading channels with a single transmission over nakagami- fading channels , both for poisson distributed base stations .the misr for the selection combining scheme follows from ( * ? ? ?shows that there is a very small gain from selection combining .simulation results indicate that at least for moderate , the scaling holds for arbitrary motion - invariant point processes .this implies that , which indicates that is insensitive to the fading statistics for small to moderate .next we show that the gain is also insensitive to the path loss exponent .illustrates the densities of the square and triangular lattices relative to the ppp s , which is , ] interval , the gains do not depend strongly on .indeed , if the density of the rdp of a general point process could be expressed as , we would have irrespective of .another way to show the insensitivity of the gain to is by exploring the asymptotic behavior of the misr for general point processes given in theorem [ thm : misr_pp ] in the high- regime .the result is the content of the next lemma . 
for a motion - invariant point process , where } \e^{!}_{y_0,\left(\|y_0\|,\varphi_1\right ) } [ g(\phi , y_0)]\rho^{(2)}_\phi\left(y_0,\left(\|y_0\|,\varphi_1\right)\right ) \dd \varphi_1 \dd y_0.\ ] ] the for a general point processis given by theorem [ thm : misr_pp ] as using the laplace asymptotic technique ( * ? ? ?6.419 ) , this shows that for arbitrary point processes decays as , which implies approaches a constant for large ( see ( a ) ) .in this section , we define the _ expected fading - to - interference ratio ( efir ) _ and explore its connection to the gain in .we shall see that the efir plays a similar role for as the misr does for .[ def : efir ] for a point process , let and let be a fading random variable independent of all . the _ expected fading - to - interference ratio _( efir ) is defined as \right)^{1/\delta } , \label{efir_def}\ ] ] where is the expectation with respect to the reduced palm measure of .here we use for the interference term , since the interference here is the total received power from all points in , in contrast to the interference , which stems from . _ remark ._ for the ppp , the efir does not depend on , since . to see this ,let be a scaled version of .then and thus .multiplying by the intensities , since .the same argument applies to all point processes for which changing the intensity by a factor is equivalent in distribution to scaling the process by , , for point processes where .this excludes hard - core processes with fixed hard - core distance but includes lattices and hard - core processes whose hard - core distance scales with .[ lem : efir_ppp ] for the ppp , with arbitrary fading , the term in can be calculated by taking the expectation of the following identity which follows from the definition of the gamma function . hence from slivnyak s theorem ( * ? ? ?8.10 ) , for the ppp , so we can replace by the unconditioned laplace transform , which is well known for the ppp and given by from , we have \gamma(1-\delta)s^\delta } s^{-1+\delta}\dd s \\ & = \frac{1}{\lambda \pi \e(h^\delta)\gamma(1-\delta)\gamma(1+\delta)}=\frac{\sinc\delta}{\lambda\pi \e(h^\delta)}.\end{aligned}\ ] ] so , and the result follows .remarkably , only depends on the path loss exponent .it can be closely approximated by .next we use the representation in to analyze the tail asymptotics of the ccdf of the sir ( or , equivalently , the success probability ) .[ thm : main ] for all simple stationary bs point processes , where the typical user is served by the nearest bs , from , we have . using the representation given in, it follows from the campbell - mecke theorem that the success probability equals \dd x,\end{gathered}\ ] ] where is a translated version of .substituting , \dd x\nonumber\\ & \stackrel{(a)}{\sim}\lambda\theta^{-\delta } \int_{\r^2 } \e^{!}_o \bar f\left(\|x\|^\alpha i_\infty \right ) \ddx,\quad \theta\to\infty \label{eq : alt}\\ & \stackrel{(b)}{=}\lambda\theta^{-\delta } \e^{!}_o(i_\infty^{-\delta } ) \int_{\r^2 } \bar f_h\left(\|x\|^\alpha \right ) \dd x,\quad \theta\to\infty,\nonumber\end{aligned}\ ] ] where follows since and hence .the equality in follows by using the substitution .changing into polar coordinates , the integral can be written as where follows since . since and , it follows that . for rayleigh fading , from the definition of the success probability and theorem [ thm : main ] , the laplace transform of behaves as for large . 
hence using the tauberian theorem in , we can infer that from theorem [ thm : main ] , the gain immediately follows . for an arbitrary simple stationary point process with efir given in def .[ def : efir ] , the asymptotic gain at relative to the ppp is from theorem [ thm : main ] , we have that the constant in is given by . follows from lemma [ lem : efir_ppp ] as .the laplace transform of the interference in for general point processes can be expressed as ,\end{aligned}\ ] ] where is the probability generating functional with respect to the reduced palm measure and is the laplace transform of the fading distribution .[ rayleigh fading ] [ cor : ray ] with rayleigh fading , the expected fading - to - interference ratio simplifies to \dd x\right)^{1/\delta},\ ] ] where with rayleigh fading , the power fading coefficients are exponential , , . from , we have and the result follows from the definition of the reduced probability generating functional . for rayleigh fading , the fact that as was derived in ( * ? ? ?* thm . 2 ) .while theorem [ thm : main ] shows that , it is not clear , if the scaling is mainly contributed by the received signal strength or the interference .intuitively , since an infinite network is considered , the event of the interference being small is negligible and hence for large , the event is mainly determined by the random variable .this is in fact true as is shown in the next lemma . for all stationary point processes and arbitrary fading ,the tail of the ccdf of the desired signal strength is the cdf of the distance to the nearest bs is for all stationary point processes .hence .\end{aligned}\ ] ] so the tail of the received signal power is of the same order , and the interference and the fading only affect the pre - constant . in the poisson case with rayleigh fading, the same holds near .if for the fading cdf , , , for the ppp , _ so on both ends of the sir distribution , the interference only affects the pre - constant . _we now explore the tail of the distribution to the maximum sir seen by the typical user for exponential .assume that the typical user connects to the bs that provides the _ instantaneously _ strongest sir ( as opposed to the strongest sir _ on average _ as before ) .also assume that .let denote the sir between the bs at and the user at the origin .then \dd x\\ & = \lambda\theta ^{-\delta } \int_{\r^2 } \calg^{!}_o [ \delta(x,\cdot)]\dd x.\end{aligned}\ ] ] from the above we observe that ( for exponential fading ) , which shows that the tail with the maximum connectivity coincides with the nearest neighbor connectivity . for a general , the pre - constantis given by hence , when the bss are distributed as a poisson point process , in particular when , the success probability behaves like .the laplace transform of the interference in a ppp is \gamma(1-\delta)s^\delta).\ ] ] from , we have &= \frac{1}{\gamma(\delta)}\int_0^\infty e^{-\lambda \pi \e[h^\delta]\gamma(1-\delta)s^\delta } s^{-1+\delta}\dd s \\ & = \frac{1}{\lambda \pi \e(h^\delta)\gamma(1-\delta)\gamma(1+\delta)}=\frac{\sinc(\delta)}{\lambda\pi \e(h^\delta)}.\end{aligned}\ ] ] hence from theorem [ thm : main ] this shows that the tail distribution in a ppp network does not depend on the fading distribution .let be iid uniform random variables in $ ] .the unit intensity ( square ) lattice point process is defined as . 
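before bounding the lattice laplace transform analytically, here is a quick monte carlo sketch (python/numpy, our own, with illustrative truncation and sample sizes) of the square-lattice sir tail: the typical user sits at the origin of a randomly translated unit lattice, fading is rayleigh, and theorem [thm:main] predicts that theta^delta p(sir > theta) flattens to a constant, which can be compared with the bounds derived next:

```python
import numpy as np

rng = np.random.default_rng(1)

def lattice_sir(n_samples=100_000, alpha=4.0, m=8):
    """downlink sir of the typical user in a randomly translated unit square
    lattice with nearest-bs association and rayleigh (exponential) fading.
    the lattice is truncated to (2m+1)^2 bss, ample for alpha = 4."""
    grid = np.mgrid[-m:m + 1, -m:m + 1].reshape(2, -1).T.astype(float)
    out = np.empty(n_samples)
    for i in range(n_samples):
        u = rng.random(2)                       # random lattice translation
        d = np.linalg.norm(grid + u, axis=1)    # distances from user at origin
        pw = rng.exponential(size=d.size) * d ** (-alpha)
        k = int(np.argmin(d))                   # serving bs = nearest bs
        out[i] = pw[k] / (pw.sum() - pw[k])
    return out

alpha, delta = 4.0, 0.5
sir = lattice_sir()
for th_db in (10, 20, 30):
    th = 10.0 ** (th_db / 10.0)
    est = th ** delta * np.mean(sir > th)       # should approach a constant
    print(th_db, "db: theta^delta * ccdf ~", round(est, 3))
```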
for this lattice , with rayleigh fading ,the laplace transform of the interference is bounded as where is the epstein zeta function , is the riemann zeta function , and is the dirichlet beta function .hence from the upper bound equals , and it follows that for rayleigh fading , for the square lattice point process with rayleigh fading and . the asymptote ( dashed line ) is and is tight as . ] as increases ( ) , the upper and lower bounds approach each other and thus both bounds get tight .the success probability multiplied by , the asymptote and its bounds for a square lattice process are plotted in figure [ fig : lattice ] for .we observe that the lower bound , which is 1.29 , is indeed a good approximation to the numerically obtained value , and that for db , the ccdf is already quite close to the asymptote. for the square and triangular lattices , shows the gain as a function of and the asymptotic gains and for rayleigh fading .interestingly , the behavior of the gap is not monotone .it decreases first and then ( re)increases to .it appears that . if this holds in general , a shift by the maximum of the two asymptotic gains always results in an upper bound on the sir ccdf .shows the dependence of and on . as pointed out in subs .[ sec : insens ] , is very insensitive to . appears to increase slightly and linearly with in this range . and ( linear scale ) for square and triangular lattices for rayleigh fading as a function of . ]determinantal ( fermion ) point processes ( dpps ) exhibit repulsion and thus can be used to model the fact that bss have a minimum separation .the kernel of the dpp is denoted by and due to stationarity is of the form .its determinants yield the product densities of the dpp , hence the name .the reduced palm measure pertaining to a dpp with kernel is defined as whenever .let denote the kernel associated with the reduced palm distribution of the dpp process .the reduced probability generating functional for a dpp is given by \triangleq \e^{!}_o\left[\prod_{x\in \phi } f(x)\right]=\detf(\mathbf{1}-(1-f)k^o ) , \label{eq : pgfl_dpp}\end{aligned}\ ] ] where is the fredholm determinant and is the identity operator .the next lemma characterizes the efir a general dpp with rayleigh fading .when the bss are distributed as a stationary dpp , the efir with rayleigh fading is follows from corollary [ cor : ray ] and ._ ginibre point processes : _ ginibre point processes ( gpps ) are determinantal point processes with density and kernel using the properties of gpps , it can be shown that from which can be evaluated using .for the gpp for rayleigh fading with .the asymptote ( dashed line ) is at . ] in , the scaled success probability and the asymptote are plotted as a function of for the gpp .we observe a close match even for modest values of .shows the simulated values of the gains and for the gpp as a function of the path loss exponent . for all values of , while . and for the gpp with rayleigh fading as a function of .this paper established that the asymptotics of the sir ccdf ( or success probability ) for arbitrary stationary cellular models are of the form for a fading cdf , . both constants and depend on the path loss exponent and the point process model , and also depends on the fading statistics . depending on the point process fading _ may _ also affect . is related to the mean interference - to - signal - ratio ( misr ) . for , , and for , depends on the generalized misr . is related to the expected fading - to - interference ratio ( efir ) through . 
for the ppp , .the study of the misr is enabled by the relative distance process , which is a novel type of point process that fully captures the sir statistics .a comparison of and shows that a horizontal shift of the sir distribution of the ppp by provides an excellent approximation of the entire sir distribution of an arbitrary stationary point process .for all the point process models investigated so far ( which were all repulsive and thus more regular than the ppp ) , the gains relative to the ppp are between and about db , so the shifts are relatively modest .higher gains can be achieved using advanced transmission techniques , including adaptive frequency reuse , bs cooperation , mimo , or interference cancellation . as long as the diversity gain of the network architectureis known and the ( generalized ) misr can be calculated ( or simulated ) , the asappp method can be applied to arbitrary cellular architectures .such extensions will be considered in future work .a generalization to heterogeneous networks ( hetnets ) is proposed in .the method can be expected to be applicable whenever the misr is finite .this excludes networks where interferers can be arbitrarily close to the receiver under consideration while the intended transmitter is further away , such as poisson bipolar networks .x. zhang and m. haenggi , `` a stochastic geometry analysis of inter - cell interference coordination and intra - cell diversity , '' _ ieee transactions on wireless communications _ , vol . 13 , no . 12 , pp .66556669 , dec . 2014 .a. guo and m. haenggi , `` asymptotic deployment gain : a simple approach to characterize the sinr distribution in general cellular networks , '' _ ieee transactions on communications _ , vol . 63 , no . 3 , pp .962976 , mar .2015 . , `` asappp : a simple approximative analysis framework for heterogeneous cellular networks , '' dec .2014 , keynote presentation at the 2014 workshop on heterogeneous and small cell networks ( hetsnets14 ) .available at http://www.nd.edu/~mhaenggi/talks/hetsnets14.pdf .b. blaszczyszyn and h. p. keeler , `` studying the sinr process of the typical user in poisson networks by using its factorial moment measures , '' _ ieee transactions on information theory _ , vol .61 , pp . 67746794 , dec .2015 .s. t. veetil , k. kuchi , a. k. krishnaswamy , and r. k. ganti , `` coverage and rate in cellular networks with multi - user spatial multiplexing , '' in _ 2013 ieee international conference on communications ( icc13 ) _ , budapest , hungary , jun .2013 .m. haenggi and r. k. ganti , `` interference in large wireless networks , '' _ foundations and trends in networking _ , vol . 3 , no . 2 ,pp . 127248 , 2008 , available at http://www.nd.edu/~mhaenggi/pubs/now.pdf .r. giacomelli , r. k. ganti , and m. haenggi , `` outage probability of general ad hoc networks in the high - reliability regime , '' _ ieee / acm transactions on networking _ , vol .19 , no . 4 , pp .11511163 , aug .j. b. hough , m. krishnapur , y. peres , and b. virg , _ zeros of gaussian analytic functions and determinantal point processes _ , ser .university lecture series 51.1em plus 0.5em minus 0.4emamerican mathematical society , 2009 .h. wei , n. deng , w. zhou , and m. haenggi , `` a simple approximative approach to the sir analysis in general heterogeneous cellular networks , '' in _ ieee global communications conference ( globecom15 ) _ , san diego , ca , dec . 2015 .
it has recently been observed that the sir distributions of a variety of cellular network models and transmission techniques look very similar in shape. as a result, they are well approximated by a simple horizontal shift (or gain) of the distribution of the most tractable model, the poisson point process (ppp). to study and explain this behavior, this paper focuses on general single-tier network models with nearest-base-station association and studies the asymptotic gain both at 0 and at infinity. we show that the gain at 0 is determined by the so-called mean interference-to-signal ratio (misr) between the ppp and the network model under consideration, while the gain at infinity is determined by the expected fading-to-interference ratio (efir). the analysis of the misr is based on a novel type of point process, the so-called relative distance process, which is a one-dimensional point process on the unit interval [0,1] that fully determines the sir. a comparison of the gains at 0 and infinity shows that the gain at 0 indeed provides an excellent approximation for the entire sir distribution. moreover, the gain is mostly a function of the network geometry and barely depends on the path loss exponent and the fading. the results are illustrated using several examples of repulsive point processes. index terms: cellular networks, stochastic geometry, signal-to-interference ratio, poisson point processes.
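to make the horizontal-shift idea summarized above concrete, the following sketch shifts the ppp sir ccdf by a fixed gain in db. the closed form ps(theta) = 1 / 2F1(1, -delta; 1-delta; -theta) with delta = 2/alpha for the ppp with rayleigh fading and nearest-bs association is a standard result assumed here, and the 3 db gain is a placeholder rather than a value computed in the paper.

```python
# ASAPPP-style sketch: PPP SIR ccdf in closed form, plus a horizontal
# shift by a fixed gain (in dB) approximating a more regular deployment.
import numpy as np
from scipy.special import hyp2f1

def ppp_sir_ccdf(theta_db, alpha=4.0):
    """P(SIR > theta) for the PPP with Rayleigh fading (assumed closed form)."""
    d = 2.0 / alpha
    theta = 10 ** (theta_db / 10.0)
    return 1.0 / hyp2f1(1.0, -d, 1.0 - d, -theta)

def shifted_ccdf(theta_db, gain_db, alpha=4.0):
    """Shift the PPP curve left by the gain: the ASAPPP approximation."""
    return ppp_sir_ccdf(theta_db - gain_db, alpha)

G_dB = 3.0  # hypothetical gain of a more regular point process
for t in np.linspace(-10, 20, 7):
    print(f"{t:6.1f} dB  ppp {ppp_sir_ccdf(t):.3f}  shifted {shifted_ccdf(t, G_dB):.3f}")
```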
consider a sensor network in which a sensor measures a certain physical quantity over time the aim of the sensor is communicating a symbol - by - symbol processed version of the measured sequence to a receiver .as an example , each element can be obtained by quantizing or denoising , for to this end , based on the observation of and , the sensor communicates a message of bits to the receiver ( is the message rate in bits per source symbol ) .the receiver is endowed with sensing capabilities , and hence it can measure the physical quantity as well .however , as the receiver is located further away from the physical source , such measure may come with some delay , say for some . assuming that at time the decoder must put out an estimate of the source symbol by design constraints, it follows that the estimate can be made to be a function of the message and of the delayed side information ( see for an illustration ) . following related literature ( e.g. , ), we will refer to as the delay for simplicity .delay may or may not be known at the sensor .the situation described above can be illustrated schematically as in fig .[ fig0 ] for the case in which the delay is known at the encoder . in fig .[ fig0 ] , the encoder ( enc ) represents the sensor and the decoder ( dec ) the receiver .the decoder at time ( more precisely , ) has access to delayed _ side information _ with delay fig .[ fig2 ] accounts for a setting where the side information at the decoder , unbeknownst to the encoder , _ may _ be delayed by or not delayed , where the first case is modelled by decoder 1 and the second by decoder 2 .note that , in the latter case , the receiver has available the sequence at time . for generality , in the setting in fig .[ fig2 ] , we further assume that the encoder is allowed to send additional information in the form of a message of bits when the side information is not delayed. this can be justified in the sensor example mentioned above , as a non - delayed side information may entails that the receiver is closer to the transmitter and is thus able to decode an additional message of rate ( bits / source symbol ) . to start ,let us first assume that sequences and are _ memoryless sources _ so that the entries ( ) are arbitrarily correlated for a given index but independent identically distributed ( i.i.d . ) for different to streamline the discussion , the following lemma summarizes the optimal trade - off between rate and distortion , as measured by a distortion metric , for the point - to - point setting of fig .[ fig0 ] with memoryless sources .similar conclusions apply for the more general set - up of fig .[ fig2 ] . for memoryless source , and zero delay , i.e. , , the rate - distortion function for the point - to - point system in fig .[ fig0 ] is given by the conditional rate - distortion function \leq d}}}i(x;z|y).\ ] ] this result remains unchanged even if the decoder has access to non - causal side information , i.e. , if the reconstruction can be based on the entire sequence , rather than only . instead , for strictly positive delay , the rate - distortion function is the same as if there was no side information , namely \ensuremath{\leq}d}}}i(x;z) ] , such that for all for .as explained below , the subscript `` 1 '' in indicates that denotes one - step transition probabilities . the random process , , is a stationary and ergodic markov chain with transition probability =w_{1}(a|b). 
] and also the -step transition probability \triangleq w_{k}(a|b), ] and ; ] , which , at each time map message or rate [ bits / source symbol ] , and the delayed side information into the estimate ; ( _ iii _ ) a sequence of decoding function for decoder 2 \times[1,2^{n\delta r}]\times\mathcal{y}^{i}\rightarrow\mathcal{z}_{2}\label{eq : decoder2}\ ] ] for ] as the interval ] if . and implicitly considered to be rounded up to the nearest larger integer . ] encoding / decoding functions ( [ eq : encoder])-([eq : decoder2 ] ) must satisfy the distortion constraints \leq d_{j},\text { for } j=1,2.\label{dist constraints}\ ] ] note that these constraints are fairly general in that they allow to impose not only requirements on the lossy reconstruction of or ( obtained by setting independent of or respectively ) , but also on some function of both and ( by setting to be dependent on such function of ( ) ) . given a delay , for a distortion pair ( ), we say that rate pair ( ) is achievable if , for every and sufficiently large , there exists a code .we refer to the closure of the set of all achievable rates for a given distortion pair ( ) and delay as the _ rate - distortion region _ . from the general description above for the setting of fig .[ fig2 ] , the special case of fig .[ fig0 ] is produced by neglecting the presence of decoder 2 , or equivalently by choosing . in this case , the rate - distortion region is fully characterized by a function as .function hence characterizes the infimum of rates for which the pair is achievable , and is referred to as the _ rate - distortion function _ for the setting of fig .[ fig0 ] . for the special case of the model in fig .[ fig2 ] in which , we define the rate - distortion function in a similar way . _ notation _ : for integer with , we define ; if instead we set .we will also write for for simplicity of notation . given a sequence ] we define sequence as ] , and or for ] , it outputs a string of bits which is a function of and .encoding is constrained so that the code for each ( ) is prefix - free .the decoder , based on delayed side information , can then uniquely decode each codeword as soon as it is received .following the considerations in ( * ? ? ?* ; * ? ? ?iv ) , it is easy to verify that rate ( and , more generally , ( [ eq : directed info ] ) ) is also the infimum of the average rate in bits / source symbol required by such code .moreover , it is possible to construct universal context - based compression strategies by adapting the approach in .we refer to sec .[ sec : examples ] for some examples that further illustrate some implications of proposition 1 ., for ( symbols corresponding to out - of - range indices are set to zero).,width=345 ] ( achievability ) here we propose a coding scheme that achieves rate ( [ lossless ] ) .the basic idea is a non - trivial extension of the approach discussed in ( * ? ? ?* ; * ? ? ?* remark 3 , p. 5227 ) and is described as follows . a block diagram is shown in fig .[ figmux ] for encoder ( fig .[ figmux]-(a ) ) and decoder ( fig .[ figmux]-(b ) ) .we first describe the _ encoder , _ which is illustrated in fig .[ figmux]-(a ) . 
to encode sequences we first partition the interval ] , for all and .every such subinterval is defined as \text { and } y_{i - d}=\tilde{y},\text { } x_{i - d+1}^{i-1}=\tilde{x}^{d-1}\}.\label{eq : i(xy)}\ ] ] in words , the subinterval contains all symbol indices such that the corresponding delayed side information available at the decoder is and the previous samples in are .we refer to the value of the tuple ( ) as the _ context _ of sample ., this definition of context is consistent with the conventional one given in when specialized to markov processes .see also remark [ rem : consider - a - variable - length ] . ] for the out - of - range indices ] .[ figillustration ] illustrates the definitions at hand for . as a result of the partition described above , the encoder `` demultiplexes '' sequence into sequences , one for each possible context ( .this demultiplexing operation , which is controlled by the previous values of source and side information , is performed in fig .[ figmux]-(a ) by the block labelled as `` demux '' , and an example of its operation is shown in fig .[ figillustration ] . by the ergodicity of process and , for every and all sufficiently large , the length of any sequence is guaranteed to be less than symbols with probability arbitrarily close to one .this because the length of the sequence equals the number of occurrences of the context ( ) and by birkhoff s ergodic theorem ( see ( * ? ? ?16.8 ) ) . in particular, for any we can find an such that \leq\frac{\epsilon}{2|\mathcal{x}|^{d-1}|\mathcal{y}|},\label{eq : error_1}\ ] ] where we have defined the `` error '' event each sequence is encoded by a separate encoder , labelled as `` enc '' in fig . [figmux]-(a ) . in casethe cardinality does not exceed ( i.e. , the `` error '' event does not occur ) , the encoder compresses sequence using an entropy encoder , as explained below . if the cardinality condition is instead not satisfied ( i.e. , is realized ) , then an arbitrary bit sequence of length , to be specified below , is selected by the encoder `` enc '' .the entropy encoder can be implemented in different ways , e.g. , using typicality or huffman coding ( see , e.g. , ) . herewe consider a typicality - based encoder .note that the entries of each sequence are i.i.d . with distribution , since conditioning on the context makes the random variables independent . as it is standard practice , the entropy encoder assigns a distinct label to all -typical sequences with respect to such distribution , and an arbitrary label to non - typical sequences . from the asymptotic equipartion property ( aep ), we can choose sufficiently large so that ( see , e.g. , ) \leq\frac{\epsilon}{2|\mathcal{x}|^{d-1}|\mathcal{y}|},\label{eq : error_2}\ ] ] where we have defined the `` error '' event moreover , by the aep , a rate in bits per source symbol of is sufficient for the entropy encoder to label all -typical sequences . from the discussion above, it follows that the proposed scheme encodes each sequence with bits . 
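a minimal sketch of the context-based demultiplexing just described, with 0-based indexing and illustrative binary sequences; out-of-range samples are set to zero as in the text:

```python
# demultiplexer sketch: group the indices of x by the context
# (y_{i-d}, x_{i-d+1}, ..., x_{i-1}) and emit one subsequence per context.
from collections import defaultdict

def demux(x, y, d):
    buckets = defaultdict(list)  # context -> subsequence of x
    for i in range(len(x)):
        y_ctx = y[i - d] if i - d >= 0 else 0
        x_prev = tuple(x[max(0, i - d + 1):i])            # previous d-1 samples
        x_prev = (0,) * (d - 1 - len(x_prev)) + x_prev    # zero-pad the start
        buckets[(y_ctx, x_prev)].append(x[i])
    return dict(buckets)

x = [1, 0, 0, 1, 1, 0, 1, 0]
y = [1, 1, 0, 1, 1, 0, 1, 1]
for ctx, seq in demux(x, y, d=2).items():
    print(ctx, seq)
```

the decoder can rebuild exactly the same partition sequentially, since at step i it already knows the delayed sample y_{i-d} and the previously decoded samples x_{i-d+1}, ..., x_{i-1}; this is what makes the multiplexing invertible.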
by concatenating the descriptions of all the sequences , we thus obtain that the overall rate of message for the scheme at hand is .the concatenation of the labels output by each entropy encoder is represented in fig .[ figmux]-(a ) by the block `` mux '' .we emphasize that encoder and decoder agree a priori on the order in which the descriptions of the different subsequences are concatenated .for instance , with reference to the example in fig .[ figillustration ] ( with ) , message can contain first the description of the sequence corresponding to , then , etc .we now describe the _ decoder , _ which is illustrated in fig . [ figmux]-(b ) . by undoing the multiplexing operation just described, the decoder , from the message , can recover the individual sequences through a simple demultiplexing operation for all contexts .this operation is represented by block `` demux '' in fig . [ figmux]-(b ) . to be precise , this demultiplexing is possible , unless the encoding `` error '' event takes place .in fact , occurrence of the `` error '' event implies that some of the sequences was not correctly encoded and hence can not be recovered at the decoder .the effect of such errors will be accounted for below .assume now that no error has taken place in the encoding . while the individual sequences can be recovered through the discussed demultiplexing operation, this does not imply that the decoder is also able to recover the original sequence .in fact , that decoder does not know a priori the partition : and of the interval ] , where we recall that is the sequence reconstructed at the decoder .moreover , the following inequality holds in general \geq\frac{1}{n}\sum\limits _{ i=1}^{n}\pr[x_{i}\neq z_{1i}].\label{eq : ineqperr}\ ] ] therefore ,we have \leq\epsilon ] into subintervals , namely for each , so that ( cf .( [ eq : i(xy ) ] ) ) \text { and } y_{i - d}=\tilde{y}\}.\label{eq : i(xy)-1}\ ] ] similar to sec .[ sub : proof - of - achievability ] , a different compression codebook is used for each such interval , and thus for each pair of `` demultiplexed '' subsequences . the compression of each pair of sequences is based on a test channel specifically , the corresponding codewords are generated i.i.d . according to the marginal distribution and compressionis done based on standard joint typicality arguments . by the covering lemma , compression of sequences into the corresponding reconstruction sequence rate bits per source symbol in each interval , and thus an overall rate following the same considerations as in sec .[ sub : proof - of - achievability ] .in particular , the encoder multiplexes the compression indices corresponding to the intervals to produce message .therefore , the latter only carries information about the individual sequences but not about the ordering of each entry within the overall sequence . based on the sequence produced in the first encoding phase described above , the encoder then performs also a finer partition of the interval ] and using different test channels in each subinterval ..,width=345 ] in this section , we consider two specific examples relative to the scenario in fig . 
[ fig0 ] .the first example consists of binary - alphabet sources , while the second applies the results derived above to ( continuous - alphabet ) gaussian sources .we focus on a distortion metric of the form that does not depend on in other words , the decoder is interested in reconstructing within some distortion .we note that , under this assumption , the rate ( [ cor : for - any - delay ] ) equals the simpler expression with mutual informations evaluated with respect to the joint distribution where minimization is done over all distributions such that \leq d_{1}. ] , well known from markov chain theory ( see , e.g. , ) . ]note that this is a logistic map such that for large .we also set , consistently with the convention adopted in the rest of the paper .finally , we assume that with `` '' being the modulo-2 sum and being i.i.d .binary variables , independent of , with , .we adopt the hamming distortion .we start by showing in fig .[ fig4 ] the rate obtained from proposition 1 corresponding to zero distortion ( ) versus the delay for different values of and for .note that the value of measure the `` memory '' of the process : for small , the process tends to keep its current value , while for , the values of are i.i.d .. for , we have , irrespective of the value of , where we have defined the binary entropy function .instead , for increasingly large , the rate tends to the entropy rate .this can be calculated numerically to arbitrary precision following ( * ? ? ?* sec . 4.5 ) . note that a larger memory , i.e. , a smaller leads to smaller required rate for all values of .[ fig5 ] shows the rate for versus for different values of . for reference, we also show the performance with no side information , i.e. , . for ,the source is i.i.d . and delayed side information is useless in the sense that ( remark [ rem : is - delayed - side ] ) .moreover , for , we have , so that is a markov chain and the problem becomes one of lossless source coding with feedforward . from remark[ rem : is - delayed - side ] , we know that delayed side information is useless also in this case , as . for intermediate values of ,side information is generally useful , unless the delay is too large .we now turn to the case where the distortion is generally non - zero . to this end , we evaluate the achievable rate ( [ eq : simpler ] ) in appendix [ sec : proof - of-()- ( ) ] obtaining for and otherwise .in ( [ eq : ex1])-([eq : ex2 ] ) we have defined . recall that rate has been proved to coincide with the rate - distortion function only for ( corollary [ cor : for - any - delay ] ) . as a final remark ,we use the result derived above to discuss the advantages of delayed side information . to this end ,set so that and the problem becomes one of source coding with feedforward .for , result ( [ eq : ex1])-([eq : ex2 ] ) recovers the calculation in ( * ? ? ?* example 2 ) ( see also ) , which states that the rate - distortion function for the markov source at hand with feedforward ( ) is for and otherwise . from ( see also ) , it is known that the rate - distortion function of a markov source without feedforward , i.e. , , is equal to ( [ slb ] ) only for smaller than a critical value , but is otherwise larger .this demonstrates that feedforward , unlike in the lossless setting discussed above , can be useful in the lossy case for distortion levels sufficiently large , as first discussed in . 
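the lossless rate in the binary example above is the conditional entropy of the current source symbol given the context (y_{i-d}, x_{i-d+1}^{i-1}). the monte carlo sketch below estimates that quantity for an assumed stand-in model, a binary symmetric markov source observed through a bsc; the exact source model and parameter values behind the paper's figures are not recoverable from the extracted text, so the numbers printed here illustrate only the formula.

```python
# monte carlo estimate of H(X_i | Y_{i-d}, X_{i-d+1}^{i-1}) from prop. 1,
# for an assumed toy model: X is a binary symmetric markov chain with flip
# probability p, and Y is X observed through a BSC(q).
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def simulate(n, p, q):
    x = np.zeros(n, dtype=int)
    for i in range(1, n):
        x[i] = x[i - 1] ^ int(rng.random() < p)   # markov memory: flip w.p. p
    y = x ^ (rng.random(n) < q).astype(int)       # BSC(q) side information
    return x, y

def conditional_entropy(x, y, d):
    joint, ctx = Counter(), Counter()
    for i in range(d, len(x)):
        c = (y[i - d],) + tuple(x[i - d + 1:i])   # context of sample i
        joint[(c, x[i])] += 1
        ctx[c] += 1
    n = sum(joint.values())
    return -sum(v / n * np.log2(v / ctx[c]) for (c, _), v in joint.items())

x, y = simulate(300_000, p=0.2, q=0.1)
for d in (1, 2, 3):
    print(d, round(conditional_entropy(x, y, d), 4))  # bits per source symbol
```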
for lossless reconstruction for the set - up of fig .[ fig0 ] with binary sources versus delay ( ).,width=432 ] for lossless reconstruction for the set - up of fig .[ fig0 ] with binary sources versus parameter ( ).,width=432 ] we now assume that is a gauss - markov process with zero - mean , power =1 ] ( so that =\rho^{d} ] .it follows that the first inequality ( [ eq : app13a ] ) follows from the fact that is a function of by ( [ eq : decoder ] ) and by conditioning reduces entropy ; the second inequality ( [ eq : app14 ] ) follows from fano s inequality and the third from ( [ eq : app13b ] ) .finally , from ( [ eq : app11]),([eq : app12]),([eq : app13]),([eq : app16 ] ) we obtain \\ & -b-\sum_{i = d+1}^{n}\left[h(y_{i - d}|y^{i - d-1}x^{i-1})+n\delta(\epsilon)\right]\\ & = a - b+\sum_{i = d+1}^{n}h(x_{i}|y_{i - d}x_{i - d+1}^{i-1})+n\delta(\epsilon),\end{aligned}\ ] ] which concludes the proof .we prove the converse for proposition [ pro : for - any - delay ] , since proposition [ pro:2 ] follows as a special case .we focus on , since the proof for can be obtained in a similar fashion . to this end , fix a code as defined in sec .[ sec : system - model ] .using the definition of encoder ( [ eq : encoder ] ) and decoder ( [ eq : decoder ] ) we have where we have defined ] and defining random variables , , and , and by leveraging the convexity of the mutual informations in ( [ eq : app32 ] ) and ( [ eq : app4 ] ) with respect to the distribution .here we prove that ( [ eq : ex1])-([eq : ex2 ] ) equals ( [ eq : simpler ] ) for the binary hidden markov model of sec .[ sub : binary - hidden - markov ] .first , for , we can simply set to obtain and \leq d_{1} ] , we have the following inequalities where the third line follows by conditioning decreases entropy and the last line from the fact that is increasing in for .this lower bound can be achieved in ( [ eq : simpler ] ) by choosing the test channel so that can be written as where is binary with and independent of and , and is also independent of . to obtain , we need to impose that the joint distribution is preserved by the given choice of . to this end , note that the joint distribution is such that we can write , where is binary and independent of , with .therefore , preservation of is guaranteed if the equality =p_{z_{1}}(1)*d_{1}=\varepsilon^{(d)}*q ] and independent of and , and is also zero - mean gaussian and independent of . to obtain ] .therefore , preservation of the joint distribution of and is guaranteed if the equality +d_{1}=1-\rho^{2d}+\sigma_{n}^{2} ] , due to the assumed inequality on the distortion .references r. venkataramanan and s. s. pradhan , source coding with feed - forward : rate - distortion theorems and error exponents for a general source , _ ieee trans .inform . theory _2154 - 2179 , jun . 2007 .r. venkataramanan and s. s. pradhan , directed information for communication problems with side - information and feedback / feed - forward , in _ proc .of the 43rd annual allerton conference _ , monticello , il , 2005 .r. venkataramanan and s. s. pradhan , `` on computing the feedback capacity of channels and the feed - forward rate - distortion function of sources , '' _ ieee trans .58 , no . 7 , pp . 18891896 , jul .2010 .s. s. pradhan , `` on the role of feedforward in gaussian sources : point - to - point source coding and multiple description source coding , '' _ ieee trans .inform . theory _1 , pp . 331 - 349 , jan .2007 .h. permuter , y .- h .kim and t. 
weissman, ``interpretations of directed information in portfolio theory, data compression, and hypothesis testing,'' _ieee trans. inform. theory_, vol. 57, no. 6, pp. 3248-3259, jun. 2011. d. vasudevan, c. tian, and s. diggavi, ``lossy source coding for a cascade communication system with side-informations,'' in _proc. 44th annual allerton conference on communication, control, and computing_, sept. 2006.
for memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. however, this is not the case for sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. in this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information, which is available at the encoder, are acquired at the decoder. assuming a hidden markov model for the source sequences, at first a single-letter characterization is given for the set-up where the side information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be asymptotically lossless. then, with delay equal to zero or one source symbol, a single-letter characterization of the rate-distortion region is given for the case where, unbeknownst to the encoder, the side information may be delayed or not, and additional information can be received by the decoder when the side information is not delayed. finally, examples for binary and gaussian sources are provided. index terms: rate-distortion function, hidden markov model, markov gaussian process, multiplexing, strictly causal side information, causal conditioning.
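as a numerical companion to the gauss-markov example in the body above: assuming a unit-power ar(1) source with E[x_i x_{i-d}] = rho^d and side information y_{i-d} = x_{i-d} + n, with n gaussian of variance sigma_n^2 and independent of the source, standard gaussian conditioning gives var(x_i | y_{i-d}) = 1 - rho^(2d)/(1 + sigma_n^2), and the rate is the gaussian rate-distortion function of that conditional variance. this model and the parameter values are assumptions of the sketch, not expressions copied from the paper.

```python
# rate as a function of the delay d for the assumed gauss-markov model:
# R(D) = max(0, 0.5 * log2(Var(x_i | y_{i-d}) / D)).
import numpy as np

def rate(D, rho=0.9, sigma_n2=0.1, d=1):
    cond_var = 1.0 - rho ** (2 * d) / (1.0 + sigma_n2)  # Var(x_i | y_{i-d})
    return max(0.0, 0.5 * np.log2(cond_var / D))

for d in range(0, 5):
    print(d, round(rate(D=0.05, d=d), 3))  # the rate grows with the delay d
```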
it is well known that the anomalous diffusion processes in various real - world complex systems can be well characterized by using fractional order anomalous diffusion models ( ; ) after the introduction of continuous time random walks ( ctrws ) in + .regarded as a natural extension of the brownian motions , the ctrws are proven to be useful in deriving the time or space fractional order diffusion system by allowing the incorporation of waiting time probability density function ( pdf ) and general jump pdf ( ; ; ) .for example , if the particles are supposed to jump at fixed time intervals with a incorporating waiting times , the particles then undergo a sub - diffusion process and the time fractional diffusion system is introduced to efficiently describe this process . as stated in and + , instead of analyzing a system by purely theoretical viewpoint ( for example ,see ) , using the notions of sensors and actuators to investigate the structures and properties of systems can allow us to understand the system better and consequently enable us to steer the real - world system in a better way .this situation happens in many real dynamic systems , for example the optimal control of pest spreading ( ) , the flow through porous media microscopic process ( ) , or the swarm of robots moving through dense forest ( ) etc .it is now widely believed that fractional order controls can offer better performance not achievable before using integer order controls systems ( ; ) .this is the reason why the fractional order models are superior in comparison with the integer order models .moreover , it is worth noting that in many real dynamic systems , the regional observation problem occurs naturally when one is interested in the knowledge of the states in a subregion of the spatial domain ( ; ; ) .focusing on regional observations would allow for a reduction in the number of physical sensors and offer the potential to reduce computational requirements in some cases .in addition , it should be pointed out that the concepts of regional observability are of great help to reconstruct the initial vector for those non - observable system when we are interested in the knowledge of the initial vector only in a critical subregion of the system domain . motivated by the argument above , in this paper , by considering the locations , number and spatial distributions of sensors , our goal is to study the regional gradient observability of the riemann - liouville time fractional order diffusion process , which is introduced to better characterize those sub - diffusion processes ( . ) more precisely ,consider the problem below and suppose that the initial vector and its gradient are unknown and the measurements are given by using output functions ( depending on the number and structure of sensors ) .the purpose here is to reconstruct the initial gradient vector on a given subregion of the whole domain of interest .we also explore the characterizations of strategic sensors when the system is regional gradient observability . moreover , there are many applications of gradient modeling .for example , the concentration regulation of a substrate at the upper bottom of a biological reactor sub - diffusion process , which is observed between two levels ( see fig . [ fig1 ] ) ; anther example the energy exchange problem between a casting plasma on a plane target which is perpendicular to the direction of the flow sub - diffusion process from measurements carried out by internal thermocouples ( ) . 
for richer background on gradient modeling, we refer the reader to and . to the best of our knowledge ,no results are available on this topic and we hope that the results obtained here could provide some insights into the control theory analysis of the fractional order diffusion systems and be useful in real - life applications .the rest contents of the present paper are structured as follows .the problem studied and some preliminaries are introduced in the next section and in section we focus on the characteristic of the strategic sensors .an approach which enables us to reconstruct the initial gradient vector of the system under consideration in the considered subregion is addressed in section .several application examples are worked out in the end for illustrations .in this section , we formulate the regional gradient observability problems for the riemann - liouville time fractional order diffusion system and then introduce some preliminary results to be used thereafter .let be a connected , open bounded subset of with lipschitz continuous boundary and consider the following abstract time fractional diffusion process : ,~0<\alpha\leq 1 , \\\lim\limits_{t\to 0^+ } { } _ 0i^{1-\alpha}_{t}y(t)=y_0\mbox { supposed to be unknown , } \end{array}\right\}\ ] ] where generates a strongly continuous semigroup on the hilbert space , is a uniformly elliptic operator , , and denote the riemann - liouville fractional order derivative and integral with respect to time , respectively , given by ( and ) the measurements ( possibly unbounded ) are given depending on the number and the structure of the sensors with dense domain in and range in as follows : where is the finite number of sensors .let and both the initial vector and its gradient are supposed to be unknown .the system admits a unique mild solution given by ( and ) : ,\end{aligned}\ ] ] where is the strongly continuous semigroup generated by , and is a probability density function defined by satisfying ( ) let be a given region of positive lebesgue measure and let then the regional gradient observability problem is concerned with the directly reconstruction of the initial gradient vector in . consider the following two restriction mappings their adjoint operators are ,respectively , denoted by and moreover , by eq . , the output function gives where . to obtain the adjoint operator of , we have + * case 1 . is bounded ( e.g. zone sensors ) * + denote the adjoint operator of and by and , respectively .since is a bounded operator ( ) , we get that the adjoint operator of can be given by * case 2 . is unbounded ( e.g. pointwise sensors ) * + note that is densely defined , then exists . to state our results ,the following two assumptions are needed : can be extended to a bounded linear operator in ; + exists and .extend by one has based on the hahn - banach theorem , similar to the argument in , it is possible to derive the duality theorems as in and with the above two assumptions .then the adjoint operator of can be defined as let be an operator defined by we see that the adjoint of the gradient operating on a connected , open bounded subset with a lipschitz continuous boundary is minus the divergence operator , i.e. 
, is given by ( ) where solves the following dirichlet problem similar to the discussion in ; and , it follows that the necessary and sufficient condition for the regional weak observability of the system described by and in at time is that and we see the following definition .[ rgodef ] the system with output function is said to be regional weak gradient observability in at time if and only if [ proposition1 ] there is an equivalence among the following properties : the system is regional weak gradient observability in at time ; + + the operator is positive definite .* by definition , it is obvious to know that as for in fact , we have let , which then allows us to complete the proof . when , the system is deduced to the normal diffusion process as considered in , which is a particular case of our results . a system which is gradient observable on is gradient observable on for every moreover , the definition is also valid for the case when and there exist systems that are not gradient observable but regionally gradient observable .this can be illustrated by the following example .let \times [ 0,1 ] \subseteq \mathbf{r}^2 ] . , \\y(\xi,\eta , t)= 0~\mbox { on } \partial\omega \times [ 0,b],\\ \lim\limits_{t\to 0^+}{}_0i^{1-\alpha}_{t}y(x_1,x_2,t)=y_0(x_1,x_2)~\mbox { in } \omega \end{array}\right.\end{aligned}\ ] ] with the output functions where and is the dirac delta function on the real number line that is zero everywhere except at zero . according to the problem , . then the eigenvalue , eigenvector and the semigroup on generated by are respectively , and moreover , one has ( ) where is the generalized mittag - leffler function in two parameters .next , we show that there is a gradient vector , which is not gradient observable in the whole domain but gradient observable in a subregion .let . by eq . , we obtain that , then however , let \times[0,1/6], ] , , the following formula holds {t = a}^{t = b}- \int_a^b{g(t){}_t^cd_b^{\alpha}f(t)}dt .\end{aligned}\ ] ] ( )[lem1 ] let be an open set and be the class of infinitely differentiable functions on with compact support in and be such that then almost everywhere in section is devoted to addressing the characteristic of sensors when the studied system is regionally gradient observable in a given subregion of the whole domain .firstly , we recall that a sensor can be defined by a couple where is the support of the sensor and is its spatial distribution . for example , if with and where is the dirac delta function in at time that is zero everywhere except at the sensor is called pointwise sensor . in this case the operator is unbounded and the output function can be written as it is called zone sensor when and .the output function is bounded and can be defined as follows : for more information on the structure characteristic and properties of sensors and actuators , we refer the reader to ( , , ) and the references cited therein . 
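the generalized (two-parameter) mittag-leffler function used in the example above can be evaluated by direct series summation for moderate arguments. a minimal sketch follows, with illustrative truncation depth and test values; robust evaluation for large arguments requires specialized algorithms.

```python
# E_{a,b}(z) = sum_{k>=0} z^k / Gamma(a*k + b): direct truncated series.
# adequate for moderate |z|; note math.gamma overflows past ~171, which
# bounds the usable truncation depth for a given a, b.
from math import gamma, exp

def mittag_leffler(z, a, b, K=100, tol=1e-16):
    s = 0.0
    for k in range(K):
        term = z ** k / gamma(a * k + b)
        s += term
        if abs(term) < tol * max(1.0, abs(s)):
            break
    return s

# sanity check: E_{1,1}(z) = exp(z)
print(mittag_leffler(1.0, 1, 1), exp(1.0))
# the kernel E_{alpha,alpha}(lambda_j * t**alpha) from the mild solution,
# with illustrative alpha, eigenvalue, and time:
alpha, lam, t = 0.7, -3.0, 0.5
print(mittag_leffler(lam * t ** alpha, alpha, alpha))
```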
next , to state our results , it is supposed that the measurements are made by sensors , where and , then can be rewritten as ,\\ y(\eta , t)=0~~\mbox { on } \partial \omega \times [ 0,b],\\ \lim\limits_{t\to 0^+ } { } _ 0i^{1-\alpha}_{t}y(x , t)=y_0(x)~\mbox { in } \omega \end{array}\right.\ ] ] with the measurements where .moreover , since the operator is a uniformly elliptic operator , for any , satisfies }{\left[y_1(\eta , t)\frac{\partial y_2(\eta , t)}{\partial v_a}-y_2(\eta , t ) \frac{\partial y_2(\eta , t)}{\partial v_{a^*}}\right]}dtd\eta,\end{aligned}\ ] ] where is the adjoint operator of .moreover , by , there exists a sequence , such that each is the eigenvalue of the operator with multiplicities and for each , is the orthonormal eigenfunction corresponding to , i.e. , where and is the inner product of space .then it follows that the strongly continuous semigroup on generated by can be expressed as the sequence is an orthonormal basis in and for any , it can be expressed as a sensor ( or a suite of sensors ) is said to be gradient if the observed system is regionally gradient observable in .[ lemma3.1 ] for any with , suppose that satisfies the following system ,\\ e(\eta , t)=0~~\mbox { on } \partial \omega \times [ 0,b ] , \\e(x , b)=0 ~~\mbox { in } \omega , \end{array}\right.\ ] ] where is the adjoint operator of and denotes the right - sided caputo fractional order derivative with respect to time of order ] , \frac{\partial\rho(x , t)}{\partial x_s}}dtdx\\ = -\int_q{a^*e(x , t)\frac{\partial\rho(x , t)}{\partial x_s}}dtdx\\{\kern 8pt } + \int_0^b \int_\omega { \sum\limits_{i=1}^p{p_{d_i}f_i(x)z_i(t ) } \frac{\partial\rho(x , t)}{\partial x_s } } dxdt .\end{array}\end{aligned}\ ] ] consider the fractional integration by parts in lemma , one has \frac{\partial\rho(x , t)}{\partial x_s}}dtdx\\= -\int_\omega{\left[\lim\limits_{t\to 0^+}{}_0i^{1-\alpha}_t\rho ( x , t)\right ] \frac{\partial e(x,0)}{\partial x_s}}dx\\{\kern 8pt}-\int_q{e(x , t)\left [ { } _ 0d^{\alpha}_t\frac{\partial\rho(x , t)}{\partial x_s}\right]}dtdx .\end{array}\end{aligned}\ ] ] then the boundary condition gives }{\partial x_s}\left[\lim\limits_{t\to 0^+}{}_0i^{1-\alpha}_te(x , t)\right]}dx\\\textcolor[rgb]{1.00,0.00,0.00}{= } \int_0^b \int_\omega { \sum\limits_{i=1}^p{p_{d_i}f_i(x)z_i(t ) } \frac{\partial\rho(x , t)}{\partial x_s } } dxdt , ~s=1,2,\cdots , n . \end{array}\end{aligned}\ ] ] thus , we have }{\partial x_s},p_{1\omega}^ * y_s\right)_{l^2(\omega)}\\ & = & \sum\limits_{s=1}^n\left(\frac { \partial \left[-e(x,0)\right]}{\partial x_s},\left[\lim\limits_{t\to 0^+}{}_0i^{1-\alpha}_te(x , t)\right]\right)_{l^2(\omega)}\\ & = & \sum\limits_{s=1}^n\sum\limits_{j=1}^{\infty}\sum\limits_{k=1}^{r_j}\sum\limits_{i=1}^{p}\int_0^b { e_{\alpha,\alpha}(\lambda_j\tau^{\alpha})z_i(\tau)}d\tau \left(\frac{\partial \xi_{jk}}{\partial x_s},p_{d_i}f_i\right ) y_{jks}.\end{aligned}\ ] ] by lemma , since is arbitrary , we see that the system is regionally gradient observable in at time if and only if i.e. , for any , one has where is a vector in . finally , since for all , we then show our proof by using the reductio and absurdum . necessity .if and and there exists a nonzero element with such that then we can find a nonzero vector satisfying this means that the sensors are not . sufficiency . on the contrary , if the sensors are not , i.e. , then there exists a nonzero element such that this allows us to complete the first conclusion of the theorem . 
in particular , when similar to the argument in , if and , there exists a nonzero vector satisfying then the sensors are not .moreover , if the sensors are not , there exists a nonzero element satisfying then if , it is sufficient to see that for all .the proof is complete .this section is focused on an approach , which allows us to reconstruct the initial gradient vector of the system in the method used here is hilbert uniqueness method ( hums ) introduced by , which can be considered as an extension of those given in .let be the set given by for any , there exists a function satisfying . consider the following system , \\\lim\limits_{t\to 0^+ } { } _ 0i^{1-\alpha}_t\varphi(t)=\tilde{g}^ * , \end{array}\right.\ ] ] which admits a unique solution \times \omega) ] controlled by the solution of the system .we then conclude that the regional gradient reconstruction problem is equivalent to solving the equation .[ theorem4.1 ] if is regionally gradient observable in at time , then has a unique solution and the initial gradient in subregion is equivalent to . by lemma , we see that is a norm of the space provided that the system is regionally gradient observable in at time .let the completion of with respect to the norm again by .by the theorem 1.1 in , to obtain the existence of the unique solution of problem , we only need to show that is coercive from to i.e. , there exists a constant such that indeed , for any we have then is coercive and has a unique solution , which is also the initial gradient to be estimated in the subregion at time .the proof is complete .note that if the riemann - liouville fractional derivative in system is replaced by a caputo fractional derivative , its unique mild solution will be given by ( ) .\end{aligned}\ ] ] we see that and then the lemma 7 fails .new lemmas similar to lemma 4 and lemma 7 are of great interest .besides , this challenge is also our interest now and we shall try our best to study it in our forthcoming papers .let \times[0,1] ] and the output functions are where the system is observed by one sensor and is bounded .moreover , we get that the eigenvalue , corresponding eigenvector of and the semigroup generated by are , and + respectively .then the multiplicity of the eigenvalues is one and let .we see that is not gradient observable on .however , [ prop12 ] the sensor is gradient strategic in if and only if where and * proof . *according to the argument above , we have .it then follows that {1 \times 1 } ] let and by theorem , then the necessary and sufficient condition for the sensor to be gradient strategic in is that + the proof is complete .let be a set defined by and for any , we see that + ^ 2}dt ] , is continuous and ( ) , we get that the assumption is satisfied .further , for any one has then the assumption holds . 
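a quick numerical reading of the strategic-sensor condition in the 1-d example above, under stated assumptions: with dirichlet eigenfunctions xi_j(x) = sqrt(2) sin(j*pi*x) on (0, 1) and a zone sensor with constant spatial distribution f = 1 on [a, b], the inner products of f with the eigenfunction derivatives reduce to the closed form sqrt(2)(sin(j*pi*b) - sin(j*pi*a)). the sensor placements are illustrative, and this check covers only finitely many modes of the simplified rank condition.

```python
# sketch: inner products (d xi_j / dx, f)_{L2(c)} for a 1-d zone sensor
# (c, f) with f = 1 on c = [a, b]; a vanishing entry means the sensor is
# blind to the corresponding gradient mode.
import numpy as np

def gradient_inner_products(a, b, J=10):
    j = np.arange(1, J + 1)
    # integral over [a, b] of xi_j'(x) = sqrt(2) * j * pi * cos(j*pi*x)
    return np.sqrt(2.0) * (np.sin(j * np.pi * b) - np.sin(j * np.pi * a))

c = gradient_inner_products(a=0.2, b=0.5)
print(np.round(c, 4))
print("first 10 modes all visible:", bool(np.all(np.abs(c) > 1e-12)))
# a zone symmetric about 1/2, here [0.25, 0.75], misses modes j = 4, 8, ...:
print(np.round(gradient_inner_products(0.25, 0.75), 4))
```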
by theorem ,similar to proposition , let and , we see that , there exists a subregion such that the sensor is gradient if and only if can imply .further , for any , by lemma if is regionally gradient observable , then ^ 2}dt\ ] ] defines a norm on .consider the following system ,\\ \psi(\eta , t)=0,~~(\eta , t)\in \partial\omega_2 \times [ 0,b],\\ \lim\limits_{t\to 0^+}{}_0i^{1-\alpha}_{t}\psi(x , t)=0 , ~~x\in \omega_2 .\end{array}\right.\ ] ] it follows from theorem that the equation has a unique solution in , which is also the initial gradient on + + * case 3 .filament sensors * + consider the case where the observer is located on the curve \times \{\sigma\}\subseteq \omega_2 ] is a norm on and by theorem , the equation has a unique solution in and on this paper , we investigate the regional gradient observability problem for the time fractional diffusion system with riemann - liouville fractional derivatie , which is motivated by many real world applications where the objective is to obtain useful information on the state gradient in a given subregion of the whole domain .we hope that the results here could provided some insights into the control theoretical analysis of fractional order systems .moreover , the results presented here can also be extended to complex fractional order dpss and various open questions are still under consideration .for example , the problem of state gradient control of fractional order dpss , regional observability of fractional order system with mobile sensors as well as the regional sensing configuration are of great interest . for more information on the potential topics related to fractional dpss, we refer the readers to and the references therein
this paper addresses, for the first time, the concepts of regional gradient observability for the riemann-liouville time fractional order diffusion system in a subregion of interest of the whole domain, without knowledge of the initial vector and its gradient. the riemann-liouville time fractional order diffusion system, which replaces the first-order time derivative of the normal diffusion system by a riemann-liouville time fractional order derivative of order 0<\alpha\leq 1, is used to characterize anomalous sub-diffusion processes. the characterizations of the strategic sensors under which the system under consideration is regionally gradient observable are explored. we then describe an approach leading to the reconstruction of the initial gradient in the considered subregion with zero residual gradient vector. finally, to illustrate the effectiveness of our results, we present several application examples where the sensors are zone, pointwise or filament ones. index terms: regional gradient observability; gradient reconstruction; time fractional diffusion process; strategic sensors.
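as a closing numerical note on the riemann-liouville time derivative used throughout this paper: the grunwald-letnikov scheme provides a simple discrete approximation of the fractional derivative of order 0<\alpha\leq 1. the scheme and the test function below are standard numerical fractional calculus material, assumed here rather than taken from the paper.

```python
# grunwald-letnikov approximation of the fractional derivative of order
# alpha at t = n*h, using the weights (-1)^k * binom(alpha, k); it agrees
# with the riemann-liouville derivative for sufficiently smooth functions.
import numpy as np
from scipy.special import binom, gamma

def gl_fractional_derivative(f_vals, alpha, h):
    """f_vals: samples f(0), f(h), ..., f(n*h); returns the value at n*h."""
    n = len(f_vals) - 1
    k = np.arange(n + 1)
    w = (-1.0) ** k * binom(alpha, k)          # GL weights
    return h ** (-alpha) * np.sum(w * np.asarray(f_vals)[::-1])

# check against the exact derivative of f(t) = t:
# D^alpha t = t^(1 - alpha) / Gamma(2 - alpha)
alpha, T, N = 0.5, 1.0, 2000
h = T / N
t = np.linspace(0, T, N + 1)
print(gl_fractional_derivative(t, alpha, h))
print(T ** (1 - alpha) / gamma(2 - alpha))     # exact value ~1.1284
```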
this paper studies learning problems of the following form . consider a finite , but potentially very large , collection of binary - valued functions defined on a domain . in this paper , will be called the _ hypothesis space _ and will be called the _ query space_. each is a mapping from to . throughout the paperwe will let denote the cardinality of .assume that the functions in are unique and that one function , , produces the correct binary labeling .it is assumed that is fixed but unknown , and the goal is to determine through as few queries from as possible . for each query , the value , possibly corrupted with independently distributed binary noise , is observed .the goal is to strategically select queries in a sequential fashion in order to identify as quickly as possible .conditions are established under which gbs ( and a noise - tolerant variant ) have a near - optimal query complexity .the main contributions of this paper are two - fold .first , incoherence and geometric relations between the pair are studied to bound the number of queries required by gbs .this leads to an easily verifiable sufficient condition that guarantees that gbs terminates with the correct hypothesis after no more than a constant times queries .second , noise - tolerant versions of gbs are proposed .the following noise model is considered .the binary response to a query is an independent realization of the random variable satisfying , where denotes the underlying probability measure . in other words ,the response to is only probably correct .if a query is repeated more than once , then each response is an independent realization of . a new algorithm based on a weighted ( soft - decision ) gbs procedureis shown to confidently identify after a constant times queries even in the presence of noise ( under the sufficient condition mentioned above ) .an agnostic algorithm that performs well even if is not in the hypothesis space is also proposed .the following notation will be used throughout the paper .the hypothesis space is a finite collection of binary - valued functions defined on a domain , which is called the query space .each is a mapping from to . for any subset , denotes the number of hypotheses in . the number of hypotheses in is denoted by .the efficiency of classic binary search is due to the fact at each step there exists a query that splits the pool of viable hypotheses in half .the existence of such queries is a result of the special ordered structure of the problem . because of ordering, optimal query locations are easily identified by bisection . in the generalsetting in which the query and hypothesis space are arbitrary it is impossible to order the hypotheses in a similar fashion and `` bisecting '' queries may not exist .for example , consider hypotheses associated with halfspaces of .each hypothesis takes the value on its halfspace and on the complement .a bisecting query may not exist in this case . to address such situationswe next introduce a more general framework that does not require an ordered structure .while it may not be possible to naturally order the hypotheses within , there does exist a similar local geometry that can be exploited in the search process .observe that the query space can be partitioned into equivalence subsets such that every is constant for all queries in each such subset .let denote the smallest such partition splits into two disjoint sets .let and let denote its complement . 
is the collection of all non - empty intersections of the form , where , and it is the smallest partition that refines the sets . is known as the _ join _ of the sets . ] .note that .for every and , the value of is constant ( either or ) for all ; denote this value by .observe that the query selection step in gbs is equivalent to an optimization over the partition cells in .that is , it suffices to select a partition cell for the query according to .the main results of this paper concern the query complexity of gbs , but before moving on let us comment on the computational complexity of the algorithm .the query selection step is the main computational burden in gbs .however , given the computational complexity of gbs is , up to a constant factor , where denotes the number of partition cells in .the size and construction of is manageable in many practical situations .for example , if is finite , then , where is the cardinality of . later , in section [ linear ] , we show that if is defined by halfspaces of , then grows like .the partition provides a geometrical link between and .the hypotheses induce a distance function on , and hence . for every pair the hamming distance between the response vectors and provides a natural distance metric in .two sets are said to be _ -neighbors _ if or fewer hypotheses ( along with their complements , if they belong to ) output different values on and .[ neighbor ] for example , suppose that is symmetric , so that implies .then two sets and are -neighbors if the hamming distance between their respective response vectors is less than or equal to . if is non - symmetric ( implies that is not in ) , then and are -neighbors if the hamming distance between their respective response vectors is less than or equal to .the pair is said to be _-neighborly _ if the -neighborhood graph of is connected ( i.e. , for every pair of sets in there exists a sequence of -neighbor sets that begins at one of the pair and ends with the other ) .[ neighborly ] if is -neighborly , then the distance between and is bounded by times the minimum path length between and . moreover, the neighborly condition implies that there is an incremental way to move from one query to the another , moving a distance of at most at each step .this local geometry guarantees that near - bisecting queries almost always exist , as shown in the following lemma ._ assume that is -neighborly and define the coherence parameter where the minimization is over all probability mass functions on . for every and any constant satisfying there exists an that approximately bisects orthe set is a small where denotes the cardinality of ._ [ lemma1 ] _ proof : _ according to the definition of it follows that there exists a probability distribution such that this implies that there exists an such that or there exists a pair and such that in the former case , it follows that a query from will reduce the size of by a factor of at least ( i.e. , every query approximately bisects the subset ) . in latter case , an approximately bisecting query does not exist , but the -neighborly condition implies that must be small . to see this note that the -neighborly condition guarantees that there exists a sequence of -neighbor sets beginning at and ending at . 
by assumption in this case , on every set andthe sign of must change at some point in the sequence .it follows that there exist -neighbor sets and such that and .two inequalities follow from this observation .first , .second , .note that if and its complement belong to , then their contributions to the quantity cancel each other . combining these inequalities yields . and . without loss of generalitywe may assume that all hypotheses agree with at these two points .the dashed path between the points and reveals a bisecting query location .as the path crosses a decision boundary the corresponding hypothesis changes its output from to ( or vice - versa , depending on the direction followed ) . at a certain point , indicated by the shaded cell , half of the hypotheses output and half output .selecting a query from this cell will _ bisect _ the collection of hypotheses ., title="fig:",width=377 ] the coherence parameter quantifies the informativeness of queries .the coherence parameter is optimized over the choice of , rather than sampled at random according to a specific distribution on , because the queries may be selected as needed from .the minimizer in ( [ cstar ] ) exists because the minimization can be computed over the space of finite - dimensional probability mass functions over the elements of . for to be close to , there must exist a distribution on so that the moment of every is close to zero ( i.e. , for each the probabilities of the responses and are both close to ) .this implies that there is a way to randomly sample queries so that the expected response of every hypothesis is close to zero . in this sense ,the queries are incoherent with the hypotheses . in lemma [ lemma1 ] , bounds the proportion of the split of _ any _ subset generated by the best query ( i.e. , the degree to which the best query bisects any subset ) .the coherence parameter leads to a bound on the number of queries required by gbs . _if is -neighborly , then gbs terminates with the correct hypothesis after at most queries , where ._ [ thm1 ] _ proof : _ consider the step of the gbs algorithm .lemma [ lemma1 ] shows that for any either there exists an approximately bisecting query and or .the uniqueness of the hypotheses with respect to implies that there exists a query that eliminates at least one hypothesis .therefore , .it follows that each gbs query reduces the number of viable hypotheses by a factor of at least therefore , and gbs is guaranteed to terminate when satisfies .taking the logarithm of this inequality produces the query complexity bound .theorem [ thm1 ] demonstrates that if is neighborly , then the query complexity of gbs is near - optimal ; i.e. , within a constant factor of . the constant depends on coherence parameter and , and clearly it is desirable that both are as small as possible .note that gbs does not require knowledge of or .we also remark that the constant in the bound is not necessarily the best that one can obtain .the proof involves selecting to balance splitting factor and the `` tail '' behavior , and this may not give the best bound .the coherence parameter can be computed or bounded for many pairs that are commonly encountered in applications , as covered later in section [ coherence ] .in noisy problems , the search must cope with erroneous responses .specifically , assume that for any query the binary response is an independent realization of the random variable satisfying ( i.e. 
, the response is only probably correct ) .if a query is repeated more than once , then each response is an independent realization of . define the _ noise - level _ for the query as . throughout the paper we will let and assume that . before presenting the main approach to noisy gbs , we first consider a simple strategy based on repetitive querying that will serve as a benchmark for comparison .we begin by describing a simple noise - tolerant version of gbs .the noise - tolerant algorithm is based on the simple idea of repeating each query of the gbs several times , in order to overcome the uncertainty introduced by the noise .similar approaches are proposed in the work k " a " ari " ainen . karp and kleinberg analyze of this strategy for noise - tolerant classic binary search .this is essentially like using a simple repetition code to communicate over a noisy channel .this procedure is termed noise - tolerant gbs ( ngbs ) and is summarized in fig . [fig : ngbs ] .[ thm : ngbs ] consider a specific query repeated times , let denote the frequency of in the trials , and let ] and .the measure can be viewed as an initial weighting over the hypothesis class .for example , taking to be the uniform distribution over expresses the fact that all hypothesis are equally reasonable prior to making queries .we will assume that is uniform for the remainder of the paper , but the extension to other initial distributions is trivial .note , however , that we still assume that is fixed but unknown .after each query and response the distribution is updated according to where , is any constant satisfying , and is normalized to satisfy .the update can be viewed as an application of bayes rule and its effect is simple ; the probability masses of hypotheses that agree with the label are boosted relative to those that disagree .the parameter controls the size of the boost .the hypothesis with the largest weight is selected at each step : if the maximizer is not unique , one of the maximizers is selected at random . note that , unlike the hard - decisions made by the gbs algorithm in fig .[ fig : gbs ] , this procedure does not eliminate hypotheses that disagree with the observed labels , rather the weight assigned to each hypothesis is an indication of how successful its predictions have been .thus , the procedure is termed soft - decision gbs ( sgbs ) and is summarized in fig . [fig : sgbs ] .the goal of sgbs is to drive the error to zero as quickly as possible by strategically selecting the queries .the query selection at each step of sgbs must be informative with respect to the distribution .in particular , if the _ weighted prediction _ is close to zero for a certain ( or ) , then a label at that point is informative due to the large disagreement among the hypotheses .if multiple minimize , then one of the minimizers is selected uniformly at random . 
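a minimal sketch of the sgbs loop just described: hypotheses are kept with multiplicative weights, the query minimizing the absolute weighted prediction is asked, and the weights of agreeing and disagreeing hypotheses are scaled by (1 - beta) and beta respectively. the exact update in the paper's fig. [fig:sgbs] is not reproduced in the extracted text, so this multiplicative form, the noise level, and the toy threshold class are assumptions of the illustration.

```python
# soft-decision GBS sketch with a noisy oracle. H is a (num_hyp, num_queries)
# matrix of +/-1 hypothesis responses; p is the weight vector over hypotheses.
import numpy as np

rng = np.random.default_rng(1)

def select_query(p, H):
    W = p @ H                        # weighted predictions W(x)
    return int(np.argmin(np.abs(W))) # most uncertain query

def update(p, H, x, label, beta=0.4):
    agree = H[:, x] == label
    p = p * np.where(agree, 1.0 - beta, beta)
    return p / p.sum()

# toy class: 1-d thresholds on a grid of queries
queries = np.linspace(0.05, 0.95, 19)
thresholds = np.linspace(0.1, 0.9, 9)
H = np.array([[1 if q >= t else -1 for q in queries] for t in thresholds])
true_idx, noise = 4, 0.2             # oracle lies with probability 0.2

p = np.ones(len(thresholds)) / len(thresholds)
for _ in range(60):
    x = select_query(p, H)
    label = H[true_idx, x] * (1 if rng.random() > noise else -1)
    p = update(p, H, x, label)

print(np.argmax(p), round(1.0 - p[true_idx], 4))  # chosen hypothesis, error mass
```

note that repeatedly selecting the same query is expected under noise; the weights play the role that majority voting plays in the repetition-based ngbs scheme, but adaptively.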
to analyze sgbs ,define , .the variable was also used by burnashev and zigangirov to analyze classic binary search .it reflects the amount of mass that places on incorrect hypotheses .let denotes the underlying probability measure governing noises and possible randomization in query selection , and let denote expectation with respect to .note that by markov s inequality \ .\label{markov } \end{aligned}\ ] ] at this point , the method of analyzing sgbs departs from that of burnashev and zigangirov which focused only on the classic binary search problem .the lack of an ordered structure calls for a different attack on the problem , which is summarized in the following results and detailed in the appendix . _consider any sequence of queries and the corresponding responses .if , then is a nonnegative supermartingale with respect to ; i.e. , \leq c_n ] and by the martingale convergence theorem we have that exists and is finite ( for more information on martingale theory one can refer to the textbook by brmaud ) .furthermore , we have the following theorem ._ _ [ thm2 ] first observe that for every positive integer & = & \e[(\mm_n/\mm_{n-1 } ) \ , \mm_{n-1 } ] \ = \\e\left[\e[(\mm_n/\mm_{n-1 } ) \ , \mm_{n-1}|p_{n-1}]\right ] \\ & = & \e\left[\mm_{n-1 } \ , \e[(\mm_n/\mm_{n-1})|p_{n-1}]\right ] \ \leq \ \e[\mm_{n-1 } ] \ , \max_{p_{n-1}}\e[(\mm_n/\mm_{n-1})|p_{n-1 } ] \\ & \leq & \mm_0 \left(\max_{i = 0,\dots , n-1 } \max_{p_i}\ , \e[(\mm_{i+1}/\mm_{i})|p_i]\right)^n \ .\end{aligned}\ ] ] in the proof of lemma [ martingale ] , it is shown that if , then <1 ] .it follows that the sequence \right)^n ] , , then it follows that that .the modified sgbs algorithm is outlined in fig .[ fig : msgbs ] .it is easily verified that lemma [ martingale ] and theorem [ thm2 ] also hold for the modified sgbs algorithm .this follows since the modified query selection step is identical to that of the original sgbs algorithm , unless there exist two neighboring sets with strongly bipolar weighted responses . in the latter case ,a query is randomly selected from one of these two sets with equal probability . for every and any probability measure on the _ weighted prediction _ on defined to be , where is the constant value of for every .the following lemma , which is the soft - decision analog of lemma [ lemma1 ] , plays a crucial role in the analysis of the modified sgbs algorithm . _if is -neighborly , then for every probability measure on there either exists a set such that or a pair of -neighbor sets such that and . _ [ lemma2 ] suppose that . then there must exist such that and , otherwise can not be the incoherence parameter of , defined in ( [ cstar ] ) .to see this suppose , for instance , that for all .then for every distribution on we have .this contradicts the definition of since .the neighborly condition guarantees that there exists a sequence of -neighbor sets beginning at and ending at . since on every set andthe sign of must change at some point in the sequence , it follows that there exist -neighbor sets satisfying the claim .the lemma guarantees that there exists either a set in on which the weighted hypotheses significantly disagree ( provided is significantly below ) or two neighboring sets in on which the weighted predictions are strongly bipolar . 
in either case of lemma [ lemma2 ] , if a query is drawn randomly from these sets , then the weighted predictions are highly variable or uncertain with respect to , which makes the resulting label informative . if is -neighborly , then the modified sgbs algorithm guarantees that exponentially fast . the -neighborly condition is required so that the expected boost to is significant at each step . if this condition does not hold , then the boost could be arbitrarily small due to the effects of other hypotheses . fortunately , as shown in section [ coherence ] , the -neighborly condition holds in a wide range of common situations . _ let denote the underlying probability measure ( governing noises and algorithm randomization ) . if and is -neighborly , then the modified sgbs algorithm in fig . [ fig : msgbs ] generates a sequence of hypotheses satisfying with exponential constant , where is defined in ( [ cstar ] ) . _ [ thm3 ] the theorem is proved in the appendix . the exponential convergence rate is governed by the coherence parameter ( the exponential rate parameter is a positive constant strictly less than ; for a noise level the factor is maximized by a value which tends to as tends to ) . as shown in section [ coherence ] , the value of is typically a small constant much less than that is independent of the size of . in such situations , the query complexity of modified sgbs is near - optimal . the query complexity of the modified sgbs algorithm can be derived as follows . let be a pre - specified confidence parameter . the number of queries required to ensure that is , which is near - optimal . intuitively , about bits are required to encode each hypothesis . more formally , the noisy classic binary search problem satisfies the assumptions of theorem [ thm3 ] ( as shown in section [ cbs ] ) , and hence it is a special case of the general problem . using information - theoretic methods , it has been shown by burnashev and zigangirov ( also see the work of karp and kleinberg ) that the query complexity for noisy classic binary search is also within a constant factor of . in contrast , the query complexity bound for ngbs , based on repeating queries , is at least a logarithmic factor worse . we conclude this section with an example applying theorem [ thm3 ] to the halfspace learning problem . _ consider learning multidimensional halfspaces . let and consider hypotheses of the form where and parameterize the hypothesis and is the inner product in . the following corollary characterizes the query complexity for this problem . _ let be a finite collection of hypotheses of form ( [ halfspace ] ) and assume that the responses to each query are noisy , with noise bound . then the hypotheses selected by modified sgbs with satisfy with . moreover , can be computed in time polynomial in . [ thms ] the error bound follows immediately from theorem [ thm3 ] since and is -neighborly , as shown in section [ linear ] . the polynomial - time computational complexity follows from the work of buck , as discussed in section [ linear ] . suppose that is an -dense set with respect to a uniform probability measure on a ball in ( i.e.
, for _ any _ hyperplane of the form ( [ halfspace ] ) , contains a hypothesis whose probability of error is within of it ) . the size of such an satisfies , for a constant , which is proportional to the minimum query complexity possible in this setting , as shown by balcan et al . those authors also present an algorithm with roughly the same query complexity for this problem . however , their algorithm is specifically designed for the linear threshold problem . remarkably , near - optimal query complexity is achieved in polynomial time by the general - purpose modified sgbs algorithm . so far we have assumed that the correct hypothesis is in . in this section we drop this assumption and consider _ agnostic _ algorithms guaranteed to find the best hypothesis in even if the correct hypothesis is not in and/or the assumptions of theorem [ thm1 ] or [ thm3 ] do not hold . the best hypothesis in can be defined as the one that minimizes the error with respect to a probability measure on , denoted by , which can be arbitrary . this notion of `` best '' commonly arises in machine learning problems where it is customary to measure the error or _ risk _ with respect to a distribution on . a common approach to hypothesis selection is _ empirical risk minimization _ ( erm ) , which uses queries randomly drawn according to and then selects the hypothesis in that minimizes the number of errors made on these queries . given a budget of queries , consider the following agnostic procedure . divide the query budget into three equal portions . use gbs ( or ngbs or modified sgbs ) with one portion , erm ( queries randomly distributed according to ) with another , and then allocate the third portion to queries from the subset of where the hypothesis selected by gbs ( or ngbs or modified sgbs ) and the hypothesis selected by erm disagree , with these queries randomly distributed according to the restriction of to this subset . finally , select the hypothesis that makes the fewest mistakes on the third portion as the final choice . the sample complexity of this agnostic procedure is within a constant factor of that of the better of the two competing algorithms . for example , if the conditions of theorems [ thm1 ] or [ thm3 ] hold , then the sample complexity of the agnostic algorithm is proportional to . in general , the sample complexity of the agnostic procedure is within a constant factor of that of erm alone . we formalize this as follows . [ runoff ] _ let denote a probability measure on and for every let denote its probability of error with respect to . consider two hypotheses and let denote the subset of queries for which and disagree ; i.e. , for all . suppose that queries are drawn independently from , the restriction of to the set , let and denote the average number of errors made by and on these queries , and select . then with probability less than . _ define and let . by hoeffding s inequality we have . now consider hypotheses on the unit interval of the following form , for ( and ) . assume that . first consider the neighborly condition . recall that is the smallest partition of into equivalence sets induced by . in this case , each is an interval of the form , . observe that only a single hypothesis , , has different responses to queries from and , and so they are -neighbors , for . moreover , the -neighborhood graph is connected in this case , and so is -neighborly . next consider the coherence parameter . take to be two point masses at and of probability each . then for every , since and . thus , .
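the run - off comparison at the heart of the agnostic procedure described earlier in this section admits a compact sketch . the rejection - sampling loop below stands in for drawing queries from the restriction of the measure to the disagreement region ; the names , the { -1 , +1 } label convention , and the assumption that the two hypotheses disagree somewhere are all illustrative .

```python
def runoff(h_gbs, h_erm, draw_query, oracle, n):
    """final stage of the agnostic procedure (sketch): compare the two
    candidate hypotheses on n queries from the region where they disagree
    and keep the one making fewer mistakes there. assumes the candidates
    actually disagree on a region of positive probability."""
    errors_gbs = errors_erm = drawn = 0
    while drawn < n:
        x = draw_query()              # a draw from the measure on queries
        if h_gbs(x) == h_erm(x):      # rejection sampling: keep only the
            continue                  # disagreement region
        drawn += 1
        y = oracle(x)
        errors_gbs += (h_gbs(x) != y)
        errors_erm += (h_erm(x) != y)
    return h_gbs if errors_gbs <= errors_erm else h_erm
```

lemma [ runoff ] bounds the probability that this comparison returns the worse of the two candidates , which is what makes the three - way budget split safe .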
combining the observations above , since and , we have , and the query complexity of gbs is proportional to according to theorem [ thm1 ] . the reduction factor of , instead of , arises because we allow the situation in which the number of hypotheses may be odd ( e.g. , given three hypotheses , the best query may eliminate just one ) . if is even , then the query complexity is , which is information - theoretically optimal . now let be a collection of hypotheses associated with subintervals of the unit interval . an appropriate choice of yields , regardless of the number of interval hypotheses under consideration . therefore , in this setting theorem [ thm1 ] guarantees that gbs determines the correct hypothesis using at most a constant times steps . however , consider the special case in which the intervals are disjoint . then it is not hard to see that the best allocation of mass is to place mass in each subinterval , resulting in . and so , theorem [ thm1 ] only guarantees that gbs will terminate in at most steps ( the number of steps required by exhaustive linear search ) . in fact , it is easy to see that no procedure can do better than linear search in this case , and the query complexity of any method is proportional to . however , note that if queries of a different form were allowed , then much better performance is possible . for example , if queries in the form of dyadic subinterval tests were allowed ( e.g. , tests that indicate whether or not the correct hypothesis is -valued anywhere within a dyadic subinterval of choice ) , then the correct hypothesis can be identified through queries ( essentially a binary encoding of the correct hypothesis ) . this underscores the importance of the geometrical relationship between and embodied in the neighborly condition and the incoherence parameter . optimizing the query space to the structure of is related to the notion of arbitrary queries examined in the work of kulkarni et al , and somewhat to the theory of compressed sensing developed by candès et al and donoho . let be a collection of multidimensional threshold functions of the following form . the threshold of each is determined by a ( possibly nonlinear ) decision surface in -dimensional euclidean space and the queries are points in . it suffices to consider linear decision surfaces of the form where , , the offset satisfies for some constant , and denotes the inner product in . each hypothesis is associated with a halfspace of . note that hypotheses of this form can be used to represent nonlinear decision surfaces , by first applying a mapping to an input space and then forming linear decision surfaces in the induced query space . the problem of learning multidimensional threshold functions arises commonly in computer vision ( see the review of swain and stricker and applications by geman and jedynak and arkin et al ) , image processing studied by korostelev and kim , and active learning research ; for example the investigations by freund et al , dasgupta , balcan et al , and castro and nowak . first we show that the pair is -neighborly . each is a polytope in . these polytopes are generated by intersections of the halfspaces corresponding to the hypotheses . any two polytopes that share a common face are -neighbors ( the hypothesis whose decision boundary defines the face , and its complement if it exists , are the only ones that predict different values on these two sets ) .
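the polytope partition and its -neighbor relation can also be explored empirically : label sampled queries by their response pattern across all hypotheses , treat each distinct pattern as a cell , and connect two cells when exactly one hypothesis separates them . the sketch below makes these assumptions explicit ( random sampling can miss small polytopes , and complementary hypothesis pairs are ignored ) ; it is an illustration , not the paper's verification procedure .

```python
import numpy as np
from itertools import combinations

def neighborhood_graph(hypotheses, sample_points):
    """empirical cells of the induced partition and their 1-neighbor edges."""
    # response pattern of each sampled query across all hypotheses
    responses = np.array([[h(x) for h in hypotheses] for x in sample_points])
    cells = np.unique(responses, axis=0)           # one row per observed cell
    edges = [(i, j) for i, j in combinations(range(len(cells)), 2)
             if int((cells[i] != cells[j]).sum()) == 1]
    return cells, edges

def is_connected(n_cells, edges):
    """union-find connectivity test of the 1-neighborhood graph."""
    parent = list(range(n_cells))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n_cells)}) == 1
```

on the polytope partition described above , connectivity of this graph is exactly the -neighborly property required by theorem [ thm1 ] .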
since the polytopes tessellate , the -neighborhood graph of is connected . we next bound the coherence parameter . since the offsets of the hypotheses are all less than in magnitude , it follows that the distance from the origin to the nearest point of the decision surface of every hypothesis is at most . let denote the uniform probability distribution on a ball of radius centered at the origin in . then for every of the form ( [ surface ] ) there exists a constant ( depending on ) such that and . therefore , and it follows from theorem [ thm1 ] that gbs determines the correct multidimensional threshold in at most steps . to the best of our knowledge this is a new result in the theory of learning multidimensional threshold functions , although similar query complexity bounds have been established for the subclass of linear threshold functions with ( threshold boundaries passing through the origin ) ; see for example the work of balcan et al . these results are based on somewhat different learning algorithms , assumptions and analysis techniques . observe that if is an -dense ( with respect to lebesgue measure over a compact set in ) subset of the continuous class of threshold functions of the form ( [ surface ] ) , then the size of such an satisfies . therefore the query complexity of gbs is proportional to the metric entropy of the continuous class , and it follows from the results of kulkarni et al that no learning algorithm exists with a lower query complexity ( up to constant factors ) . furthermore , note that the computational complexity of gbs for hypotheses of the form ( [ surface ] ) is proportional to the cardinality of , which is equal to the number of polytopes generated by intersections of half - spaces . it is a well known fact ( see buck ) that . therefore , gbs is a polynomial - time algorithm for this problem . in general , the cardinality of could be as large as . next let again be the hypotheses of the form ( [ surface ] ) , but let the queries be the vertices of a cube ( the natural generalization of the chosen in the case of classic binary search in section [ cbs ] above ) . then for every , since for each there is at least one vertex on where it predicts and one where it predicts . thus , . we conclude that gbs determines the correct hypothesis in a number of steps proportional to . the dependence on is unavoidable , since it may be that each threshold function takes that value only at one of the vertices and so each vertex must be queried . a noteworthy special case arises when ( i.e. , the threshold boundaries pass through the origin ) . in this case , with as specified above , , since each hypothesis responds with at half of the vertices and on the other half . therefore , the query complexity of gbs is at most , independent of the dimension . as discussed above , similar results for this special case have been previously reported based on different algorithms and analyses ; see the results in the work of balcan et al and the references therein . note that even if the threshold boundaries do not pass through the origin , and therefore the number of queries needed is proportional to , so long as . the dependence on dimension can also be eliminated if , for a certain distribution on , the absolute value of the moment of the correct hypothesis w.r.t . is known to be upper bounded by a constant , as discussed at the beginning of this section . finally , we also mention hypotheses associated with axis - aligned rectangles in the unit cube , each taking the value inside its rectangle and otherwise .
the complementary hypothesis may also be included . consider a finite collection of hypotheses of this form . if the rectangles associated with each have volume at least , then taking to be the uniform measure on the unit cube bounds the coherence parameter , and the equivalence sets associated with a collection of such hypotheses are rectangles themselves . if the boundaries of the rectangles associated with the hypotheses are distinct , then the -neighborhood graph of is connected . theorem [ thm1 ] implies that the number of queries needed by gbs to determine the correct rectangle is proportional to . in many situations both the hypothesis and query spaces may be discrete . a machine learning application , for example , may have access to a large ( but finite ) pool of unlabeled examples , any of which may be queried for a label . because obtaining labels can be costly , `` active '' learning algorithms select only those examples that are predicted to be highly informative for labeling . theorem [ thm1 ] applies equally well to continuous or discrete query spaces . for example , consider the linear separator case , but instead of the query space suppose that is a finite subset of points in . the hypotheses again induce a partition of into subsets , but the number of subsets in the partition may be less than the number in . consequently , the neighborhood graph of depends on the specific points that are included in and may or may not be connected . as discussed at the beginning of this section , the neighborly condition can be verified in polynomial time ( polynomial in ) . consider two illustrative examples . let be a collection of linear separators as in ( [ surface ] ) above and first reconsider the partition . recall that each set in is a polytope . suppose that a discrete set contains at least one point inside each of the polytopes in . then it follows from the results above that is -neighborly . second , consider a simple case in dimensions . suppose consists of just three non - collinear points and suppose that consists of six classifiers , , satisfying , , , and , . in this case , , and the responses to any pair of queries differ for four of the six hypotheses . thus , the -neighborhood graph of is connected , but the -neighborhood is not . also note that a finite query space naturally limits the number of hypotheses that need be considered . consider an uncountable collection of hypotheses . the number of unique labeling assignments generated by these hypotheses can be bounded in terms of the vc dimension of the class ; see the book by vapnik for more information on vc theory . as a result , it suffices to consider a finite subset of the hypotheses consisting of just one representative of each unique labeling assignment . furthermore , the computational complexity of gbs is proportional to in this case . generalized binary search can be viewed as a generalization of classic binary search , shannon - fano coding as noted by goodman and smyth , and channel coding with noiseless feedback as studied by horstein . problems of this nature arise in many applications , including channel coding ( e.g. , the work of horstein and zigangirov ) , experimental design ( e.g. , as studied by rényi ) , disease diagnosis ( e.g. , see the work of loveland ) , fault - tolerant computing ( e.g.
, the work of feige et al ) , the scheduling problem considered by kosaraju et al , computer vision problems investigated by geman and jedynak and arkin et al , image processing problems studied by korostelev and kim , and active learning research ; for example the investigations by freund et al , dasgupta , balcan et al , and castro and nowak . past work has provided a partial characterization of this problem . if the responses to queries are noiseless , then selecting the sequence of queries from is equivalent to determining a binary decision tree , where a sequence of queries defines a path from the root of the tree ( corresponding to ) to a leaf ( corresponding to a single element of ) . in general the determination of the optimal ( worst - or average - case ) tree is np - complete as shown by hyafil and rivest . however , there exists a greedy procedure that yields query sequences that are within a factor of of the optimal search tree depth ; this result has been discovered independently by several researchers including loveland , garey and graham , arkin et al , and dasgupta . the greedy procedure is referred to here as _ generalized binary search _ ( gbs ) or the _ splitting algorithm _ , and it reduces to classic binary search , as discussed in section [ cbs ] . the number of queries an algorithm requires to determine is called the _ query complexity _ of the algorithm . since the hypotheses are assumed to be distinct , it is clear that the query complexity of gbs is at most ( because it is always possible to find a query that eliminates at least one hypothesis at each step ) . in fact , there are simple examples ( see section [ cbs ] ) demonstrating that this is the best one can hope to do in general . however , it is also true that in many cases the performance of gbs can be much better , requiring as few as queries . in classic binary search , for example , half of the hypotheses are eliminated at each step ( e.g. , refer to the textbook by cormen et al ) . rényi first considered a form of binary search with noise and explored its connections with information theory . in particular , the problem of sequential transmission over a binary symmetric channel with noiseless feedback , as formulated by horstein and studied by burnashev and zigangirov and more recently by pelc et al , is equivalent to a noisy binary search problem . there is a large literature on learning from queries ; see the review articles by angluin . this paper focuses exclusively on membership queries ( i.e. , an is the query and the response is ) , although other types of queries ( equivalence , subset , superset , disjointness , and exhaustiveness ) are possible as discussed by angluin . _ arbitrary queries _ have also been investigated , in which the query is a subset of and the output is if belongs to the subset and otherwise . a finite collection of hypotheses can be successively halved using arbitrary queries , and so it is possible to determine with arbitrary queries , which is the information - theoretically optimal query complexity discussed by kulkarni et al . membership queries are the most natural in function learning problems , and because this paper deals only with this type we will simply refer to them as queries throughout the rest of the paper . the number of queries required to determine a binary - valued function in a finite collection of hypotheses can be bounded ( above and below ) in terms of a combinatorial parameter of due to hegedűs ( see the work of hellerstein et al for related work ) .
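for reference , the noiseless splitting algorithm itself is only a few lines . the sketch below assumes labels in { -1 , +1 } , distinct hypotheses , and a response oracle consistent with one of them ; under these assumptions each query removes at least one candidate .

```python
def gbs(hypotheses, queries, oracle):
    """noiseless generalized binary search / splitting algorithm (sketch)."""
    alive = list(hypotheses)
    while len(alive) > 1:
        # |sum of predictions| is minimal where the split is most even
        x = min(queries, key=lambda q: abs(sum(h(q) for h in alive)))
        y = oracle(x)
        alive = [h for h in alive if h(x) == y]    # hard elimination
    return alive[0]
```

classic binary search is recovered when the hypotheses are one - dimensional thresholds : the most even split is always the midpoint of the surviving interval , and half of the hypotheses are eliminated per query .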
due to its combinatorial nature , computing the bounds based on this parameter is generally np - hard . in contrast , the geometric relationship between and developed in this paper leads to an upper bound on the query complexity that can be determined analytically or computed in polynomial time in many cases of interest . the term gbs is used in this paper to emphasize connections and similarities with classic binary search , which is a special case of the general problem considered here . classic binary search is equivalent to learning a one - dimensional binary - valued threshold function by selecting point evaluations of the function according to a bisection procedure . consider the threshold function on the unit interval . the proof proceeds by showing that the relevant conditional expectation is at most 1 ; to this end it is helpful to condition on . define . if , then
\[\begin{aligned}
& = \ \frac{(1-\delta_{a_i}^+)\beta+\delta_{a_i}^+(1-\beta)}{1-\beta}\,(1-q_i) \ + \ \frac{\delta_{a_i}^+\beta+(1-\delta_{a_i}^+)(1-\beta)}{\beta}\,q_i \\
& = \ \delta_{a_i}^+ + (1-\delta_{a_i}^+)\left[\frac{\beta(1-q_i)}{1-\beta}+\frac{q_i(1-\beta)}{\beta}\right] \ .
\end{aligned}\]
define ; note that it is at most ( and strictly less than if ) . the proof amounts to obtaining upper bounds for and , defined above in ( [ pb ] ) and ( [ nb ] ) . consider two distinct situations . define . first suppose that there do not exist neighboring sets and with and . then by lemma [ lemma1 ] , this implies that , and according to the query selection step of the modified sgbs algorithm , . note that because , . hence , both and are bounded above by . now suppose that there exist neighboring sets and with and . recall that in this case is randomly chosen to be or with equal probability . note that and . if , then applying ( [ pb ] ) results in
\[
\frac{1}{2}\left(1 + \frac{1-b_i}{2}+\frac{1+b_i}{2}(1-\varepsilon_0)\right) \ = \ \frac{1}{2}\left(2-\varepsilon_0\,\frac{1+b_i}{2}\right) \ \leq \ 1-\varepsilon_0/4 \ ,
\]
since . similarly , if , then ( [ nb ] ) yields a bound strictly less than $1-\varepsilon_0/4$ . in either case , we can bound the unconditional expectation by the maximum of the conditional bounds above to obtain
\[
\leq \ \max \left\{1-\frac{\varepsilon_0}{2}(1-p_i(h^*)) \ ,\ 1-\frac{\varepsilon_0}{4} \ ,\ 1-(1-\ch)\frac{\varepsilon_0}{2} \right\} \ ,
\]
and thus it is easy to see that
\[
\frac{\e\left[\gamma_i\,|\,p_i\right]- p_i(h^*)}{1-p_i(h^*)} \ \leq \ 1-\min\left\{\frac{\varepsilon_0}{2}(1-\ch),\frac{\varepsilon_0}{4}\right\} \ . \hspace{.5in} \blacksquare
\]
first consider the bound on ; i.e. , expectation with respect to the queries drawn from the region , conditioned on the queries used to select and . by lemma [ runoff ]
\[\begin{aligned}
& \leq \ (1-\delta_n)\,\min\{r(h_1),r(h_2)\} \ + \ \delta_n\,\max\{r(h_1),r(h_2)\} \\
& = \ \min\{r(h_1),r(h_2)\} \ + \ \delta_n\,\left[\max\{r(h_1),r(h_2)\}-\min\{r(h_1),r(h_2)\}\right] \\
& = \ \min\{r(h_1),r(h_2)\} \ + \ \delta_n\,|r(h_1)-r(h_2)| \\
& = \ \min\{r(h_1),r(h_2)\} \ + \ 2\,|r(h_1)-r(h_2)|\,e^{-n|r_\delta(h_1)-r_\delta(h_2)|^2/6} \\
& \leq \ \min\{r(h_1),r(h_2)\} \ + \ 2\,|r(h_1)-r(h_2)|\,e^{-n|r(h_1)-r(h_2)|^2/6} \ ,
\end{aligned}\]
where the last inequality follows from the fact that . the function attains its maximum at , and therefore
\[
\leq \ \min\{r(h_1),r(h_2)\} \ + \ \sqrt{3/n} \ .
\]
now taking the expectation with respect to and ( i.e.
, with respect to the queries used for the selection of and )
\[\begin{aligned}
& \leq \ \e[\min\{r(h_1),r(h_2)\}] \ + \ \sqrt{3/n} \\
& \leq \ \min\{\e[r(h_1)],\e[r(h_2)]\} \ + \ \sqrt{3/n} \ ,
\end{aligned}\]
by jensen s inequality . next consider the bound on . this also follows from an application of lemma [ runoff ] . note that if the conditions of theorem [ thm3 ] hold , then . furthermore , if and , then . the bound on follows by applying the union bound to the events and . a. rényi , `` on a problem in information theory , '' _ mta mat ._ , pp . 505 - 516 , 1961 ; reprinted in _ selected papers of alfréd rényi _ , vol . 2 , p. turán , ed . , pp . 631 - 638 , akadémiai kiadó , budapest , 1976 . e. j. candès , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans . inform . theory _ , vol . 52 , no . 2 , pp . 489 - 509 , feb . 2006 .
this paper investigates the problem of determining a binary - valued function through a sequence of strategically selected queries . the focus is a well - known greedy algorithm for this problem called generalized binary search ( gbs ) . at each step , a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets , a natural generalization of the idea underlying classic binary search . this paper develops novel incoherence and geometric conditions under which gbs achieves the information - theoretically optimal query complexity ; i.e. , given a collection of hypotheses , gbs terminates with the correct function after no more than a constant times queries . furthermore , a noise - tolerant version of gbs is developed that also achieves the optimal query complexity . these results are applied to learning halfspaces , a problem arising routinely in image processing and machine learning .
time - delay systems have attracted a lot of attention in recent years , in part due to the fact that multistability , i.e. the coexistence of multiple attractors , is a common occurrence when the delays are large ( typically , much larger than the response time of the system ) . interest in multistability arises because multistable systems play a key role in pattern recognition processes and memory storage devices . by choosing appropriate initial conditions , prescribed periodic solutions can be stored as oscillatory patterns of a time - delay system . synchronization of chaotic time - delay systems has also received attention , since it has potential applications to secure communications . perez and cerdeira have shown that , in low - dimensional chaotic systems , a hidden message can be unmasked by the dynamical reconstruction of the chaotic signal using nonlinear dynamical methods . encrypting a message in the chaotic output of a time - delay system has the advantage that the dynamics is in this case high - dimensional ( the dimension increases linearly with the delay ) but , in spite of this fact , synchronization can be achieved by transmitting a single scalar signal . unfortunately , this method is not as secure as initially expected , since it has been shown that by using a special embedding space , the delay time can be identified , and the message can be successfully unmasked . coupled oscillators with time delays in the coupling , which represent interactions being transmitted at finite speed , have been extensively studied as well . recently , a new effect of delayed coupling was reported by voss , who showed the existence of an anticipating synchronization regime . in this regime , the slave system becomes synchronized to the chaotic future state of the master system . anticipation occurs when the coupling is delayed , and results from the interplay of memory effects and relaxation mechanisms . numerically , this regime was found in coupled semiconductor lasers with optical feedback . in this paper we investigate the existence and stability of anticipated synchronization in coupled time - delay maps , which have the advantage of allowing for analytical calculations . in the next section , we introduce a system of two time - delay coupled maps in a master - slave configuration , and present analytical results on the stability of anticipated and retarded synchronization for generic maps . in section 3 , we apply the results to a delay map that arises from the discretization of the ikeda delay - differential equation . section 4 presents a summary and the conclusions . we consider a one - dimensional map of the form , for . the first term on the right - hand side represents a relaxation mechanism ; under its sole action , would asymptotically vanish . this relaxation , however , competes with the effect of the nonlinear function , which has the form of a time - delayed feedback with delay . the ( master ) map ( [ x ] ) is used to partially drive the evolution of a new ( slave ) system . the dynamics of this system is , in principle , the same as for , except that a part of the nonlinear component is replaced by the evolution of , with the coupling parameter in the unit interval . for , , and , the ikeda delay map is obtained from eq . ( [ ik ] ) with , , and . the ikeda delay map is not to be confused with the discrete ikeda map , which is obtained from ( [ ik ] ) by discretizing time in units of in the singular limit where the delay - to - response time ratio diverges .
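the master - slave pair and the diagnostic used in the next section can be sketched numerically as follows . the relaxation rate gamma , the ikeda - type nonlinearity a * sin ( u ) , the delays tau and sigma , and the coupling eps are illustrative stand - ins for the symbols of eqs . ( [ x ] ) and ( [ ik ] ) ; whether a given run actually synchronizes depends on the coupling exceeding the critical value discussed below . under this coupling , the synchronization manifold is y[n] = x[n + tau - sigma] , so for sigma < tau the slave anticipates the master by tau - sigma steps .

```python
import numpy as np

def f(u, a=20.0):
    return a * np.sin(u)              # ikeda-type nonlinearity (illustrative)

def simulate(n_steps=3000, tau=10, sigma=8, gamma=0.5, eps=0.6, seed=1):
    """master x and slave y, with part of the slave's delayed feedback
    replaced by the master's (less delayed) evolution."""
    rng = np.random.default_rng(seed)
    x = list(rng.uniform(-1.0, 1.0, tau + 1))     # random initial strings
    y = list(rng.uniform(-1.0, 1.0, tau + 1))
    for n in range(tau, n_steps):
        x.append((1 - gamma) * x[n] + f(x[n - tau]))
        y.append((1 - gamma) * y[n]
                 + (1 - eps) * f(y[n - tau])      # own delayed feedback ...
                 + eps * f(x[n - sigma]))         # ... partly from the master
    return np.asarray(x), np.asarray(y)

def similarity(x, y, shift):
    """shifted similarity s^2: compares y[n] with x[n + shift] (shift > 0
    probes anticipation); ~0 under perfect synchronization."""
    if shift >= 0:
        d = y[:len(y) - shift] - x[shift:]
    else:
        d = y[-shift:] - x[:len(x) + shift]
    return np.mean(d ** 2) / np.sqrt(np.mean(x ** 2) * np.mean(y ** 2))

x, y = simulate()
print(similarity(x, y, shift=2))      # tau - sigma = 2 anticipated steps
```

if the printed value is of order one rather than nearly zero , the chosen eps has not exceeded the critical coupling for these parameters .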
in the following , we focus attention on the case . our numerical calculations are restricted to random initial conditions in , both for the master and the slave system . in fig . [ f1 ] we illustrate anticipated synchronization , for and . the slave system ( dashed line ) anticipates in steps the state of the master system ( solid line ) . after a transient , the difference ( dotted line ) decays to zero . we have verified that this behavior is independent of the value of . the critical value of the coupling strength , above which the synchronized state is stable , has been calculated from the analytical results of the previous section by means of a numerical evaluation of the matrix in eq . ( [ u ] ) . since the delay is irrelevant to this calculation , we can take in eq . ( [ mat ] ) . from a given initial condition for the master system , the matrix is evaluated for successive values of . in order to avoid the calculation of its eigenvalues , which is especially troublesome for large , is multiplied at each step by a randomly generated vector , , of unitary modulus . if , for a given value of , the modulus of the product remains above a certain upper threshold for sufficiently large , for , synchronization is considered unstable and . if , on the other hand , for , where is a suitable lower threshold , synchronization is considered stable and . with this criterion , can be found by decimation within the interval up to a certain previously fixed precision . in our calculations , we have taken , , and . figure [ f2 ] shows as a function of the delay and for several values of the parameter . note that for small ( specifically , for ) the critical value can vanish , which indicates that the master evolution is nonchaotic . notice , furthermore , that for some values of and we have plotted more than one value of . this is due to the fact that , in such cases , the master system is multistable with , typically , two chaotic attractors . the value of depends thus on the initial conditions for both and . whereas the behavior is quite irregular for small , for larger delays the critical coupling becomes practically independent of . though it has not been possible to prove this analytically , we conjecture that approaches a finite limit below unity as . the dependence on in the investigated interval is also moderately weak . as expected , since the dynamics of the ikeda delay map becomes more irregular as grows ( i.e. , the lyapunov exponent is higher ) , increases accordingly . the degree of anticipated or retarded synchronization can be quantified by calculating the similarity function , defined as
\[
s^2 \ = \ { \big\langle\,[\, x_n - y_n \,]^2\,\big\rangle \over \big[\,\langle x_n^2\rangle\,\langle y_n^2\rangle\,\big]^{1/2} } \ ,
\]
where one of the series is shifted by the anticipation or retardation time before averaging . if and are independent time series with similar mean value and dispersion , the average of their square difference is of order , and thus . if , on the other hand , there is perfect anticipated or retarded synchronization , the difference vanishes , and . in fig . [ f3 ] , the similarity function is shown for the same parameters as in fig .
[ f1 ] . figure [ f4 ] shows , i.e. , the minimum value of recorded during sufficiently long realizations of the evolution for a given set of parameters , in the -plane , with all the other parameters as in fig . the region where anticipated synchronization occurs , , is clearly visible for large values of . the position of its boundary is in good qualitative agreement with the values of shown in fig . [ f2 ] for . we have extended the study of anticipated synchronization , advanced by voss for unidirectionally coupled differential equations with time delays , to delay - coupled chaotic maps . while the nature of anticipated synchronization of maps and differential equations is the same , delayed discrete - time dynamics admits an analytical treatment which can not be carried out for continuous - time systems . in fact , ordinary differential equations with finite time delays constitute an infinite - dimensional problem . on the other hand , since time delays in maps must be discrete , the dimensionality of the problem remains finite . taking advantage of this situation , we have analytically studied the stability of anticipated and retarded synchronization in a generic master - slave configuration . in the absence of coupling , master and slave dynamics are identical and involve an intrinsic delay . coupling consists in the replacement of a part of the slave dynamics by that of the master system , with a delay . we have shown that the stability of synchronization is independent of . the structure of the linearized problem , eqs . ( [ delta]-[u ] ) , suggests meanwhile a strong though not transparent dependence on the lyapunov exponent of the master system , as expected . in practice , the linearized problem has to be treated numerically , but it only involves the realization of the master system and the successive application of the linear - evolution operator , being thus a purely algebraic process . these results have been applied to the ikeda delay map , which derives from the application of the euler integration scheme to the ikeda delay - differential equation . we have calculated the critical coupling intensity above which synchronization is stable , as a function of the delay and of a parameter that controls the chaotic dynamics of the map . it is found that , whereas for small values of the critical coupling can vary considerably due to the irregular appearance and disappearance of chaotic and nonchaotic attractors , the dependence for large is much smoother . in fact , as grows , the critical coupling seems to approach a constant value . the dependence on the dynamical parameter is also moderate in the considered range . in this work we have focused attention on exact anticipated synchronization . however , it has been previously shown that approximate anticipated synchronization is possible in coupled differential equations , even in the absence of intrinsic delays . the study of approximate anticipated synchronization in coupled maps constitutes therefore a line for future work . in particular , it would be interesting to investigate in detail the connection between the degree of synchronization and the irregularity of the dynamics as measured by the lyapunov exponents .
for delay differential equations , the number of positive lyapunov exponents and the fractal dimension increase linearly with the delay , while the metric entropy remains roughly constant . we therefore conjecture that the metric entropy might be a good indicator of the possibility of synchronizing with anticipation , and thus predicting , chaotic dynamics . this work was supported by proyecto de desarrollo de ciencias básicas ( pedeciba ) and by comisión sectorial de investigación científica ( csic ) , uruguay .
we study the synchronization of two chaotic maps with unidirectional ( master - slave ) coupling . both maps have an intrinsic delay , and coupling acts with a delay . depending on the sign of the difference , the slave map can synchronize to a future or a past state of the master system . the stability properties of the synchronized state are studied analytically , and we find that they are independent of the coupling delay . these results are compared with numerical simulations of a delayed map that arises from discretization of the ikeda delay - differential equation . we show that the critical value of the coupling strength above which synchronization is stable becomes independent of the delay for large delays . _ keywords : _ chaos synchronization , time - delayed systems . _ pacs : _ 05.45.xt , 05.65.+b
early , confessional attempts aimed at establishing a consistent biblical chronology , such as the seder olam rabbah , strove to identify the timeline of events in the pentateuch purely from information contained in the text , such as genealogical information and narrative clues . while internally consistent , the resulting chronologies were ultimately incompatible with a growing body of scientific and historical knowledge : for instance , the seventeenth - century irish bishop james ussher calculated that god created the universe on october 22 , 4004 bce [ 1 ] , an untenable thesis in the light of modern physical cosmology . the pentateuch s own insistence on timekeeping suggests that a precise timeline for at least some of the events was available at the time the text was written . yet , dauntingly , the genealogical and biographical information contained in it proved insufficient to identify a unique , reliable chronology , and as a consequence none of those events could be provided with a firm historical basis . modern , non - confessional approaches have variously attempted to correlate the narratives with external information , such as historical records or archaeological evidence , in order to identify a reliable timeline for the events described in the pentateuch . yet , the introduction of external elements has often been carried out at the expense of consistency with internal textual constraints : for instance , the birth of agriculture in the near east according to archaeological evidence seems irreconcilable with the dating implied by the genealogies of genesis . as we shall see , the pentateuch does contain a complete and reliable internal timekeeping system , but its reckoning of time is not limited to time spans : it also involves astronomical information , reflecting the high status of the study of astronomy in judaism .

r. simeon b. pazzi said in the name of r. joshua b. levi on the authority of bar kappara : he who knows how to calculate the cycles and planetary courses , but does not , of him scripture saith , but they regard not the work of the lord , neither have they considered the operation of his hands [ isa 5:12 ] . r. samuel b. nahmani said in r.
johanan s name : how do we know that it is one s duty to calculate the cycles and planetary courses ? because it is written , for this is your wisdom and understanding in the sight of the peoples [ deu 4:6 ] : what wisdom and understanding is in the sight of the peoples ? say , that it is the science of cycles and planets . babylonian talmud , shab . 75a

the above passage from the babylonian talmud of course represents a much later tradition : its interest in our context comes from the fact that it emphasizes the duty to calculate , as opposed to simply observe or record , astronomical events . another element which also points to a calculational perspective is the observation that , in some manuscripts of the samaritan pentateuch , the word `` astrolabe '' appears in place of `` idols '' in the context of jacob s escape from laban s house . in our analysis of pentateuch chronology a calculational ( as opposed to observational ) perspective , as could have been afforded at the time by the use of an astrolabe , will turn out to be essential . as we shall strive to demonstrate , nearly all of the astronomical descriptions which are found in the text refer to solar eclipses at the northern ( also known as spring , vernal , or paschal ) equinox . in those descriptions , the sun typically represents the `` glory of god '' : directly , or symbolically through the current protagonist of the narrative or his firstborn . the moon , which in a solar eclipse appears to partially or totally cover the sun , variously represents the `` hand of god '' , the hand of the protagonist , or that of a third party ; but also a raiment , the mouth of a well , or the egyptian bondage . the ecliptic , along which the constellations of the zodiac are found , is a caravan circle , a bow , a sword , or a ladder . eclipses which , from the point of view of an observer located in the near east , take place at sunrise or sunset are often used to identify a geographical feature at the visible horizon , for instance a town or mountain . in the northern hemisphere the vernal equinox is closely associated with the renewal of life , and in several calendar systems of the near east marks the beginning of the new year .
in particular , whenever it falls on a new moon , the vernal equinox also marks the beginning of the new year in the jewish religious calendar ( nisan 1 ) . solar eclipses only occur with a new moon , and hence in the jewish calendar they may only take place at the beginning of each lunar month . approximate alignments of the sun and the moon on the same calendar day follow the 19-year metonic cycle . the cycle was presumably well known to the author of the pentateuch , as the nineteenth verse in genesis 1 is the conclusion of the fourth day , in which the sun and the moon are created . also worthy of mention , in the light of our proposed identification of the sun and the moon as the glory and the hand of god , respectively , is the opening of psalm 19 : `` the heavens declare the glory of god ; the skies proclaim the work of his hands '' . the analysis will proceed as follows . taking noah s flood as the starting point , and proceeding until the death of moses , we shall identify a sequence of passages in the text , and demonstrate that each passage can be associated to a specific solar eclipse , in such a way that : ( i ) all solar eclipses at the northern equinox in the reconstructed timelines of the narratives are accounted for ; ( ii ) all occurrences of the words `` appeared '' and `` covenant '' in the corresponding portions of the text are accounted for . next , we shall turn to the genealogies of the pre - flood patriarchs , which unlike those of their post - flood counterparts widely differ across manuscripts . for each of the three main extant manuscript families ( masoretic , septuagint and samaritan ) we test the hypothesis that the corresponding chronologies are independent of eclipse events . as we shall see , using a simple binomial model , while for the masoretic and septuagint chronologies independence can not be rejected at any interesting significance level , the chance that the dates in the samaritan chronology were set independently of eclipse events turns out to be lower than 1 in 10,000 . the origins of western astronomy can be traced to mesopotamia [ 2 ] . in the earliest babylonian star catalogues , dating from about 1200 bce , many star names appear in sumerian , suggesting a continuity reaching into the early bronze age . for convenience , we shall sometimes express dates in astronomical notation . in such notation the year -1000 corresponds to 1001 bce , reflecting the fact that there is no year 0 in the gregorian calendar . we use stellarium 0.10.4 ( available as a free download at www.stellarium.org ) to reconstruct the position of the celestial bodies at specific times and places . finally , we refer to all eclipses which admit a totality ( respectively , annularity ) path as total ( respectively , annular ) ; we refer to all other eclipses as partial . for each event we also report whether , from the point of view of an observer located in the near east , it occurred during daytime or nighttime . in almost all cases , we conventionally take jerusalem as the reference point . william ryan and walter pitman suggested in 1999 that several related near eastern flood stories , including noah s , could be associated with the sudden and perhaps violent flooding of the black sea region in the middle of the 6th millennium bce [ 3,4 ] . the analysis of geologic and organic sediments on the black sea floor revealed ancient shorelines and deltas , and the abrupt disappearance around 5500 bce of freshwater mollusks , replaced by marine species .
according to ryan and pitman , around that time the waters of the mediterranean , whose level had been increasing by over 50 meters since the beginning of the holocene , spilled over the bosphorus and flooded the vast plains and shorelines surrounding what at the time was a large freshwater lake .

gen 6:12 and god looked upon the earth , and , behold , it was corrupt [ _ lit ._ : smeared ] ; for all flesh had corrupted [ _ lit ._ : smeared ] his way upon the earth .

gen 6:18 but with thee will i establish my covenant ; and thou shalt come into the ark , thou , and thy sons , and thy wife , and thy sons wives with thee .

-5504/4/30 nighttime , partial eclipse at the northern equinox . the sun in gemini ( the brotherhood of mankind , as the `` glory of god '' ) , following its path upon the surface of the earth , smears it with the shadow of the moon ( by its own hand ) .

the event takes place just before the flood , as noah begins to build the ark . it announces , via the 19-year metonic cycle , another nighttime , total eclipse which takes place on -5485/5/1 , shortly after the end of the flood .
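the announcement `` via the 19-year metonic cycle '' rests on a simple piece of arithmetic : 235 mean synodic months very nearly equal 19 tropical years , so a new moon ( and hence a possible solar eclipse ) recurs on almost the same solar date after 19 years . a quick check with standard mean values ( the constants below are modern ones , used only for illustration ) :

```python
SYNODIC_MONTH = 29.530589   # mean days from new moon to new moon
TROPICAL_YEAR = 365.24219   # mean days from equinox to equinox

cycle_in_months = 235 * SYNODIC_MONTH   # one metonic cycle, lunar reckoning
cycle_in_years = 19 * TROPICAL_YEAR     # the same span, solar reckoning
print(f"235 synodic months: {cycle_in_months:.2f} days")
print(f"19 tropical years : {cycle_in_years:.2f} days")
print(f"mismatch          : {cycle_in_months - cycle_in_years:.2f} days")
```

the mismatch is only about two hours per cycle ; together with leap - day placement in the proleptic calendar , this is why paired dates such as -5504/4/30 and -5485/5/1 need not coincide exactly .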
gen 9:13 - 14 i do set my bow in the cloud , and it shall be for a token of a covenant between me and the earth . and it shall come to pass , when i bring a cloud over the earth , that the bow shall be seen in the cloud .

the bow in the cloud not only symbolizes the rainbow , but also the ecliptic : in which case , whenever the moon ( as the `` hand of god '' ) with its shadow brings a cloud over the earth , it also appears to touch the ecliptic ( the bow ) . knowledge of the ecliptic allows for a lunisolar calendar , which is kept in tune by direct astronomical observation of the equinoxial points . the additional knowledge of the metonic cycle also affords some ability to predict the occurrence of eclipses , which in turn , as the passage suggests , come to be regarded as a symbol of covenant .

gen 9:23 and shem and japheth took a garment , and laid [ it ] upon both their shoulders , and went backward , and covered the nakedness of their father ; and their faces [ were ] backward , and they saw not their father s nakedness .
-5420/5/3 total eclipse at the northern equinox . the sun in gemini is covered by the moon . the twins appear to walk backward , escorting the sun along the ecliptic .

the equinoxial points slowly rotate with respect to the constellations of the zodiac , returning to the same position approximately every 25,800 years . as a consequence , no other daytime ( in the near east ) total eclipse at the paschal equinox in the last 30,000 years had the sun standing just in the middle of gemini . note that , here and in other descriptions of astronomical events , the position of the relevant constellations was of course not directly observable as it was daytime , and could only be deduced through astronomical calculations . the genealogies of the post - flood patriarchs , like those of their pre - flood predecessors , comprise two figures per patriarch .

josephus on manetho ( contra apionem , book 1 ) :
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
manetho [ ...
] promised to interpret the egyptian history out of their sacred writings , and premised this : that `` our people had come into egypt , many ten thousands in number , and subdued its inhabitants ; '' and when he had further confessed that `` we went out of that country afterward , and settled in that country which is now called judea , and there built jerusalem and its temple . '' [ ... ] he mentions amenophis [ ... ] ; he then ascribes certain fabulous stories to this king [ ... ] .`` this king was desirous to become a spectator of the gods , as had orus , one of his predecessors in that kingdom , desired the same before him ; he also communicated that his desire to his namesake amenophis , who was the son of papis , and one that seemed to partake of a divine nature , both as to wisdom and the knowledge of futurities . ''manetho adds , `` how this namesake of his told him that he might see the gods , if he would clear the whole country of the lepers and of the other impure people ; that the king was pleased with this injunction , and got together all that had any defect in their bodies out of egypt ; and that their number was eighty thousand ; whom he sent to those quarries which are on the east side of the nile , that they might work in them , and might be separated from the rest of the egyptians .'' he says further , that `` there were some of the learned priests that were polluted with the leprosy ; but that still this amenophis , the wise man and the prophet , was afraid that the gods would be angry at him and at the king , if there should appear to have been violence offered them ; who also added this further , [ out of his sagacity about futurities , ] that certain people would come to the assistance of these polluted wretches , and would conquer egypt , and keep it in their possession thirteen years ; that , however , he durst not tell the king of these things , but that he left a writing behind him about all those matters , and then slew himself , which made the king disconsolate . 
''after which he writes thus verbatim : `` after those that were sent to work in the quarries had continued in that miserable state for a long while , the king was desired that he would set apart the city avaris , which was then left desolate of the shepherds , for their habitation and protection ; which desire he granted them . now this city , according to the ancient theology , was typho's city . but when these men were gotten into it , and found the place fit for a revolt , they appointed themselves a ruler out of the priests of heliopolis , whose name was osarsiph , and they took their oaths that they would be obedient to him in all things . he then , in the first place , made this law for them , that they should neither worship the egyptian gods , nor should abstain from any one of those sacred animals which they have in the highest esteem , but kill and destroy them all ; that they should join themselves to nobody but to those that were of this confederacy . when he had made such laws as these , and many more such as were mainly opposite to the customs of the egyptians , he gave order that they should use the multitude of the hands they had in building walls about their city , and make themselves ready for a war with king amenophis , while he did himself take into his friendship the other priests , and those that were polluted with them , and sent ambassadors to those shepherds who had been driven out of the land by tethmosis to the city called jerusalem ; whereby he informed them of his own affairs , and of the state of those others that had been treated after such an ignominious manner , and desired that they would come with one consent to his assistance in this war against egypt . he also promised that he would , in the first place , bring them back to their ancient city and country avaris , and provide a plentiful maintenance for their multitude ; that he would protect them and fight for them as occasion should require , and would easily reduce the country under their dominion . these shepherds were all very glad of this message , and came away with alacrity all together , being in number two hundred thousand men ; and in a little time they came to avaris .
and now amenophis the king of egypt , upon his being informed of their invasion , was in great confusion , as calling to mind what amenophis , the son of papis , had foretold him ; and , in the first place , he assembled the multitude of the egyptians , and took counsel with their leaders , and sent for their sacred animals to him , especially for those that were principally worshipped in their temples , and gave a particular charge to the priests distinctly , that they should hide the images of their gods with the utmost care . he also sent his son sethos , who was also named ramesses , from his father rhampses , being but five years old , to a friend of his . he then passed on with the rest of the egyptians , being three hundred thousand of the most warlike of them , against the enemy , who met them . yet did he not join battle with them ; but thinking that would be to fight against the gods , he returned back and came to memphis , where he took apis and the other sacred animals which he had sent for to him , and presently marched into ethiopia , together with his whole army and multitude of egyptians ; for the king of ethiopia was under an obligation to him , on which account he received him , and took care of all the multitude that was with him , while the country supplied all that was necessary for the food of the men . he also allotted cities and villages for this exile , that was to be from its beginning during those fatally determined thirteen years . moreover , he pitched a camp for his ethiopian army , as a guard to king amenophis , upon the borders of egypt . and this was the state of things in ethiopia . [ ... ] it was also reported that the priest , who ordained their polity and their laws , was by birth of heliopolis , and his name osarsiph , from osyris , who was the god of heliopolis ; but that when he was gone over to these people , his name was changed , and he was called moses .
''
i will inquire into what cheremon says . for he also , when he pretended to write the egyptian history , sets down the same name for this king that manetho did , amenophis , as also of his son ramesses , and then goes on thus : `` the goddess isis appeared to amenophis in his sleep , and blamed him that her temple had been demolished in the war . but that phritiphantes , the sacred scribe , said to him , that in case he would purge egypt of the men that had pollutions upon them , he should be no longer troubled with such frightful apparitions . that amenophis accordingly chose out two hundred and fifty thousand of those that were thus diseased , and cast them out of the country : that moses and joseph were scribes , and joseph was a sacred scribe ; that their names were egyptian originally ; that of moses had been tisithen , and that of joseph , peteseph : that these two came to pelusium , and lighted upon three hundred and eighty thousand that had been left there by amenophis , he not being willing to carry them into egypt ; that these scribes made a league of friendship with them , and made with them an expedition against egypt : that amenophis could not sustain their attacks , but fled into ethiopia , and left his wife with child behind him , who lay concealed in certain caverns , and there brought forth a son , whose name was messene , and who , when he was grown up to man's estate , pursued the jews into syria , being about two hundred thousand , and then received his father amenophis out of ethiopia .
''
the names osarseph ( osiris - seph ) and peteseph ( ptah - seph ) are related to joseph through the association of osiris and ptah with the moon , and the phonetic analogy of iah ( the egyptian word for moon ) with jah .
`` i have heard of the ancient men of egypt , that moses was of heliopolis , and that he thought himself obliged to follow the customs of his forefathers , and offered his prayers in the open air , towards the city walls ; but that he reduced them all to be directed towards sun - rising , which was agreeable to the situation of heliopolis ; that he also set up pillars instead of gnomons , under which was represented a cavity like that of a boat , and the shadow that fell from their tops fell down upon that cavity , that it might go round about the like course as the sun itself goes round in the other . ''
some of the narratives in the pentateuch can be associated with known astronomical events to provide absolute dates for biblical chronology .
in the simplest random - effects model of meta - analysis involving , say , $n$ studies the data is supposed to consist of treatment effect estimators $y_j$ , $j = 1 , \ldots , n$ , which have the form \[ y_j = \mu + b_j + e_j . \] here $\mu$ is an unknown common mean , $b_j$ is a zero mean between - study effect with variance $\tau^2$ , $\tau^2 \geq 0$ , and $e_j$ represents the measurement error of the $j$th study , with variance $\sigma_j^2$ . then the variance of $y_j$ is $\tau^2 + \sigma_j^2$ . in practice $\sigma_j^2$ is often treated as a given constant , $s_j^2$ , which is the reported standard error or uncertainty of the $j$th study . the problem considered here is that of estimation of the common mean $\mu$ and of the heterogeneity variance $\tau^2$ from the statistical decision theory point of view under the normality assumption . if $\tau^2$ is known , then the best unbiased estimator of $\mu$ is the weighted means statistic , $\tilde\mu = \sum_j w_j y_j$ , with the normalized weights \[ w_j = \frac{ ( \tau^2 + s_j^2 )^{-1} }{ \sum_k ( \tau^2 + s_k^2 )^{-1} } . \] its variance has the form \[ \operatorname{var} ( \tilde\mu ) = \biggl[ \sum_j \bigl( \tau^2 + s_j^2 \bigr)^{-1} \biggr]^{-1} . \] when $\tau^2$ is unknown , to estimate $\mu$ common practice uses a plug - in version of $\tilde\mu$ , so that an estimator $\hat\tau^2$ is required in the first place . usually such an estimator is obtained from a moment - type equation . for example , the dersimonian laird estimator of $\tau^2$ is \[ \hat\tau^2_{DL} = \frac{ \sum_j s_j^{-2} ( y_j - \tilde\mu_{GD} )^2 - ( n - 1 ) }{ \sum_j s_j^{-2} - \sum_j s_j^{-4} / \sum_j s_j^{-2} } , \] with $\tilde\mu_{GD} = \sum_j s_j^{-2} y_j / \sum_j s_j^{-2}$ denoting the graybill deal estimator of $\mu$ . the popular dersimonian laird $\mu$ - estimator is obtained from ( [ es ] ) by using the positive part of $\hat\tau^2_{DL}$ . similarly the estimator of $\tau^2$ , \[ \hat\tau^2_{H} = \frac{ \sum_j ( y_j - \bar y )^2 }{ n - 1 } - \frac{ \sum_j s_j^2 }{ n } , \] leads to the hedges estimator of $\mu$ . the paper questions the wisdom of using under all circumstances the tradition of plugging in $\tau^2$ - estimators to get $\mu$ - estimators . indeed the routine of plug - in estimators may lead to poor procedures . for example , by replacing the unknown $\tau^2$ by $\hat\tau^2$ in the above formula for $\operatorname{var} ( \tilde\mu )$ , one can get a flagrantly biased estimator which leads to inadequate confidence intervals for $\mu$ . a large class of weighted means statistics is motivated by the form of bayes procedures derived in section [ qe ] . these statistics , which typically _ do not _ admit the representation ( [ es ] ) , induce estimators of the weights ( [ we ] ) , which shows the primary role of $\tau^2$ - estimation . the main results of this work are based on a canonical representation of the restricted likelihood function in terms of independent normal random variables and possibly of some $\chi^2$ - random variables . an important relationship between the weighted means statistics with weights of the form ( [ we ] ) and linear combinations of the observations which are shift invariant and independent follows from this fact . our representation transforms the original problem to that of estimating curve - confined expected values of independent heterogeneous $\chi^2$ - random variables . this reduction makes it possible to describe the risk behavior of the weighted means statistics whose weights are determined by a quadratic form . we make use of the concept of permissible estimators which can not be uniformly improved in terms of the differential inequality in section [ inad ] . this inequality shows that the sample mean exhibits the stein - type phenomenon , being an inadmissible estimator of $\mu$ under the quadratic loss when $n > 3$ . a risk function for the weights in a weighted means statistic whose main purpose is $\tau^2$ - estimation is suggested in section [ rrisk ] . it is shown there that under this risk the sample mean is not even minimax . section [ mini ] discusses the case of approximately equal uncertainties , and section [ exa ] gives an example . the derivation of the canonical representation of the likelihood function is given in the appendix ; the proof of theorem [ th1 ] is delegated to the electronic supplement .
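to keep the traditional moment - type estimators concrete , here is a minimal python sketch ( our own illustration , not code from the paper ; the helper names and the toy data are invented ) computing the graybill deal weighted mean together with the truncated dersimonian laird and hedges estimates of $\tau^2$ , and the corresponding plug - in means :

```python
import numpy as np

def graybill_deal(y, s2):
    # weighted mean with weights proportional to 1/s_j^2
    w = 1.0 / s2
    return np.sum(w * y) / np.sum(w)

def dersimonian_laird(y, s2):
    # moment-type estimator of tau^2, truncated at zero
    n = len(y)
    w = 1.0 / s2
    q = np.sum(w * (y - graybill_deal(y, s2)) ** 2)   # Cochran-type Q statistic
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (n - 1)) / denom)

def hedges(y, s2):
    # unweighted moment estimator of tau^2, truncated at zero
    return max(0.0, np.var(y, ddof=1) - np.mean(s2))

def plug_in_mean(y, s2, tau2):
    # weighted means statistic with weights 1/(tau^2 + s_j^2)
    w = 1.0 / (tau2 + s2)
    return np.sum(w * y) / np.sum(w)

y = np.array([0.41, -0.12, 0.55, 0.20, 0.67])    # toy effect estimates
s2 = np.array([0.04, 0.09, 0.06, 0.16, 0.05])    # toy squared uncertainties
for est in (dersimonian_laird, hedges):
    t2 = est(y, s2)
    print(est.__name__, t2, plug_in_mean(y, s2, t2))
```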
the setting with the common mean $\mu$ and the heterogeneity variance $\tau^2$ described in section [ in ] is a special case of a mixed linear model where statistical inference is commonly based on the restricted ( residual ) likelihood function . the ( negative ) restricted log - likelihood function ( , section 6.6 ) has the form \[ \mathcal{L} = \frac{1}{2} \biggl[ \sum_j \frac{ ( y_j - \tilde\mu )^2 }{ \tau^2 + s_j^2 } + \sum_j \log \bigl( \tau^2 + s_j^2 \bigr) + \log \sum_j \bigl( \tau^2 + s_j^2 \bigr)^{-1} \biggr] . \] it is possible that some of the $s_j^2$ are equal ; let $s_i^2$ have the multiplicity $\nu_i$ , so that $\sum_i \nu_i = n$ . then with the index $i$ now taking values from $1$ to $I$ , \[ \mathcal{L} = \frac{1}{2} \biggl[ \sum_i \frac{ \nu_i ( \bar y_i - \tilde\mu )^2 }{ \tau^2 + s_i^2 } + \sum_i \frac{ ( \nu_i - 1 ) u_i^2 }{ \tau^2 + s_i^2 } + \sum_i \nu_i \log \bigl( \tau^2 + s_i^2 \bigr) + \log \sum_i \frac{ \nu_i }{ \tau^2 + s_i^2 } \biggr] . \] here $I$ denotes the number of pairwise different $s_j^2$ , $\bar y_i$ represents the average of the $y_j$ s corresponding to the particular $s_i^2$ , and $u_i^2$ is their sample variance when $\nu_i \geq 2$ . to simplify the notation , the same symbols are kept after this reindexing . in our problem $\bar y_i$ and $u_i^2$ , $i = 1 , \ldots , I$ , form a sufficient statistic for $\mu$ and $\tau^2$ . throughout this paper , we assume that $I > 1$ . otherwise all $\mu$ - estimators of the form ( [ es ] ) reduce to the sample mean ( but see section [ mini ] where $\tau^2$ - estimation for equal uncertainties is considered ) . the results in the appendix relate the likelihood function to the joint density of independent normal , zero mean random variables . the $( I - 1 )$ - dimensional normal random vector $z$ which is a linear transform of $( \bar y_1 , \ldots , \bar y_I )$ has zero mean ( no matter what $\mu$ is ) and the covariance matrix $\operatorname{diag} ( \tau^2 + t_j^2 )$ , $j = 1 , \ldots , I - 1$ , with each $t_j^2$ larger than $\min_i s_i^2$ . to find these numbers , we introduce the polynomial $p ( \lambda ) = \prod_i ( \lambda + s_i^2 )^{\nu_i}$ of degree $n$ , and its minimal annihilating polynomial $m ( \lambda ) = \prod_i ( \lambda + s_i^2 )$ which has degree $I$ . define \[ q ( \lambda ) = m ( \lambda ) \sum_i \frac{ \nu_i }{ \lambda + s_i^2 } ; \] thus $q$ is a polynomial of degree $I - 1$ which has only real ( negative ) roots , denoted by $- t_j^2$ , $j = 1 , \ldots , I - 1$ ( coinciding with the roots of $p^{\prime}$ different from the $- s_i^2$ ) . thus $q ( \lambda ) = n \prod_j ( \lambda + t_j^2 )$ . note that the $t_j^2$ interlace the ordered $s_i^2$ . when $I = 2$ , $q ( \lambda ) = n \lambda + \nu_1 s_2^2 + \nu_2 s_1^2$ , and $t_1^2 = ( \nu_1 s_2^2 + \nu_2 s_1^2 ) / n$ . according to ( [ def ] ) and by using ( [ co7 ] ) one gets \[ \mathcal{L} = \frac{1}{2} \biggl[ \sum_j \frac{ z_j^2 }{ \tau^2 + t_j^2 } + \sum_j \log \bigl( \tau^2 + t_j^2 \bigr) + \sum_i \frac{ ( \nu_i - 1 ) u_i^2 }{ \tau^2 + s_i^2 } + \sum_i ( \nu_i - 1 ) \log \bigl( \tau^2 + s_i^2 \bigr) + \log n \biggr] . \] the representation ( [ rl ] ) of the restricted likelihood function very explicitly takes into account one degree of freedom used for estimating $\mu$ , as it corresponds to $I - 1$ independent zero mean , normal random variables $z_j$ with variances $\tau^2 + t_j^2$ . in addition , this likelihood includes independent $u_i^2$ , each being a multiple of a $\chi^2$ - random variable with $\nu_i - 1$ degrees of freedom . when $\nu_i \geq 2$ , $u_i^2$ is an unbiased estimator of $\tau^2 + s_i^2$ . for $\nu_i = 1$ , $u_i^2 = 0$ with probability one . according to the sufficiency principle , all statistical inference about $\tau^2$ involving the restricted likelihood can be based exclusively on the $z_j$ s and the $u_i^2$ s . their joint distribution forms a curved exponential family whose natural parameter is formed by the $( \tau^2 + t_j^2 )^{-1}$ ( and perhaps by some $( \tau^2 + s_i^2 )^{-1}$ ) . evaluation of the restricted maximum likelihood estimator ( reml ) is considerably facilitated by employing the $z_j$ s and $u_i^2$ s . indeed ( [ rl ] ) shows that this estimator can be determined by simple iterations as \[ \hat\tau^2 \leftarrow \frac{ \sum_j ( z_j^2 - t_j^2 ) ( \hat\tau^2 + t_j^2 )^{-2} + \sum_i ( \nu_i - 1 ) ( u_i^2 - s_i^2 ) ( \hat\tau^2 + s_i^2 )^{-2} }{ \sum_j ( \hat\tau^2 + t_j^2 )^{-2} + \sum_i ( \nu_i - 1 ) ( \hat\tau^2 + s_i^2 )^{-2} } , \] with a moment - type estimate as a good starting point , and truncation at zero if the iteration process converges to a negative number . thus , the reml is related to a quadratic form whose coefficients are inversely proportional to the estimated variances of the $z_j$ s and of the $u_i^2$ s ( cf . , section 8) . the form of the likelihood function also motivates the moment - type equations based on general quadratic forms $\sum_j q_j z_j^2 + \sum_i ( \nu_i - 1 ) r_i u_i^2$ , with positive constants $q_j$ and $r_i$ . the moment - type equation written in terms of the random variables $z_j$ and $u_i$ is \[ E \biggl[ \sum_j q_j z_j^2 + \sum_i ( \nu_i - 1 ) r_i u_i^2 \biggr] = \biggl[ \sum_j q_j + \sum_i ( \nu_i - 1 ) r_i \biggr] \tau^2 + \sum_j q_j t_j^2 + \sum_i ( \nu_i - 1 ) r_i s_i^2 . \] then the estimator of $\tau^2$ by the method of moments is \[ \hat\tau^2 = \frac{ \sum_j q_j ( z_j^2 - t_j^2 ) + \sum_i ( \nu_i - 1 ) r_i ( u_i^2 - s_i^2 ) }{ \sum_j q_j + \sum_i ( \nu_i - 1 ) r_i } . \] unless $\tau^2$ is large , the probability that it takes negative values is non - negligible . non - negative statistics $\hat\tau^2_+$ are used to get $\mu$ - estimators of the form ( [ es ] ) . the representations of the two traditional statistics in section [ in ] easily follow , and a different method - of - moments procedure suggested by paule and mandel is based on solving the equation \[ \sum_j \frac{ \bigl( y_j - \tilde\mu ( \tau^2 ) \bigr)^2 }{ \tau^2 + s_j^2 } = n - 1 , \] which has a unique positive solution , $\hat\tau^2_{PM}$ , provided that the left - hand side exceeds $n - 1$ at $\tau^2 = 0$ . if this inequality does not hold , $\hat\tau^2_{PM} = 0$ .
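the reml fixed point described above can be sketched numerically ; the update rule below is our reading of that iteration under the canonical representation ( a sketch , not the paper's code ) , assuming the quantities $z_j$ , $t_j^2$ , $u_i^2$ , $s_i^2$ and $\nu_i$ are already available :

```python
import numpy as np

def reml_tau2(z, t2, u2, s2, nu, n_iter=200, tol=1e-10):
    """Fixed-point REML iteration for tau^2 in the canonical representation.

    z, t2 : canonical normals z_j ~ N(0, tau^2 + t_j^2) and their t_j^2
    u2, s2: within-group sample variances and the distinct s_i^2
    nu    : multiplicities nu_i (u2[i] carries nu_i - 1 degrees of freedom)
    """
    tau2 = max(0.0, np.mean(z ** 2 - t2))      # moment-type starting point
    for _ in range(n_iter):
        wz = 1.0 / (tau2 + t2) ** 2            # inverse squared variances of z_j
        wu = (nu - 1) / (tau2 + s2) ** 2       # same for the chi^2 parts
        new = (np.sum(wz * (z ** 2 - t2)) + np.sum(wu * (u2 - s2))) \
              / (np.sum(wz) + np.sum(wu))
        new = max(0.0, new)                    # truncate at zero
        if abs(new - tau2) < tol:
            break
        tau2 = new
    return tau2
```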
because of ( [ co7 ] ) , the paule mandel equation can be rewritten in terms of the $z_j$ s and $u_i$ s as \[ \sum_j \frac{ z_j^2 }{ \tau^2 + t_j^2 } + \sum_i \frac{ ( \nu_i - 1 ) u_i^2 }{ \tau^2 + s_i^2 } = n - 1 . \] this representation allows for an explicit form of $\hat\tau^2_{PM}$ in some cases ; for instance , when $I = 2$ the equation reduces to one of low degree . the canonical variables $z_j$ are shift invariant : they do not change when every $y_j$ is replaced by $y_j + c$ for any real $c$ ( any function of the $z_j$ s is shift invariant ) . indeed the use of restricted likelihood is tantamount to the practice of weighted means statistics with shift invariant weights as $\mu$ - estimators ( , section 9.2 ) . formula ( [ rep ] ) in the appendix gives this representation , with the quantities discussed in section [ ba ] . the positive coefficients $\rho_j$ ( the diagonal elements of the diagonal matrix $\operatorname{diag} ( \rho )$ defined in lemma [ lem ] ) can be found from ( [ co6 ] ) or rather from ( [ co5 ] ) ; the bayes rule is the posterior mean of $\mu$ . thus each positive coefficient is designed to estimate the corresponding inverse variance , and , as a function of $\tau^2$ , it decreases ; the resulting inequalities between these coefficients are equivalent . if the support of the prior of $\tau^2$ has at least two points , the bayes estimator does not admit representation ( [ es ] ) , which suggests a more general class of $\mu$ - estimators . namely , we propose to use weighted means statistics with weights from the polyhedron of normalized weight vectors . the bayes weights belong to a smaller part of this polyhedron , namely to the convex hull of the vectors with coordinates proportional to $( \tau^2 + s_j^2 )^{-1}$ for fixed $\tau^2 \geq 0$ . if $\hat\tau^2$ is an estimate of $\tau^2$ , the weights corresponding to ( [ es ] ) lie on the boundary of this convex hull . a corner point of the convex hull always is an inner point of the polyhedron . thus the focus in this paper is on estimators of $\mu$ which admit the representation ( [ ne ] ) , with the weights and correction term as defined above . the last term in the right - hand side of ( [ ne ] ) can be viewed as an arguably necessary heterogeneity correction . notice that ( [ ne ] ) does not need an estimate of $\tau^2$ as a prerequisite . the form of the reml in section [ ba ] suggests such an estimator : $[ \sum_j w_j^2 ( \cdots ) ]_+ / \sum_j w_j^2$ . the proof of theorem 1 cited there shows that any permissible estimator in our situation is determined by some piecewise differentiable positive function . when this function corresponds to a positive quadratic form , one gets an estimator of the form ( [ ne ] ) , so that its $r$ - risk , which vanishes when $\tau^2 = 0$ , grows quadratically in $\tau^2$ . the next result gives a large class of estimators with bounded $r$ - risk improving on the sample mean when $n > 3$ . [ th1 ] under the notation of section [ ba ] , let $\sum_j q_j z_j^2 + \sum_i ( \nu_i - 1 ) r_i u_i^2$ be a quadratic form with positive coefficients . if the estimator has the form ( [ ne ] ) , then its limiting normalized risk satisfies \[ \cdots \geq \frac{2}{ n - 1 } , \] where the independent standard normal random variables appearing in the limit are independent of the $u_i$ s . equal coefficients ( and only they ) provide the asymptotically optimal quadratic form . if all $\nu_i = 1$ , the optimal choice is $q_j \equiv 1$ . the sample mean is not $r$ - minimax ; any estimator ( [ ne ] ) with weights ( [ wco ] ) improves on it if \[ \cdots \leq \frac{ \bigl[ \cdots \bigr] \sum_j b_j q_j }{ \max \bigl[ \max_j q_j^2 t_j^4 , \max_{ i : \nu_i \geq 2 } r_i^2 s_i^4 \bigr] \sum_j b_j q_j^2 } . \] theorem [ th1 ] shows that the traditional weights ( [ weig ] ) are not asymptotically optimal unless the quadratic form coincides ( up to a positive factor ) with the equal - coefficients form . only then ( [ isr ] ) is an equality . thus , the hedges estimator , whose quadratic form does not have equal coefficients , is not asymptotically optimal albeit its performance is the best when $n$ is large . for the mandel paule estimator from section [ ba ] , as well as for the reml , ( [ isr ] ) also holds with the same quadratic form and the same constant . the dersimonian laird estimator is defined by the quadratic form with coefficients proportional to $s_j^{-2}$ . therefore , these three statistics are not optimal for large $\tau^2$ either .
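as a concrete illustration of the paule mandel construction discussed above , here is a small bisection solver ( our own sketch , in the original $y$ - space formulation ; nothing here is from the paper ) . it finds the root of the decreasing function obtained by subtracting $n - 1$ from the left - hand side of the paule mandel equation , returning zero when the existence condition fails :

```python
import numpy as np

def mandel_paule(y, s2, tol=1e-10):
    n = len(y)

    def excess(tau2):
        # Q-type statistic at the plug-in weighted mean, minus (n - 1)
        w = 1.0 / (tau2 + s2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (n - 1)

    if excess(0.0) <= 0:            # no positive root: the estimator is zero
        return 0.0
    lo, hi = 0.0, 1.0
    while excess(hi) > 0:           # bracket the root; excess decreases in tau^2
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```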
the case when $n \leq 3$ was studied earlier . then the sample mean $\bar y$ is admissible ( so that it is automatically minimax under the quadratic loss ) . any estimator ( [ ne ] ) has the form ( [ es ] ) with some $\hat\tau^2$ , and its $r$ - risk grows linearly in $\tau^2$ , for $n \leq 3$ , as $\tau^2 \to \infty$ ( see the electronic supplement ) . by analogy with the stein phenomenon , admissibility of the sample mean when $n \leq 3$ is expected . when $n > 3$ , the minimax value ( which does not exceed one ) can not be smaller than its large - $\tau^2$ limit . indeed for any estimator , this fact can be proven by constructing a sequence of proper prior densities for $\tau^2$ such that the corresponding sequence of the bayes $r$ - risks converges to this limit . thus for large $n$ , the estimators ( [ ne ] ) with the asymptotically optimal weights can not be improved upon . the most natural of these statistics has the form ( [ ne ] ) with equal coefficients ; another , a modified hedges estimator , has the form ( [ es ] ) with a positive - part statistic $[ \cdots ]_+$ . the dersimonian laird rule uses the weights $\{ [ \cdots ]_+ / ( n - 1 ) + s^2 \}^{-1}$ , with $\alpha = n - 1$ ; the modified hedges estimator uses a positive - part statistic $[ \cdots ]_+$ as well . the $r$ - risk of such a rule at $\tau^2 = 0$ can be larger than that of the sample mean . indeed it can be written through $G_k$ , the distribution function of the $\chi^2_k$ - distribution , with a constant $a = [ \cdots ]^{ 1 / ( n + 1 ) }$ ; since the resulting expression is an increasing function of $a$ , \[ \cdots + \frac{ ( n - 3 ) a^2 \bigl[ 1 - G_{ n - 3 } \bigl( a ( n - 3 ) \bigr) \bigr] }{ n - 1 } \] bounds the risk at the origin from below . this inequality shows that , when $a$ is too large , the rule can not have its risk at the origin smaller than that of the sample mean . the dersimonian laird estimator has its $r$ - risk at $\tau^2 = 0$ of the form \[ \int \bigl[ \cdots \bigr]^2 \, \mathrm{d} G_{ n + 1 } ( v ) . \] its risk at $\tau^2 = 0$ is always smaller than that of the modified hedges rule , but the latter is also competitive against the dersimonian laird rule : an explicit inequality between the two risks determines the range of $\tau^2$ where each of them dominates . a direct calculation shows the identity which implies ( [ co2 ] ) . to prove ( [ co3 ] ) , observe that for $i \neq k$ , the $( i , k )$ element of the matrix has the form \[ \cdots = - \frac{ \nu_i \nu_k }{ n } . \] here we used the facts that $q ( \lambda ) = n \prod_j ( \lambda + t_j^2 )$ and $m ( \lambda ) = \prod_i ( \lambda + s_i^2 )$ . to prove ( [ co10 ] ) for fixed $j$ , multiply ( [ coo ] ) by the corresponding factor , divide by the common denominator , and sum up over $i$ to get the required expression for the element of the matrix , where $\delta_{ik}$ is the kronecker symbol ( $\delta_{ik} = 1$ if $i = k$ , $0$ otherwise ) . it is easy to see that the resulting term vanishes unless there are at least two equal indices among the three involved . when all three of these indices coincide , \[ - \frac{ m^3 ( - t_j^2 ) }{ [ q^{\prime} ( - t_j^2 ) ]^3 } \sum_i \frac{ \nu_i }{ ( s_i^2 - t_j^2 )^3 } = - \frac{ m ( - t_j^2 ) \bigl[ q^{\prime\prime} ( - t_j^2 ) m ( - t_j^2 ) - 2 q^{\prime} ( - t_j^2 ) m^{\prime} ( - t_j^2 ) \bigr] }{ 2 [ q^{\prime} ( - t_j^2 ) ]^3 } = \frac{ b_j \bigl[ q^{\prime\prime} ( - t_j^2 ) m ( - t_j^2 ) - 2 q^{\prime} ( - t_j^2 ) m^{\prime} ( - t_j^2 ) \bigr] }{ 2 [ q^{\prime} ( - t_j^2 ) ]^2 } = - \frac{ b_j^2 q^{\prime\prime} ( - t_j^2 ) + 2 b_j m^{\prime} ( - t_j^2 ) }{ 2 q^{\prime} ( - t_j^2 ) } , \] with $b_j = - m ( - t_j^2 ) / q^{\prime} ( - t_j^2 )$ . if , say , $j \neq \ell$ , \[ - \frac{ m^2 ( - t_j^2 ) m ( - t_\ell^2 ) }{ [ q^{\prime} ( - t_j^2 ) ]^2 q^{\prime} ( - t_\ell^2 ) } \sum_i \frac{ \nu_i }{ ( s_i^2 - t_j^2 )^2 ( s_i^2 - t_\ell^2 ) } = - \frac{ m^2 ( - t_j^2 ) m ( - t_\ell^2 ) }{ [ q^{\prime} ( - t_j^2 ) ]^2 q^{\prime} ( - t_\ell^2 ) ( t_j^2 - t_\ell^2 ) } \sum_i \frac{ \nu_i }{ ( s_i^2 - t_j^2 )^2 } = \frac{ m ( - t_j^2 ) m ( - t_\ell^2 ) }{ q^{\prime} ( - t_j^2 ) q^{\prime} ( - t_\ell^2 ) ( t_j^2 - t_\ell^2 ) } = \frac{ b_j b_\ell }{ t_j^2 - t_\ell^2 } . \]
because of ( [ co3 ] ) and ( [ coe ] ) , the vector can be decomposed as $J^{-1} A ( A^{\mathrm{T}} J^{-1} A )^{-1} A^{\mathrm{T}} x + ( \rho^{\mathrm{T}} A^{\mathrm{T}} x ) e$ . thus the quadratic form in the left - hand side of ( [ co7 ] ) can be written as \[ \bigl[ J^{-1} A ( A^{\mathrm{T}} J^{-1} A )^{-1} A^{\mathrm{T}} x + e \rho^{\mathrm{T}} A^{\mathrm{T}} x \bigr]^{\mathrm{T}} C^{-1} \bigl[ J^{-1} A ( A^{\mathrm{T}} J^{-1} A )^{-1} A^{\mathrm{T}} x + e \rho^{\mathrm{T}} A^{\mathrm{T}} x \bigr] = y^{\mathrm{T}} \bigl[ ( A^{\mathrm{T}} J^{-1} A )^{-1/2} A^{\mathrm{T}} J^{-1} + ( A^{\mathrm{T}} J^{-1} A )^{1/2} \rho e^{\mathrm{T}} \bigr] C^{-1} \bigl[ J^{-1} A ( A^{\mathrm{T}} J^{-1} A )^{-1/2} + e \rho^{\mathrm{T}} ( A^{\mathrm{T}} J^{-1} A )^{1/2} \bigr] y = y^{\mathrm{T}} \operatorname{diag} ( \rho ) \, y , \] where the second equality follows from ( [ co10 ] ) and ( [ co9 ] ) . the following important representation for the likelihood is a consequence of lemma [ lem ] : here the $z_j$ are independent normal , zero mean random variables with the variances $\tau^2 + t_j^2$ . indeed the normal random vector $z$ has the covariance matrix $\operatorname{diag} ( \tau^2 + t_j^2 )$ , and the $z_j$ s are shift invariant and independent of the sample mean , implying the independence of the statistics used in section [ inad ] . the coefficients $a_{ij}$ provide a simple expression for the variance deficit of the weighted means statistic . indeed , by dividing ( [ coo ] ) by $\nu_i$ and multiplying it by the appropriate factor , one gets after summing up over all $i$ and $j$ and using ( [ co0 ] ) , ( [ co2 ] ) , \[ \biggl[ \sum_i \frac{ \nu_i }{ \tau^2 + s_i^2 } \biggr]^{-1} - \frac{ \tau^2 + s^2 }{ n } = - \sum_{ i , j } \frac{ a_{ij}^2 }{ \nu_i ( \tau^2 + t_j^2 ) } , \] with $s^2 = \sum_i \nu_i s_i^2 / n$ . this formula provides the representation of the left - hand side of ( [ co5 ] ) as a ratio of two polynomials of degree $I - 1$ and $I$ , respectively , and allows numerical evaluation of these coefficients without calculating the matrix $A$ itself .
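as a quick numerical sanity check of the canonical roots and of the sign of this variance deficit ( our own sketch ; the toy values are invented , and the displayed identity is checked only through its sign implication as reconstructed above ) :

```python
import numpy as np

nu = np.array([2, 3, 1])        # toy multiplicities, sum = n
s2 = np.array([0.5, 1.0, 2.0])  # toy distinct uncertainties
n = nu.sum()
tau2 = 0.7

# q(lambda) = sum_i nu_i * prod_{k != i} (lambda + s_k^2); its roots are -t_j^2
q = np.poly1d([0.0])
for i in range(len(s2)):
    others = np.delete(s2, i)
    q = q + nu[i] * np.poly1d(np.poly(-others))
t2 = -np.roots(q.coeffs)        # the t_j^2; real and positive by the theory

# variance of the optimal weighted mean never exceeds that of the sample mean
lhs = 1.0 / np.sum(nu / (tau2 + s2)) - (tau2 + np.sum(nu * s2) / n) / n
print(lhs <= 0, np.sort(t2.real))   # deficit is nonpositive; roots interlace s_i^2
```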
in the random - effects model of meta - analysis a canonical representation of the restricted likelihood function is obtained . this representation relates the mean effect and the heterogeneity variance estimation problems . an explicit form of the variance of weighted means statistics determined by means of a quadratic form is found . the behavior of the mean squared error for large heterogeneity variance is elucidated . it is noted that the sample mean is neither admissible nor minimax under a natural risk function when the number of studies exceeds three .
the inverse problem is described in figure [ pbinv ] . the radar cross section ( rcs ) quantifies the scattering power of an object at a given incidence and wave frequency . it is defined as the ratio between the radar transmitted power and the incident power density ( in plane wave ) . the rcs measurement process is schematically presented on the right part of figure [ pbinv ] . the object or mock - up is illuminated by a quasi - planar monochromatic wave , inside an anechoic chamber where interferences are limited . the acquisitions are realized at successive discrete frequencies $f_1 , \ldots , f_K$ , for different incidence angles ( by piloting a motorized rotating support ) . from this raw data , a signal processing is performed , mainly consisting of calibration and filtering . at the end , the measurement provides an evaluation of the calibrated complex ( amplitude and phase ) scattering coefficient , for each frequency and incidence . a metallic axi - symmetric object is covered with areas ( see figure [ freq - fix ] ) , each area corresponding to a material with its associated isotropic radioelectric properties , i.e. the complex parameters of permittivity and permeability . the em inverse problem can be expressed as : is it possible to extract some local information on the material properties of each area from the global scattering measurement ? at a given frequency , the system state can be defined by the vector of material parameters ( omitting the frequency index to lighten the notations ) $\mathbf{x} = [ \varepsilon^{\prime}_1 , \varepsilon^{\prime\prime}_1 , \mu^{\prime}_1 , \mu^{\prime\prime}_1 , \ldots ]^{\mathrm{T}}$ . considering all frequencies , the vectors $\mathbf{x}$ and $\mathbf{y}$ respectively define the complete system state and observation . the 2d - axisymmetric maxwell solver software can predict the observation from the system state . assuming a multidimensional gaussian measurement uncertainty model , it leads to a likelihood model ( at a given frequency ) in which the observation is the solver prediction plus a zero mean gaussian noise whose covariance matrix is assumed known . to avoid numerous and heavy computations , we have developed the following global approach . first , the high dimension state space and the associated system response are explored randomly around expected properties ( prior knowledge ) , computations being massively distributed on hpc machines . let the training data be composed of couples $( \mathbf{x}_k , \mathbf{y}_k )$ . it is then processed by n - d statistical techniques ; sensitivity analysis and model reduction techniques can possibly reduce the state space dimension . applying multidimensional regression , it turns out that the model is approximately linear . according to the studied cases , the linearity errors , evaluated by residue analysis and bootstrap techniques , are significant but much smaller than the rcs measurement uncertainties . finally , the following linear gaussian model can be considered ( at a given frequency ) : \[ \mathbf{y} = H \mathbf{x} + h_0 + \mathbf{v} , \qquad \mathbf{v} \sim \mathcal{N} ( 0 , R ) , \] where the deterministic part of the linear model is given by the learned matrix $H$ and the vector $h_0$ ( a sketch of such a fit is given after the prior description below ) . let us fix a frequency . we model our a priori knowledge on $\mathbf{x}$ with a gaussian distribution . the object is divided in blocks of areas , each of them composed of a rather homogeneous material . the location of these blocks is known exactly . the prior mean value is defined with reference values of $\varepsilon^{\prime}$ , $\varepsilon^{\prime\prime}$ , $\mu^{\prime}$ and $\mu^{\prime\prime}$ for each of these blocks . then , for any component , we define a variance as a mix between absolute and relative uncertainty . to take into account the spatial local homogeneity , the covariance matrix is defined block by block independently ( and separately for each of $\varepsilon^{\prime}$ , $\varepsilon^{\prime\prime}$ , $\mu^{\prime}$ , $\mu^{\prime\prime}$ ) by correlation relations between block - sharing components . across frequencies , the em properties follow a prior ar - type evolution model whose parameters are :
* $\rho$ : the frequential correlation parameter ; it depends on the material ( block ) , is $n_b$ - dimensional , $\rho \in [ 0 , 1 ]^{n_b}$ , a mapping giving to each line ( component of $\mathbf{x}$ ) its associated component of $\rho$ .
* the innovation variances : they depend on the material and on each of $\varepsilon^{\prime}$ , $\varepsilon^{\prime\prime}$ , $\mu^{\prime}$ , $\mu^{\prime\prime}$ ; the corresponding covariance is the diagonal matrix composed with blocks of this type .
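returning to the learned linear model above , a minimal sketch of the surrogate fit follows ( our own illustration ; the function name and the shapes are assumptions , not code from the study ) . it learns $H$ and $h_0$ by least squares from solver training couples and exposes the residuals used for the linearity check :

```python
import numpy as np

def fit_linear_surrogate(X, Y):
    # X: (K, d) sampled material parameters; Y: (K, m) Maxwell solver outputs
    K = X.shape[0]
    Xa = np.hstack([X, np.ones((K, 1))])           # append an intercept column
    coef, *_ = np.linalg.lstsq(Xa, Y, rcond=None)  # least-squares fit
    H, h0 = coef[:-1].T, coef[-1]                  # H: (m, d), h0: (m,)
    resid = Y - (X @ H.T + h0)                     # linearity errors
    return H, h0, resid
```

the residuals can then be compared , e.g. by bootstrap , with the rcs measurement uncertainties to decide whether the linear metamodel is acceptable at each frequency .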
conditionally to the frequential correlation parameter $\rho$ , the problem of the determination of the states $( \mathbf{x}_k )$ given the measurements $\mathbf{y}$ can be expressed as a classic linear gaussian hidden dynamic markov process observed at `` times '' $k = 1 , \ldots , K$ ( the frequencies ) : the state and observation noises are independent gaussian noises with known parameters , and the state transition matrix is a known matrix depending on $\rho$ ( see ( [ mod - obs ] ) and ( [ mod - ap ] ) ) . the parameter $\rho$ , intuitively representing the inter - frequency regularity , is unknown and to be estimated . from the probabilistic point of view , it is treated as random and given a prior distribution . the posterior distribution can be decomposed as \[ p ( \mathbf{x} , \rho \mid \mathbf{y} ) = p ( \mathbf{x} \mid \rho , \mathbf{y} ) \, p ( \rho \mid \mathbf{y} ) . \] conditionally to $\rho$ , the system is linear gaussian : the conditional distributions $p ( \mathbf{x}_k \mid \rho , \mathbf{y} )$ can be straightforwardly computed by kalman filtering , including in this off - line context backward kalman smoothing . on the other hand , the term $p ( \rho \mid \mathbf{y} )$ can be evaluated ( up to a normalising constant ) for a given $\rho$ using the likelihood term provided by the kalman filter and the prior distribution . consequently , in order to exploit this conditional structure of the system , kalman smoothers are applied and integrated in an interacting particle approach . this idea of mixing analytic integration ( here the kalman evaluation of $p ( \mathbf{x} \mid \rho , \mathbf{y} )$ ) with stochastic sampling is a variance reduction approach , known as rao - blackwellisation . we choose to implement an efficient interacting particle approach , in order to estimate the marginal distribution $p ( \rho \mid \mathbf{y} )$ . sequential monte carlo ( smc ) is a stochastic algorithm to sample from complex high - dimensional probability distributions . the principle ( e.g. , ) is to approximate a sequence of target probability distributions by a large cloud of random samples termed particles , living in a set $E$ called the state space . between `` times '' $n - 1$ and $n$ , the particles evolve in state space according to two steps , summarized in the diagram below :
* a * selection * step : every particle is given a weight defined by a selection function ( $g_n$ ) . by resampling ( stochastic or deterministic ) , low - weighted particles vanish and are replaced by replicas of high - weighted ones .
* a * mutation * step : each selected particle moves , independently from the others , according to a markov kernel ( $m_n$ ) .
\[ \left[ \begin{array}{c} \zeta_{n-1}^{1} \\ \vdots \\ \zeta_{n-1}^{i} \\ \vdots \\ \zeta_{n-1}^{N_p} \end{array} \right] \underbrace{ \xrightarrow{\ g_n\ } }_{\text{selection}} \left[ \begin{array}{c} \widehat{\zeta}_{n-1}^{1} \\ \vdots \\ \widehat{\zeta}_{n-1}^{i} \\ \vdots \\ \widehat{\zeta}_{n-1}^{N_p} \end{array} \right] \underbrace{ \xrightarrow{\ m_n\ } }_{\text{mutation}} \left[ \begin{array}{c} \zeta_{n}^{1} \\ \vdots \\ \zeta_{n}^{i} \\ \vdots \\ \zeta_{n}^{N_p} \end{array} \right] \]
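the selection mutation scheme just sketched , combined with the tempering sequence introduced next , can be illustrated by a generic one - dimensional sketch ( our own ; `log_lik` is a stand - in for the kalman - filter log - likelihood $\log p ( \mathbf{y} \mid \rho )$ , and the uniform prior , step size and schedule are assumptions ) :

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(rho):
    # stand-in for the Kalman-filter log-likelihood log p(y | rho);
    # in the application this would run a filter over all frequencies
    return -0.5 * ((rho - 0.6) / 0.1) ** 2

def smc_tempering(n_particles=1000, n_steps=20, step=0.1):
    rho = rng.uniform(0, 1, n_particles)           # sample from the prior
    alphas = np.linspace(0, 1, n_steps + 1)
    for a_prev, a in zip(alphas[:-1], alphas[1:]):
        # selection: reweight by the likelihood increment, then resample
        logw = (a - a_prev) * log_lik(rho)
        w = np.exp(logw - logw.max())
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        rho = rho[idx]
        # mutation: one Metropolis-Hastings step keeping pi_a invariant
        prop = rho + step * rng.normal(size=n_particles)
        ok = (prop > 0) & (prop < 1)               # stay in the prior support
        log_acc = a * (log_lik(prop) - log_lik(rho))
        accept = ok & (np.log(rng.uniform(size=n_particles)) < log_acc)
        rho = np.where(accept, prop, rho)
    return rho                                     # cloud approximating p(rho|y)
```

in the application the one - dimensional stand - in would be replaced by the kalman likelihood of the $n_b$ - dimensional $\rho$ , and the final cloud yields the histograms of figure [ hist - rau ] .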
\begin{array } [ c]{cl}% \zeta_{n}^{1 } & \\ \vdots & \\ \zeta_{n}^{i } & \\ \vdots & \\ \zeta_{n}^{n_p } & \end{array } \right]\ ] ] evolving this way , the cloud of particles , and more precisely the occupation distribution ( sum of dirac distributions ) , approximates for each the theoretical distribution defined recursively by the feynman - kac formulae , associated with the potentials and kernels .+ back to our objective of sampling from , we then define the sequence of distributions : where is a sequence of number increasing from to , so that : is prior distribution , easy to sample , is target distribution and sequence admits a feynman - kac type structure with calculable selection functions and markov kernels chosen so that ( metropolis - hastings for example ) .the distribution is then interpreted as being the last distribution of a feynman - kac sequence , on which smc can be performed , the estimator of being the occupation distribution extracted from the last cloud of particles .the occupation distribution which approximates can be represented dimension by dimension via histograms ( see figure [ hist - rau ] ) . for each frequency ,this approximation , associated with the theoretical conditioning relations \\ & cov(\mathbf{x}_k | \mathbf{y } ) = e\left [ cov(\mathbf{x}_k | \rho,\mathbf{y } ) | \mathbf{y } \right ] + cov\left [ e(\mathbf{x}_k | \rho,\mathbf{y } ) | \mathbf{y } \right]\end{aligned}\ ] ] can deliver estimators of respectively the mean and the covariance matrix of .roughly speaking , the posterior estimation is performed by randomly picking a from the final cloud of particles and computing associated samples of by a kalman smoother conditionally to .it is illustrated in figure [ freq - fix ] , with a good agreement between the true state and estimated state .moreover , for any fixed area , the method provides estimators of the mean and marginal variance for every frequency , so that the results can be presented as frequential profiles , with marginal uncertainties ( see figure [ ann - fix ] ) .even when the true ( simulated ) em property profiles are chosen markedly divergent from the prior ar - type model , the method turns out to be robust .it results from the adaptive estimation of which provides small values of ( i.e. weak correlation of em properties for close frequencies ) in the case of highly irregular true profiles .an efficient statistical inference approach has been applied to estimate local material radioelectric properties from global em scattering measurements .it combines intensive computations , meta - modeling and advanced sequential monte carlo techniques dedicated to frequency dynamic estimation .9 e. f. knott , _ radar cross section measurements _ , scitech publishing , 2006 . s. m. pandit , s - m wu , _ time series and system analysis with applications _ , john wiley & sons , 1983 . j. s. liu and r. chen , _ sequential monte carlo for dynamic systems _ , journal of the american statistical association , 93 , 1032 - 1044 , 1998 .p. del moral , a. doucet and a. jasra , _ sequential monte carlo methods for bayesian computation _ , bayesian statistics 8 , oxford university press , 2006 .del moral , _feynman - kac formulae , genealogical and interacting particle approximations _ , springer new york ( series : probability and applications ) , 2004 .
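to make the rao-blackwellised scheme above concrete, the following toy sketch (illustrative code, not the implementation used in this work) weights particles on the correlation parameter by the marginal likelihood returned by a scalar kalman filter; the state-space model, noise levels and flat prior on rho are assumptions chosen only for the demonstration, plain importance sampling stands in for the interacting-particle tempering, and a backward smoother would replace the filtered means in the off-line setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_loglik_and_means(y, rho, q=0.05, r=0.1):
    """scalar kalman filter for x_k = rho*x_{k-1} + N(0,q), y_k = x_k + N(0,r).
    returns log p(y | rho) and the filtered means E(x_k | rho, y_1..k)."""
    m, p = 0.0, 1.0                    # assumed prior mean/variance of x_0
    loglik, means = 0.0, []
    for yk in y:
        m_pred, p_pred = rho * m, rho**2 * p + q            # predict
        s = p_pred + r                                       # innovation var.
        loglik += -0.5 * (np.log(2*np.pi*s) + (yk - m_pred)**2 / s)
        k = p_pred / s                                       # kalman gain
        m, p = m_pred + k * (yk - m_pred), (1 - k) * p_pred  # update
        means.append(m)
    return loglik, np.array(means)

# synthetic data from a "true" rho
true_rho, n = 0.8, 60
x = np.zeros(n)
for k in range(1, n):
    x[k] = true_rho * x[k-1] + rng.normal(0, np.sqrt(0.05))
y = x + rng.normal(0, np.sqrt(0.1), n)

# particle approximation of p(rho | y): weights from the kalman evidence
particles = rng.uniform(0, 1, 500)           # flat prior on rho (assumed)
results = [kalman_loglik_and_means(y, p) for p in particles]
logw = np.array([res[0] for res in results])
w = np.exp(logw - logw.max()); w /= w.sum()

rho_hat = np.sum(w * particles)
# rao-blackwellised state estimate: mixture of conditional kalman means
x_hat = np.sum(w[:, None] * np.vstack([res[1] for res in results]), axis=0)
print(f"posterior mean of rho: {rho_hat:.3f} (true {true_rho})")
```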
the following electromagnetism (em) inverse problem is addressed. it consists in estimating the local radioelectric properties of the materials covering an object from global em scattering measurements, at various incidences and wave frequencies. this large-scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2d maxwell solver, distributed on high performance computing (hpc) machines. applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, on which bayesian inference can be performed. considering the radioelectric properties as a dynamic stochastic process evolving as a function of frequency, it is shown how advanced markov chain monte carlo methods, called sequential monte carlo (smc) or interacting particles, can provide estimations of the em properties of each material, together with their associated uncertainties.
in the middle of the last century, in the wake of observations made by navy officers during world war ii, according to which the radar echo of a plane flying near the horizon above the ocean is modulated by interference fringes, australia was the home of the founding fathers of radio interferometry and the site of pioneering observations using the so-called ``sea-cliff interferometer''. the principle of the method was to observe a radio source as it rises above the horizon with a single antenna located on top of a cliff above the ocean; the direct wave and its reflection on the water surface interfere and produce interference fringes that allow for considerably improved angular resolution with respect to what was possible at that time. observations of solar spots, soon followed by observations of various radio sources, were then reported.

the present work is an illustration of the same mechanism causing correlations between apparent solar oscillations simultaneously detected by two distant observatories, respectively located in ha noi (viet nam) and learmonth (australia), using radio telescopes operated at 1.415 ghz. in this case, the oscillations are not observed on the rising sun but at large elevations: the reflected wave reaches the antenna in one of its side lobes, at large angle with respect to the beam. as a result, the oscillations have amplitudes of a few per mil, rarely exceeding 1%. the reflections occur on the ground surrounding the antenna and the oscillation periods are in the range of a few minutes. the intriguing existence of correlations between the ha noi and learmonth observations had first been considered as an argument against a possible instrumental effect. it is now clearly established that the cause of the correlation is purely instrumental. the following sections develop this argument, a brief preliminary account of which has been presented elsewhere.

interferences between the direct plane wave emitted by a radio source and detected in an antenna, and its specular reflection on a horizontal surface (flat ground or ocean), have been known to produce oscillations since the first days of radio interferometry. writing the difference in path length between the interfering waves, the frequency and the wavelength (here approximately 21 cm), the time difference between the two interfering waves is the path difference divided by the speed of light. introducing a parameter to account for the attenuation resulting from the reflection on the ground and from the lesser gain of the side lobe in which the reflected wave is detected, the detected signal contains an interference term that oscillates with the phase difference between the two waves. in a time interval centred on the time of measurement, the time dependence of the oscillation can therefore be described as a sine wave whose period is set by the rate of change of the path difference.

[figure: antenna at height h above the ground and its image in the ground mirror; lower panel: departure from exact specular reflection (mean ray) and definition of the angles.]

an important consequence of the above mechanism is the existence of a correlation between the periods of simultaneous oscillations independently detected by two distant observatories such as ha noi and learmonth. consider two radio telescopes operated at the same frequency, at given heights above a flat ground, in observatories located at respective longitudes and latitudes; the periods then follow from the respective rates of change of the two path differences.
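the fringe mechanism described above can be sketched numerically. in the following toy computation the antenna height, the voltage ratio of the reflected wave and the linear elevation ramp are illustrative assumptions; the detected power is modelled as 1 + 2*alpha*cos(2*pi*delta/lam) with path difference delta = 2*h*sin(elevation).

```python
import numpy as np

lam = 0.21          # wavelength at ~1.4 ghz [m]
h = 8.0             # antenna height above the reflecting ground [m] (assumed)
alpha = 3e-3        # relative amplitude of the reflected wave (assumed)

t = np.linspace(0, 1200, 4000)                  # a 20-minute window [s]
elev = np.deg2rad(40) + 2*np.pi/86400 * t       # crude elevation ramp (assumed)
delta = 2 * h * np.sin(elev)                    # path difference [m]
phase = 2 * np.pi * delta / lam
signal = 1 + 2 * alpha * np.cos(phase)          # normalised detected power

# instantaneous oscillation period T = 2*pi / (d phase / dt)
T = 2 * np.pi / np.gradient(phase, t)
print(f"fringe amplitude ~ {2*alpha:.1e}, period ~ {T.mean():.0f} s")
```

with these values the period comes out near four minutes, consistent with the few-minute oscillations quoted above.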
here the fit spans a 20 min interval centred on the time of measurement. the two fitted parameters measure the relative amplitude and phase of the oscillation, respectively. the problem being linear in these parameters allows for an easy explicit calculation of their values. figure 4 illustrates the procedure. significant oscillations are defined by three cuts: a small fit residual, a large fraction of the signal fluctuation in the 20 min interval accounted for by the oscillation, and an oscillation amplitude exceeding the noise level. different selection criteria have been tried and the robustness of the corresponding conclusions has been ascertained. distributions of time versus period using sensible selection criteria display very clear patterns, which are illustrated in figure 5 and follow the trend expected from specular reflection on the ground. examples of the evolution of the phase of the oscillations from one day to the next are displayed in figure 6. here again, the dependence on season, hemisphere and time in the day is as expected from specular reflection on the ground.

[figure 4: data and fit; the lower panel compares data (blue) and fit (red) after subtraction of the baseline and division by the amplitude.]
[figure 6: upper panel: predictions for reflections from the roof and from the ground; lower panel: learmonth data (red) collected in the 10 central days of may 2012, the blue lines being ground specular-reflection multipath predictions.]

having illustrated by a few examples in figures 5 and 6 the good qualitative agreement between the observed oscillations and the predictions of a multipath model assuming perfect specular reflection on the ground, we now attempt a more quantitative analysis of the effect. the measurements of the period and phase of the oscillations provide independent evaluations of the altitude of the antenna above the ground. relation 6 relates the height of the antenna above the reflective surface to the measured value of the period of the oscillations under the hypothesis of specular reflection. figure 7 displays the distributions of heights obtained this way for the oscillations observed in learmonth and ha noi. the former is dominated by ground reflections while the latter displays a more complex structure, revealing reflections from the observatory roof in the morning and late afternoon and from the ground in the early afternoon. knowing the height, it is easy to map the impact coordinates on the ground. they do not display any particular structure in the learmonth case, which is consistent with a flat ground, while revealing clearly distinct regions in the ha noi case that are unambiguously associated with the topography of the environment, as shown in figure 3.
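the explicit linear fit described above can be sketched as follows; the window length, trial-period grid and synthetic signal are illustrative assumptions, the point being that for a fixed trial period the amplitude and phase parameters enter linearly and are obtained by ordinary least squares.

```python
import numpy as np

def fit_oscillation(t, s, period):
    """least-squares fit of s(t) ~ c0 + a*cos(2*pi*t/period) + b*sin(...).
    linear in (c0, a, b), so the solution is explicit; returns amplitude,
    phase and residual rms."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (c0, a, b), *_ = np.linalg.lstsq(A, s, rcond=None)
    resid = s - A @ np.array([c0, a, b])
    return np.hypot(a, b), np.arctan2(b, a), resid.std()

# scan trial periods over a 20-min window and keep the best fit (illustrative)
rng = np.random.default_rng(1)
t = np.linspace(0, 1200, 600)
s = 1 + 6e-3*np.cos(2*np.pi*t/240 + 0.7) + rng.normal(0, 1e-3, t.size)
periods = np.arange(120, 600, 5.0)
best = min(periods, key=lambda P: fit_oscillation(t, s, P)[2])
amp, phase, rms = fit_oscillation(t, s, best)
print(f"best period {best:.0f} s, amplitude {amp:.1e}, residual rms {rms:.1e}")
```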
using reasonable values for the parameters describing a small departure from exact specular reflection on the ground, it is easy to obtain a good description of the height distributions, as illustrated in figure 7, where a common value has been used for one of the parameters while the others take values within a small range. the quality of the data does not allow one to measure these parameters precisely, and other combinations of their values can be found that also give acceptable results. however, in all cases, the departure from exact specular reflection that can be accommodated remains small. an effect of the inclusion of such a departure is to significantly lower the estimate of the antenna height with respect to exact specular reflection: from 8.3 m to 7.7 m in the case of learmonth and, in the case of ha noi, from 6.2 m to 5.7 m for the roof and from 26 m to 21 m for the ground, significantly improving the agreement with the real dimensions (respectively 7.5 m, 5.6 m and 17.7 m).

[figure 7: height distributions obtained from the learmonth (upper panel) and ha noi (lower panel) data in november-december 2013; the blue lines show model predictions allowing for small departures from exact specular reflection (see text).]

the correlation expected between oscillations observed in the morning at learmonth from the ground and in ha noi from reflections on the observatory roof is illustrated in figure 2 (lower panel). the agreement with observation is remarkable given the crudeness of the model used.

[figure 8: distributions for oscillations passing the amplitude cut, for learmonth (upper panel) and ha noi (lower panel) data; the ha noi distributions display separately ground reflections (black) and roof reflections (blue in the morning and red in the afternoon); the scale of the ordinate is linear for learmonth and logarithmic for ha noi.]

relation 3, which relates independent measurements of the period and phase of the observed oscillations, offers a crucial test of their multipath nature. note that the path difference between the direct and reflected waves is of the order of the antenna height for the rather large values of the sun elevation associated with the observed oscillations, meaning several tens of wavelengths. as remarked earlier, the phase of the oscillation is measured up to an integer multiple of \(2\pi\), so that the path difference cannot be measured directly from a single phase value but is simply evaluated from the phase difference between two successive measurements. the distribution obtained this way is displayed in figure 8 for learmonth and ha noi oscillations passing the amplitude cut. mean (rms) values of 1.01 (0.11) and 1.00 (0.13) are obtained for learmonth and ha noi respectively, giving in both cases very strong evidence for the multipath origin of the observed oscillations. note that the ha noi data mix reflections
from the ground and from the observatory roof, spanning a broad range of height values. the learmonth data cover the whole year and the simple topography implies reflections from a flat ground with a well defined height, 8.24 m on average. it fluctuates only little over the year and the rms value of its distribution is 1.21 m on average. comparing the summer months (november to february) with the winter months (may to august) for the retained oscillations, the mean elevation of the sun and the mean amplitude of the oscillations are both found to vary significantly, while the width of the height distribution remains constant to better than 10% of its value. the number of retained oscillations is nearly twice as large in winter as in summer.

when observing the sun, multipath effects between the direct wave reaching the antenna in the main lobe and its reflection on the ground reaching the antenna in a side lobe have been shown to produce correlations between the periods of oscillations observed independently by two distant radio telescopes. the case of observations made at 1.4 ghz in ha noi (viet nam) and learmonth (australia) has been studied in some detail. strong evidence for the multipath origin of the observed oscillations has been obtained from the relation between their periods and their phases. good agreement between observations and model predictions has been obtained, and the departure from exact specular reflection that the data can accommodate has been shown to be small. the oscillations have periods and phases that are remarkably simple functions of time and are well described by the model. their amplitudes, at the level of a few per mil, are consistent with the gain drop expected between the main and side lobes of the antenna pattern: indeed, for a 65% aperture efficiency we expect a gain of some 30 dbi for the main lobe compared to a much lower gain for a typical side lobe, namely a voltage ratio of order 10^3 between the direct and reflected waves. the existence of a correlation between independent observations from two distant observatories, together with the large values of the elevation at which the oscillations were observed, had been used earlier as arguments against an instrumental explanation. it is now clear that the effect is of purely instrumental nature, making a search for genuine solar oscillations in this range of periods and amplitudes unfeasible with such instruments.

we are deeply indebted to the learmonth solar observatory staff, who are making their data available to the public, and particularly to dr owen giersch for having kindly and patiently answered many of our questions related to such data and, in particular, for having first mentioned a possible contribution of multipathing. we are grateful to dr alain maestrini, dr pierre lesaffre, dr alan rogers and the anonymous referee for very useful comments. we acknowledge financial support from the vietnam national foundation for science and technology development (nafosted) under grant number 103.08-2012.34, the institute for nuclear science and technology, the world laboratory, the odon vallet foundation and the rencontres du vietnam.

references:
1. bolton, j.g. 1948, nature, 162, 141; bolton, j.g., & stanley, g.j. 1948, nature, 161, 312; bolton, j.g., & stanley, g.j. 1948, austral. j. sci. res., a1, 58; bolton, j.g., & stanley, g.j. 1949, austral. j. sci. res., a2, 139; bolton, j.g., stanley, g.j., & slee, o.b. 1949, nature, 164, 101.
2. bolton, j.g. 1982, proc. astron. soc. australia, 4, 349.
3. hiep, n.v., et al. 2013, sol. phys., 289, 3, 939.
4. mccready, l.l., pawsey, j.l., & payne-scott, r. 1947, proc. roy. soc., a190, 357.
5. phuong, n.t., et al. 2014, _multipath generated correlations between apparent solar oscillations observed by two distant radio telescopes_, submitted for publication in sol. phys.
6. phuong, n.t., et al. 2014, _the vatly radio telescope: performance study_, submitted for publication in comm. phys.
7. sullivan iii, w.t. 1991, iau coll. 131, asp conference series, vol. 19, t.j. cornwell and r.a. perley (eds.).
a multipath mechanism similar to that used in australia sixty years ago by the sea - cliff interferometer is shown to generate correlations between the periods of oscillations observed by two distant radio telescopes pointed to the sun . the oscillations are the result of interferences between the direct wave detected in the main antenna lobe and its reflection on ground detected in a side lobe . a model is made of such oscillations in the case of two observatories located at equal longitudes and opposite tropical latitudes , respectively in ha noi ( viet nam ) and learmonth ( australia ) , where similar radio telescopes are operated at 1.4 ghz . simple specular reflection from ground is found to give a good description of the observed oscillations and to explain correlations that had been previously observed and for which no satisfactory interpretation , instrumental or other , had been found . radio detection multipath solar oscillations
the world airline network (wan) has massively increased the speed and scope of human mobility. this boon for humanity has also created an efficient global transport network for infectious disease. pandemics can now occur more easily and more quickly than ever before. the accelerating emergence of novel pathogens exacerbates the situation. better understanding of global dispersal dynamics is a major challenge of our century. rapid assessment of an emerging outbreak's dissemination potential is critical to response planning. we do not know where the next pandemic threat might emerge. mexico was not a prime candidate for an influenza outbreak, nor west africa for ebola. preemptively mapping the pandemic influence of individual airports could contribute substantially to monitoring and response plans.

while exact relationships between the wan and pandemic spread are difficult to model, simulation studies suggest that topological descriptors which describe epidemic outcomes on network models also have explanatory power for relationships between the topology of the wan and pandemic spread. observational studies of influenza, malaria, and dengue fever support this conclusion. given the topology of a network, the minimal disease transmission rate which allows epidemics is given by the inverse of the spectral radius of the network's adjacency matrix, and the typical outcome and time course of an epidemic follow a closed-form solution governed by the degree distribution of the network. the wan's topological structure is well characterized. it is a small-world, scale-free network with strong community structure, imposed partly by spatial constraints. the majority of airports (70%) serve as bridges which connect a densely interconnected core of 73 major transport hubs (2%) to regional population centers and peripheral airports (28%). nodes which connect communities can be distinct from high-degree nodes within communities. since the wan is designed to optimize passenger flow, the network's temporal structure has little effect at time scales relevant for pandemic spread.

topological descriptors of epidemic dynamics, however, can only describe typical outcomes. they do not describe the structure of the variation around the typical outcome, which is dismissed as stochastic when mentioned at all. even within the constraints of a simple branching process model, empirical estimates of the probability of epidemics show substantial variation around the analytically derived solution, see figure [fig:branching]. actual outcomes of emergent infectious diseases are crucially shaped by chance events in the early phases of their emergence. a clear understanding of how seed location influences global outcomes would substantially improve public health planning. the development of sophisticated, parameter-rich epidemic simulators provides powerful tools for exploring relationships between seed location and epidemic outcomes. common frameworks encompass demographic and mobility characteristics via either metapopulation or agent-based assumptions. careful tuning of these models has produced results which well match the spread of the 2009 influenza epidemic. yet the complex interactions between model structure, input parameters, and estimation methods make interpretation of model-based results challenging, especially when attempting to generalize to future outbreaks for which epidemic parameters are fundamentally unknowable.
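the branching-process variability mentioned above (figure [fig:branching]) can be reproduced in a few lines. the sketch below compares the fixed-point solution of q = exp(r0*(q - 1)) for a poisson-offspring process with simulated outbreak fractions; the 500-case threshold for a "major outbreak" and the 1000 runs per point are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def analytic_major_outbreak_prob(r0, iters=200):
    q = 0.5                       # fixed-point iteration for extinction prob.
    for _ in range(iters):
        q = np.exp(r0 * (q - 1.0))
    return 1.0 - q

def simulated_major_outbreak_prob(r0, runs=1000, threshold=500):
    major = 0
    for _ in range(runs):
        infected, total = 1, 1
        while 0 < infected and total < threshold:
            infected = rng.poisson(r0 * infected)   # next generation
            total += infected
        major += total >= threshold
    return major / runs

for r0 in (1.2, 1.5, 2.0, 3.0):
    print(f"r0={r0}: analytic {analytic_major_outbreak_prob(r0):.2f}, "
          f"simulated {simulated_major_outbreak_prob(r0):.2f}")
```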
if, however, two radically different modeling approaches result in such high agreement both with each other and with reality, then the principal driver of outcomes should be expressible with a small parameter set. evidence suggests that simple probabilistic models incorporating local incidence, travel rates, and basic transmission parameters are sufficient to predict outcomes of complex metapopulation-based simulations. recent theoretical work suggests that the apparent stochasticity in the early phases of a network-mediated epidemic process seeded from a given node can be explained by the expectation of the force of infection of epidemic processes seeded from that node. the aim of this study is to evaluate if this finding generalizes to realistic scenarios of wan-mediated pandemic disease spread.

our model of the wan is based on the 2014 release of the open flights database. we selected all airports serviced by regularly scheduled commercial flights, resulting in a list of 3,458 airports connected by 68,820 routes served by 171 different aircraft types. we simplify the network by replacing multiple routes between airports by a single edge whose weight is the sum of the available seats on all routes connecting the two airports, under the assumption that the aircraft type reflects the airline's best judgment of the importance of the route. aircraft seating capacity was estimated based on aircraft descriptions on worldtrading.net and airliners.net, using airlinecodes.co.uk to translate the iata aircraft codes into aircraft type.

the expected force of a network node is defined as the expectation of the force of infection generated by an epidemic process seeded from the node into an otherwise fully susceptible network, after two transmission events and no recovery. the force of infection in a network is directly proportionate to the current number of infected-susceptible edges or, in a weighted network, the sum of all such edge weights. its expectation after two transmissions is given by the entropy of the distribution of this sum over all possible ways the first two transmissions could occur. applied to the wan, the resulting quantity is **a**irport _i_'s **e**xpected **f**orce (aef): the enumeration runs over all possible ways to observe two transmissions seeded from airport _i_, each term being the weighted degree of the transmission pattern multiplied by the probability that this pattern is observed, divided by the appropriate normalization. we here further normalize aef values to the range [0,1]. pandemic status is declared in terms of an incidence threshold per 100,000 inhabitants in several world regions; this declaration proved robust to the choice of threshold and to replacing the ``three regions'' criterion with ``100 cities.'' results for each airport are reported in terms of the median over 20 runs (the maximum number supported by the public gleamvis client). if the threshold is not passed after 365 days (the maximum length supported by the public gleamvis client), we declare that no pandemic occurred.
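a toy version of the two-transmission enumeration, for an unweighted graph, is sketched below. the weighted aef used in this work would replace the unit edge counts with seat-weighted degrees and multiply each pattern by its occurrence probability; the random test graph is purely illustrative.

```python
import numpy as np
import networkx as nx

def expected_force(g, seed):
    """entropy of the normalised cluster degrees over all ordered ways the
    first two transmissions can leave `seed` (unweighted toy version)."""
    cluster_degrees = []
    for j in g.neighbors(seed):
        c1 = {seed, j}
        for u in c1:                         # second transmission from i or j
            for k in g.neighbors(u):
                if k in c1:
                    continue
                c2 = c1 | {k}
                # cluster degree: edges from the infected set to susceptibles
                d = sum(1 for a in c2 for b in g.neighbors(a) if b not in c2)
                if d > 0:
                    cluster_degrees.append(d)
    if not cluster_degrees:
        return 0.0
    d = np.asarray(cluster_degrees, dtype=float)
    dbar = d / d.sum()
    return -np.sum(dbar * np.log(dbar))

g = nx.erdos_renyi_graph(60, 0.08, seed=3)
scores = {n: expected_force(g, n) for n in g.nodes}
best = max(scores, key=scores.get)
print(f"max expected force at node {best} (degree {g.degree(best)})")
```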
for an outbreak to become a pandemic ,its basic reproductive number must surpass the basic epidemic threshold needed to establish a disease in a local population by a sufficient amount to also overcome finite subpopulation size effects and diffusion rates to neighboring populations .a branching process approximation suggests that invasion thresholds in metapopulation models depend on the outbreak s value , the variance of the network s degree distribution , and the mobility rate between subpopulations .the gleamviz model specifies the last two values , reducing invasion thresholds to a function of .however , as shown in figure 1 , even a pure branching process shows substantial variability around the theoretical probability of achieving a large outbreak . for pandemics mediated by the wan , the question of interestis how the invasion threshold varies for different airports .we empirically observe invasion thresholds on the wan as follows .ten seed airports are selected , one from each decile of the range of aef values , see table [ tab : seeds ] .the basic reproductive number is defined as , the transmission to recovery ratio . keeping fixed , we vary over the range [ 0.4 , 0.5 ] , and observe which seeds trigger a pandemic at each value under the simulation framework described above . for ,no simulations reached pandemic status , and for , all simulations resulted in a pandemic .often , diseases of concern are known to be competent of invading the network . here , the outcome of interest is not if a pandemic occurs , but rather how long until an outbreak reaches pandemic status .we measure relationships between aef and time to pandemic status as follows .one hundred world airports were chosen such that they evenly cover the range of measured aef values .the simple seir model used previously is extended to the full model used by the gleamvis group to replicate the 2009 pandemic .this estimates transmission rate as , well above the invasion threshold of determined empirically above .further , the infected compartment of the seir model is divided into three categories : asymptomatic , symptomatic travelers , symptomatic non - travelers .these categories affect the mobility model , and non - symptomatic individuals have reduced transmissibility . for each seed location, we observe both the number of days until pandemic status is reached and the number of days until peak global incidence . 
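the bookkeeping of this threshold scan can be sketched as follows. `goes_pandemic` is a hypothetical stand-in for a batch of simulator runs (no such function exists in gleamviz), and the toy dependence of the invasion barrier on aef is an assumption used only to make the loop executable.

```python
import numpy as np

rng = np.random.default_rng(4)

def goes_pandemic(aef, beta, mu=0.5):
    # toy surrogate: pandemic if r0 = beta/mu exceeds a seed-dependent barrier
    barrier = 0.86 + 0.14 * (1 - aef) + rng.normal(0, 0.01)
    return beta / mu > barrier

seeds_aef = {"hub": 0.95, "regional": 0.6, "peripheral": 0.2}  # illustrative
betas = np.arange(0.40, 0.501, 0.005)
for name, aef in seeds_aef.items():
    # the paper reports the median over 20 runs; requiring all 20 runs to
    # reach pandemic status is a simplification for this sketch
    threshold = next((b for b in betas
                      if all(goes_pandemic(aef, b) for _ in range(20))), None)
    print(name, "minimal pandemic-competent beta:", threshold)
```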
both outcomes are highly correlated since, once pandemic status is achieved, further disease development is determined by network topology. the purpose of measuring peak global incidence is that this measure is unambiguous, while any definition of ``first day of pandemic status'' is somewhat arbitrary. a shapiro-wilk test of the observed times to peak global incidence suggests that this data is approximately normally distributed (under the null hypothesis that the data is normally distributed), while the distribution of observations of the first day of pandemic status is right-skewed. relationships between outcomes and aef are measured by pearson correlation. we additionally test correlations to weighted and unweighted versions of each airport's betweenness, degree, and eigenvalue centralities, and also to verma et al.'s t-core, a variant of the k-core which counts triangles.

the robustness of aef values is examined by observing their relative change while progressively degrading the model wan from which they are derived. the network is degraded by removing from one to 15 percent of u.s. airports from the network along with their associated edges. community-based analyses of the wan suggest that u.s. airports form one large community. the aef of all remaining world airports is then computed. three different random removal schemes were used: uniform over all airports, selection weighted by airport degree, and selection weighted by aef. the resulting aef values are compared with the original aef values. we record the number of airports whose degraded aef departs from its original aef by more than 1% and by more than 5%. reported results are the average over ten runs, and show the amount of degradation for both u.s. and non-u.s. airports.
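the outcome statistics reduce to two standard tests, sketched here on synthetic stand-in data (the linear aef-delay model and the noise level are assumptions made only for the demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
aef = rng.uniform(0, 1, 100)
days_to_peak = 160 - 40 * aef + rng.normal(0, 6, 100)   # illustrative model

w, p_normal = stats.shapiro(days_to_peak)       # normality of outcome times
r, p_corr = stats.pearsonr(aef, days_to_peak)   # strength of aef relationship
print(f"shapiro-wilk p = {p_normal:.2f}; pearson r = {r:.2f} (p = {p_corr:.1e})")
```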
even within the united states ,only 22% of aef values changed by more than 5% .see figure [ fig : deg ] .in all cases , aef explains much of the variation in epidemic outcomes , suggesting that the early development of a pandemic is not stochastic , but rather strongly structured by the local connectivity of the seed location .the ability of the aef to summarize this connectivity contributes substantially to our understanding of the role of individual airports in pandemic diffusion .these results are in harmony with other recent work claiming that relative arrival times of wan - mediated pandemics are independent of disease - specific parameters and that a simple branching process model describes early developments as well as complex metapopulation simulations .degradation of the network had , in general , limited effect on airport aef values .wrong information regarding a specific node could , however , produce a misleading aef value for that airport .epidemics seeded from airport pbj ( paama island , vanuatu ) took longer than expected to achieve pandemic status .this airport is probably mischaracterized in the open flights database , as flights to this simple grass strip are not shown on the vanuatu airlines online booking system ( ` http://www.airvanuatu.com/ ` , last visited 23 march 2015 ) . in the opposite direction , narita airport ( nrt , tokyo , japan ) showed significantly greater pandemic risk than predicted by its aef. this could be due to japan s intense population density combined with high local mobility , factors captured in the gleamvis simulator but not the open flights database .two outliers highlight a structural blind spot of the aef metric .epidemics seeded from airports zrj ( round lake , canada ) and pvh ( porto velho , brazil ) took longer than expected to achieve pandemic status .zrj is part of a small but locally dense community of airports serving first nation communities in canada .this community has limited connectivity to the rest of the wan , and zrj is three flights distant from any airport outside this community ( winnipeg s james armstrong richardson airport ywg , chicago midway mdw , toronto pearson yyz ) .likewise , pvh is two flights from any of brazil s international transport hubs .the aef is here derived from an airport s two - hop neighborhood , meaning for certain airports it is unaware of these network community boundaries .this limitation could perhaps be overcome by instead computing aef based on a three - hop neighborhood .given , however , that the wan s effective diameter is four hops , and the general good performance of the aef , it is not clear that such an extension would substantially improve results globally . airport expected force summarizes the size , density , and diversity of each airport s neighborhood in the wan .it combines features of degree , neighbor degree , and betweenness centrality in a statistically coherent manner .airport degree is not a good descriptor of pandemic outcomes , since it does not account for a neighbor s onward connectivity .guimera et al noted that high degree does not well correlate to high centrality . 
nor does low degree correlate to an airport's connection to the wider network, as illustrated by comparing sweden's linkping city airport (lpi) to alaska's huslia airport (hsl). hsl has four outbound routes, which connect to other rural alaskan airports. lpi has only one outbound route, which connects to amsterdam schipol. verma et al propose instead characterizing airports based on the number of network triangles they take part in, the t-core. plotting airport t-core against epidemic outcomes shows that its ability to explain epidemic outcomes is a result of its ability to successfully segment the wan into core and periphery, see figure [fig:mcomp]. thus t-core and aef capture complementary aspects of an airport's role in the wan.

the applicability of the aef could be extended by modifying it to allow for varying transmissibility at individual airports. such an extension would allow it to express differences in, for example, competent vector species populations or health care system readiness at different world locations. since the aef is the expectation of the force of infection, such an extension merely requires modifying the calculation of each transmission pattern's force of infection along with the probability of that specific pattern occurring. both criteria can be met by adjusting edge weights in the underlying network model, implying that this extension could be implemented using the same framework as outlined in the current work. it would also be interesting to apply the expected force framework to disease spread through the world shipping network, a major transport system for several vector-borne pathogens along with their vector species. the approach could also be tested on more local transmission network models, such as contacts in a hospital ward or city-wide mobility data acquired from, for example, mobile phones.

an outbreak's debut location is highly influential in its ability to become a pandemic threat. the aef metric succinctly captures this influence, and can help inform monitoring and response strategies. these investigations pave the way for the development of simple, robust models capable of informing preparedness planning and policy directives.

the max planck society has filed for a patent on the use of the expected force metric to assess spreading risk on the world airline network. gl conceived and carried out the experiments and wrote the manuscript. we thank the gleamvis team for providing public access to their simulator with the only requirement being appropriate citation. trivik verma provided measures of airport t-core.

[figure [fig:branching]: the analytic probability of a major outbreak as a function of the base reproductive number of the disease process, shown as the solid blue line; the black dots show empirically observed probabilities from simulations of the same model. each dot is the observed fraction of major outbreaks out of 100 simulated outbreaks for a given reproductive number; for each value, 100 dots are generated.]

[table [tab:seeds]: seed locations. the following airports were selected as seed locations for testing relationships between aef and invasion risk. the table additionally reports the number of days for an outbreak to reach pandemic status (``pand.'') at the minimal observed transmission rate for which a pandemic occurred, along with each airport's t-core, (un)weighted degree, and (un)weighted eigenvalue centralities.]
massive growth in human mobility has dramatically increased the risk and rate of pandemic spread. macro-level descriptors of the topology of the world airline network (wan) explain middle- and late-stage dynamics of pandemic spread mediated by this network, but necessarily regard early-stage variation as stochastic. we propose that much of this early-stage variation can be explained by appropriately characterizing the local topology surrounding the debut location of an outbreak. based on a model of the wan derived from public data, we measure for each airport the expected force of infection (aef) which a pandemic originating at that airport would generate. we observe, for a subset of world airports, the minimum transmission rate at which a disease becomes pandemically competent at each airport. we also observe, for a larger subset, the time until a pandemically competent outbreak achieves pandemic status given its debut location. observations are generated using a highly sophisticated metapopulation reaction-diffusion simulator under a disease model known to well replicate the 2009 influenza pandemic. the robustness of the aef measure to model misspecification is examined by degrading the underlying model wan. aef powerfully explains pandemic risk, showing a correlation of 0.90 to the transmission level needed to give a disease pandemic competence, and a correlation of 0.85 to the delay until an outbreak becomes a pandemic. the aef is robust to model misspecification: for 97% of airports, removing 15% of airports from the model changes their aef metric by less than 1%. appropriately summarizing the size, shape, and diversity of an airport's local neighborhood in the wan accurately explains much of the macro-level stochasticity in pandemic outcomes.
a vast number of astronomical observations suggests that magnetic fields play a crucial role in the dynamics of many phenomena of relativistic astrophysics, either on stellar scales, as is the case for pulsars, magnetars, compact x-ray binaries, short and long gamma-ray bursts (grbs) and possibly for the collapse of massive stellar cores, but also on much larger scales, as is the case for radio galaxies, quasars and active galactic nuclei (agns). a shared aspect in all these phenomena is that the plasma is essentially electrically neutral and the frequency of collisions is much larger than the inverse of the typical timescale of the system. the mhd approximation is then an excellent description of the global properties of these plasmas and has been employed with success over several decades to describe the dynamics of such systems well into their nonlinear regimes. another important common aspect of these systems is that their flows are characterized by large magnetic reynolds numbers, defined in terms of the typical sizes and velocities of the flow and of the magnetic diffusivity, which is inversely proportional to the electrical conductivity. for a typical relativistic compact object, under these conditions the magnetic field is essentially advected with the flow, being continuously distorted and possibly amplified, but essentially not decaying. we note that these conditions are very different from those traditionally produced in laboratories on earth, where resistive diffusion represents an important feature of the magnetic-field evolution.

a particularly simple and yet useful limit of the mhd approximation is the _``ideal-mhd''_ limit. this is mathematically defined as the limit in which the electrical resistivity vanishes or, equivalently, the electrical conductivity becomes infinite. it is within this framework that many multi-dimensional numerical codes have been developed over the last decade to study a number of phenomena in relativistic astrophysics in fully nonlinear regimes. the ideal-mhd approximation is not only a convenient way of writing and solving the equations of relativistic mhd, but it is also an excellent approximation for any process that takes place over a dynamical timescale. in the case of an old and ``cold'' neutron star, for example, the electrical and thermal transport properties of the matter are mainly determined by the transport properties of the electrons, which are the most important carriers of charge and heat. at temperatures above the crystallization temperature of the ions, the electrical (and thermal) conductivities are governed by electron scattering off ions, and an approximate expression for the electrical conductivity can be given in terms of the stellar temperature and mass density; such an expression is accurate for the densities and temperatures typical of old neutron stars, but provides a reasonable estimate also at larger temperatures. even for a magnetic field that varies on a length-scale as small as a fraction of the stellar radius, the corresponding magnetic diffusion timescale is enormous. clearly, at these temperatures and densities, ohmic diffusion will be negligible for any process taking place on a dynamical timescale for the star, and thus the conductivity can be considered as essentially infinite.

however, catastrophic events, such as the merger of two neutron stars, or of a neutron star with a black hole, can produce plasmas with regions at much larger temperatures and much lower densities. in such regimes, all the transport properties of the matter will be considerably modified, and non-ideal effects, absent in perfect-fluid hydrodynamics (such as bulk viscosity) and in ideal mhd (such as ohmic diffusion on a much shorter timescale), will need to be taken into account. similar conditions are likely not limited to binary mergers but may, for instance, also be present behind the processes leading to long grbs, thus extending the range of phenomena for which resistive effects could be important. note also that these non-ideal effects in hydrodynamics (mhd) are proportional not only to the viscosity (resistivity) of the plasma, but also to the second derivatives of the velocity (magnetic) fields. hence, even in the presence of a small viscosity (resistivity), their contribution to the overall conservation of energy and momentum can be considerable if the velocity (magnetic) fields undergo very rapid spatial variations in the flow.

a classical example of the importance of resistive mhd effects in plasmas with high but finite conductivities is offered by _current sheets_. these phenomena are often observed in solar activity and are responsible for the reconnection of magnetic field lines and for changes in the magnetic-field topology. while these phenomena are behind the emission of large amounts of energy, they are strictly forbidden within the ideal-mhd limit due to magnetic flux conservation, and so cannot be studied employing this limit. besides having considerably smaller conductivities, low-density highly magnetized plasmas are present rather generically around magnetized objects, constituting what is referred to as the ``magnetosphere''. in such regions magnetic stresses are much larger than the fluid pressure gradients and cannot be properly balanced; as a result, the magnetic fields have to adjust themselves so that the magnetic stresses vanish identically. this scenario is known as the _force-free_ regime (because the lorentz force vanishes in this case) and, while the equations governing it can be seen as the low-inertia limit of the ideal-mhd equations, the force-free limit is really distinct from the ideal-mhd one. this represents a considerable complication, since it implies that it is usually not possible to describe, within the same set of equations, both the interior of compact objects and their magnetospheres.

theoretical work to derive a fully relativistic theory of non-ideal hydrodynamics and non-ideal mhd has been carried out by several authors in the past and is particularly simple in the case of the resistive mhd description. the purpose of this work is indeed that of proposing the solution of the relativistic resistive mhd equations as an important step towards a more realistic modelling of astrophysical plasmas. there are a number of advantages behind such a choice. first, it allows one to use a single mathematical framework to describe both regions where the conductivity is large (as in the interior of compact objects) and small (as in magnetospheres), and even the vacuum regions outside the compact objects, where the mhd equations trivially reduce to the maxwell equations. second, it makes it possible to account self-consistently for those resistive effects, such as current sheets, which are energetically important and could provide a substantial modification of the whole dynamics. last but not least, the numerical solution of the resistive mhd equations provides the only way to control and
distinguish the physical resistivity from the numerical one .the latter , which is inevitably present and proportional to truncation error , is also completely dependent on the specific details of the numerical algorithm employed and on the resolution used for the solution .as noted already by several authors , the numerical solution of the ideal - mhd equations is considerably less challenging than that of the resistive mhd equations . in this latter case , in fact ,the equations become mixed hyperbolic - parabolic in newtonian physics or hyperbolic with stiff relaxation terms in special relativity .the presence of stiff terms is the natural consequence of the fact that the diffusive effects take place on timescales that are intrinsically larger than the dynamical one. stated differently , in such equations the relaxation terms can dominate over the purely hyperbolic ones , posing severe constraints on the timestep for the evolution .while considerable work has already been made to introduce numerical techniques to achieve efficient implementations in either regime , the use of these techniques in fully three - dimensional simulations is still difficult and expensive . in order to benefit from the many advantages discussed above in the use of the resistive mhd equations , we here present a novel approach for the solution of the relativistic resistive mhd equations exploiting the properties of implicit - explicit ( imex ) runge kutta methods .this approach represents a simple but effective solution to the problem of the vastly different timescales without sacrificing either simplicity in the implementation or the numerical efficiency . by examining a number of testswe illustrate the accuracy of our approach under a variety of conditions and demonstrate its robustness . in addition, we also compare it with the alternative method proposed by for the solution of the same set of relativistic resistive mhd equations .this latter approach employs strang - splitting techniques and the analytical integration of a reduced form of ampere s law . while it works well in a number of cases , it has revealed to be unstable when applied to discontinuous flows with large conductivities ;such difficulties were not encountered when solving the same problem within the imex implementation . because our approach effectively treats within a unified framework both those regions of the flow which are fluid - pressure dominated and those which are instead magnetic - pressure dominated , it could find a number of applications and serve as a first step towards a more realistic modeling of relativistic astrophysical plasmas .our work is organized as follows . in sect .[ section2 ] we present the system of equations describing a resistive magnetized fluid , while in section [ section3 ] we discuss the problems related to the numerical evolution of this system of equations and the numerical approaches developed to solve them . in particular , we introduce the basic features of the imex runge - kutta schemes and recall their stability properties . in sect [ section4 ] we instead explain in detail the implementation of the imex scheme to the resistive mhd equations . 
finally, in sect. [section5] we present the numerical tests, carried out in one and two dimensions, which span several prescriptions for the conductivity. section [section5] is also dedicated to the comparison with the strang-splitting technique. the conclusions and the perspectives for future improvements are presented in sect. [section6], while appendix [appendixb] reviews our space discretization of the equations. hereafter we will adopt gaussian units such that the speed of light is set to unity, and employ the summation convention on repeated indices. roman indices are used to denote spacetime components (i.e., running from 0 to 3), while spatial indices run from 1 to 3; lastly, bold italic letters represent vectors, while bold letters represent tensors.

an effective description of a fluid in the presence of electromagnetic fields can be made by considering three different sets of equations governing, respectively, the electromagnetic fields, the fluid variables, and the coupling between the two. in particular, the electromagnetic part can be described via the maxwell equations, while the conservation of energy and momentum can be used to express the evolution of the fluid variables. finally, ohm's law, whose exact form depends on the microscopic properties of the fluid, expresses the coupling between the electromagnetic fields and the fluid variables. in what follows we review these three sets of equations separately, discuss how they then lead to the resistive mhd description, and how the latter reduces to the well-known limits of ideal mhd and of the maxwell equations in vacuum. our presentation will be focussed on the special-relativistic regime, but the extension to general relativity is rather straightforward and will be presented elsewhere.

the special relativistic maxwell equations can be written in terms of the maxwell and faraday tensors and of the electric current 4-vector. a highly-ionized plasma has essentially zero electric and magnetic susceptibilities, and the faraday tensor is then simply the dual of the maxwell tensor. this tensor provides information about the electric and magnetic fields measured by an observer moving along any timelike vector. here we consider this observer to move along the time-like translational killing vector field of a flat (minkowski) spacetime, so that the levi-civita symbol entering the decomposition is non-zero only for spatial indices. note that the electromagnetic fields have no components parallel to the observer's four-velocity.
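the observer decomposition just stated can be checked numerically. the following sketch adopts one common sign convention (signature (-,+,+,+), with the electric field in the mixed time-space components and the magnetic field encoded in the spatial block through the levi-civita symbol); the convention used here may differ from the paper's by signs, so this is an illustration rather than a statement of its equations.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
n_up = np.array([1.0, 0.0, 0.0, 0.0])      # static inertial observer
n_dn = eta @ n_up

eps3 = np.zeros((3, 3, 3))                 # 3-d levi-civita symbol
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

E = np.array([0.3, -0.1, 0.2])
B = np.array([0.0, 0.5, -0.4])

F = np.zeros((4, 4))                       # maxwell tensor F^{ab} (assumed
F[0, 1:] = E                               # convention: F^{0i} = E^i,
F[1:, 0] = -E                              # F^{ij} = eps_{ijk} B^k)
F[1:, 1:] = np.einsum('ijk,k->ij', eps3, B)

assert np.allclose(F, -F.T)                # antisymmetry
E_rec = F @ n_dn                           # E^a = F^{ab} n_b
B_rec = 0.5 * np.einsum('ijk,jk->i', eps3, F[1:, 1:])
assert np.allclose(E_rec, [0.0, *E])       # purely spatial: E.n = 0
assert np.allclose(B_rec, B)
print("decomposition consistent; E and B have no component along n")
```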
by using the decomposition of the maxwell tensor ( [ maxwell_tensor ] ) ,the equations ( [ maxwell_covariant1])([maxwell_covariant2 ] ) can be split into directions which are parallel and orthogonal to to yield the familiar maxwell equations where we have decomposed also the current vector , with being the charge density , the convective current and the conduction current satisfying .the current conservation equation follows from the antisymmetry of the maxwell tensor and provides the evolution of the charge density which can be obtained also directly by taking the divergence of ( [ maxwell_clasic3 ] ) when the constraints ( [ maxwell_clasic1])([maxwell_clasic2 ] ) are satisfied .the evolution of the matter follows from the conservation of the stress - energy tensor and the conservation of baryon number where is the rest - mass density ( as measured in the rest frame of the fluid ) and is the fluid 4-velocity .the stress - energy tensor describing a perfect fluid minimally coupled to an electromagnetic field is given by the superposition where here is the enthalpy , with the pressure and the specific internal energy .the conservation law ( [ conservation_tmunu ] ) can be split into directions parallel and orthogonal to to yield the familiar energy and momentum conservation laws where we have introduced the conserved quantities , which are essentially the energy density the energy flux density , and whose expressions are given by here is the velocity measured by the inertial observer and is the lorentz factor .the fluxes can then be written as { \bf g } \ , .\label{def_fs } \end{aligned}\ ] ] finally , the conservation of the baryon number ( [ conservation_baryons ] ) reduces to the continuity equation written as where we have introduced another conserved quantity and its flux . as mentioned above ,maxwell equations are coupled to the fluid ones by means of the current 4-vector , whose explicit form will depend in general on the electromagnetic fields and on the local fluid properties .a standard prescription is to consider the current to be proportional to the lorentz force acting on a charged particle and the electrical resistivity to be a scalar function .ohm s law , written in a lorentz invariant way , then reads with being the electrical conductivity of the medium . expressing ( [ ohm_relativistic_covariant ] ) in terms of the electric and magnetic fields one obtains the familiar form of ohm s law in a general inertial frame + q~{\boldsymbol v } \ , .\ ] ] note that the conservation of the electric charge ( [ current_conservation_clasic ] ) provides the evolution equation for the charge density ( _ i.e. _ , the projection of the 4-current along the direction ) , while ohm s law provides a prescription for the ( spatial ) conduction current ( _ i.e. _ , the components of orthogonal to ) .it is important to recall that in deriving expression ( [ ohm_relativistic ] ) for ohm s law we are implicitly assuming that the collision frequency of the constituent particles of our fluid is much larger that the typical oscillation frequency of the plasma .stated differently , the timescale for the electrons and ions to come into equilibrium is much shorter than any other timescale in the problem , so that no charge separation is possible and the fluid is globally neutral .this assumption is a key aspect of the mhd approximation .the well - known ideal - mhd limit of ohm s law can be obtained by requiring the current to be finite even in the limit of infinite conductivity ( ) . 
in this limit, ohm's law ([ohm_relativistic]) then reduces to the vanishing of the electric field measured in the frame comoving with the fluid. projecting this condition along the velocity, one finds that the electric field does not have a component along that direction, and from the rest of the equation one then recovers the well-known ideal-mhd condition
\[
{\boldsymbol e} = - {\boldsymbol v} \times {\boldsymbol b}\,,
\]
stating that in this limit the electric field is orthogonal to both the velocity and the magnetic field. such a condition also expresses the fact that in ideal mhd the electric field is not an independent variable, since it can be computed via a simple algebraic relation from the velocity and magnetic vector fields.

summarizing: the system of equations of the relativistic resistive mhd approximation is given by the constraint equations ([maxwell_clasic1])-([maxwell_clasic2]), the evolution equations ([maxwell_clasic3])-([current_conservation_clasic]), ([fluid_tau])-([fluid_s]) and ([fluid_baryons]), where the fluxes are given by eqs. ([def_fe])-([def_fs]) and the 3-current is given by ohm's law ([ohm_relativistic]). these equations, together with an equation of state (eos) for the fluid and a reasonable model for the conductivity, completely describe the system under consideration, provided consistent initial and boundary data are defined.

at this point it is useful to point out some properties of the relativistic resistive mhd equations discussed so far, to underline their purely hyperbolic character and to contrast them with those of other forms of the resistive mhd equations, which contain a parabolic part instead. to do this within a simple example, we adopt the newtonian limit of ohm's law ([ohm_relativistic]),
\[
{\boldsymbol j} = \sigma \left[\,{\boldsymbol e} + {\boldsymbol v}\times{\boldsymbol b}\,\right]\,,
\]
where we have neglected terms quadratic in the velocity, obtaining the following potentially stiff equation ([maxwell_stiff]) for the electric field (written in units in which the factors of \(4\pi\) are absorbed in the definitions of the fields)
\[
\partial_t {\boldsymbol e} = \nabla\times{\boldsymbol b} - \sigma\left[\,{\boldsymbol e} + {\boldsymbol v}\times{\boldsymbol b}\,\right]\,.
\]
assuming now a uniform conductivity and taking a time derivative of eq. ([maxwell_clasic4]), we obtain the following hyperbolic equation with relaxation terms (henceforth referred to simply as a hyperbolic-relaxation equation) for the magnetic field
\[
-\frac{1}{\sigma}\left[\,\partial^2_t {\boldsymbol b} - \nabla^2 {\boldsymbol b}\,\right] = \partial_t {\boldsymbol b} - \nabla \times ({\boldsymbol v} \times {\boldsymbol b})\,.
\]
if the displacement current can be neglected, i.e., if the time variations of the electric field take place on timescales much longer than the resistive one, equation ([relativistic_rmhd]) reduces to the familiar parabolic equation for the magnetic field
\[
\partial_t {\boldsymbol b} = \nabla \times ({\boldsymbol v} \times {\boldsymbol b}) + \frac{1}{\sigma}\,\nabla^2 {\boldsymbol b}\,,
\]
where the last term is responsible for the diffusion of the magnetic field.

it is important to stress the significant difference in the characteristic structure between equations ([relativistic_rmhd]) and ([newtonian_rmhd]). both equations reduce to the same advection equation, \(\partial_t {\boldsymbol b} = \nabla \times ({\boldsymbol v} \times {\boldsymbol b})\), in the ideal-mhd limit of infinite conductivity (\(\sigma \to \infty\)), indicating the flux-freezing condition. however, in the opposite limit of infinite resistivity (\(\sigma \to 0\)), eq. ([newtonian_rmhd]) tends to the (physically incorrect) elliptic laplace equation \(\nabla^2 {\boldsymbol b} = 0\), while eq. ([relativistic_rmhd]) reduces to the (physically correct) hyperbolic wave equation \(\partial^2_t {\boldsymbol b} - \nabla^2 {\boldsymbol b} = 0\) for the magnetic field.
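the stiffness of the ohm term can be isolated in a zero-dimensional toy problem. in the sketch below the conductivity, timestep and field values are illustrative, and the comparison shows why the relaxation term must be handled implicitly once the timestep exceeds the relaxation time \(1/\sigma\).

```python
# 0-d toy of the stiff ohm term dE/dt = -sigma*(E - E_ideal), with
# E_ideal = -v x B: the explicit update is unstable once dt > 2/sigma,
# while the implicit (backward euler) update relaxes E to E_ideal for any dt.
import numpy as np

sigma, dt, nsteps = 1.0e4, 1.0e-3, 50       # dt >> 1/sigma: stiff regime
v = np.array([0.1, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E_ideal = -np.cross(v, B)

E_exp = np.zeros(3)
E_imp = np.zeros(3)
for _ in range(nsteps):
    E_exp = E_exp + dt * (-sigma) * (E_exp - E_ideal)           # explicit
    E_imp = (E_imp + dt * sigma * E_ideal) / (1.0 + dt * sigma)  # implicit
print("explicit:", E_exp)   # diverges, since |1 - sigma*dt| > 1
print("implicit:", E_imp)   # -> E_ideal = -v x B
```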
eqs. ( [ maxwell_clasic1 ] ) and ( [ maxwell_clasic2 ] ). this _"augmented"_ system reads as in eqs. ( [ maxwell_augmented1 ] )( [ maxwell_augmented2 ] ). clearly, the standard maxwell equations ( [ maxwell_covariant1 ] )( [ maxwell_covariant2 ] ) are recovered when , and we are in this way extending the space of solutions of the original maxwell equations to include those with non-vanishing . the evolution of these extra scalar fields can be obtained by taking a partial derivative of the augmented maxwell equations ( [ maxwell_augmented1 ] )( [ maxwell_augmented2 ] ) and using the antisymmetry of the maxwell and faraday tensors together with the conservation of charge. it is evident that the resulting equations represent wave equations with sources for the scalar fields, which propagate at the speed of light while being damped if . in particular, for any positive , they decay exponentially over a timescale to the trivial solution, and the augmented system then reduces to the standard maxwell equations, including the constraints ( [ maxwell_clasic1 ] ) and ( [ maxwell_clasic2 ] ). this approach, named hyperbolic divergence cleaning in the context of ideal mhd, was proposed as a simple way of solving the maxwell equations while enforcing the conservation of the divergence-free condition for the magnetic field. adopting this approach and following the formulation proposed by , the evolution equations of the augmented maxwell equations ( [ maxwell_augmented1 ] )( [ maxwell_augmented2 ] ) can then be written as eqs. ( [ maxwell_aug1 ] )( [ maxwell_aug4 ] ). this system, together with the current conservation ( [ current_conservation_clasic ] ), is the one we will use for the numerical evolution of the electromagnetic fields within the set of relativistic resistive mhd equations.
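the damping mechanism can be made concrete with a 1d toy model: a scalar constraint-violation field obeying a damped wave (telegraph) equation propagates at the wave speed and decays exponentially instead of merely accumulating. the sketch below (python; the wave speed, damping rate and grid parameters are arbitrary illustrative choices, not values from the paper) prints the decay:

```python
import numpy as np

# toy model of hyperbolic divergence cleaning: a constraint violation phi
# obeying  phi_tt = c^2 phi_xx - kappa phi_t  propagates at speed c and is
# damped on a timescale ~ 1/kappa.
c, kappa = 1.0, 10.0
N = 400
dx = 1.0 / N
dt = 0.5 * dx / c                                    # cfl-limited timestep
x = np.linspace(0.0, 1.0, N, endpoint=False)
phi = np.exp(-((x - 0.5) / 0.05) ** 2)               # localized initial violation
phi -= phi.mean()                                    # remove the undamped k = 0 mode
pi = np.zeros(N)                                     # pi = d(phi)/dt

for n in range(2001):
    if n % 500 == 0:
        print(f"t = {n * dt:5.2f}   max|phi| = {np.abs(phi).max():.3e}")
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    pi += dt * (c**2 * lap - kappa * pi)              # damped wave update
    phi += dt * pi
```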
while the ideal-mhd equations are well suited to an efficient numerical implementation, the general system of relativistic resistive mhd equations brings about a delicate issue when the conductivity in the plasma undergoes very large spatial variations. in the regions with high conductivity, in fact, the system will evolve on timescales which are very different from those in the low-conductivity region. mathematically, therefore, the problem can be regarded as a hyperbolic one with stiff relaxation terms, which requires special care to capture the dynamics in a stable and accurate manner. in the next section we discuss a simple example of a hyperbolic equation with relaxation which exhibits the problems discussed above, and then introduce implicit-explicit (imex) runge-kutta methods to deal with this kind of equations. in essence, these methods treat the advection character of the system with strong-stability-preserving (ssp) explicit schemes, while treating the relaxation character with an l-stable diagonally implicit runge-kutta (dirk) scheme. after presenting the scheme, its properties and some examples, we discuss in detail its application to the resistive mhd equations. a prototypical hyperbolic equation with relaxation is given by
$$
\partial_t {\boldsymbol u} = f({\boldsymbol u}) + \frac{1}{\epsilon}\, r({\boldsymbol u}) \,, \label{stiff_equation}
$$
where $\epsilon$ is the _relaxation time_ (not necessarily constant either in space or in time), $f({\boldsymbol u})$ gives rise to a quasilinear system of equations (_i.e._, it depends linearly on the first derivatives of ${\boldsymbol u}$), and $r({\boldsymbol u})$ does not contain derivatives of ${\boldsymbol u}$. in the limit $\epsilon \to \infty$ (corresponding for the resistive mhd equations to the case of vanishing conductivity) the system is hyperbolic, with propagation speeds bounded by . this maximum bound, together with the length scale of the system, defines a characteristic timescale of the hyperbolic part. in the opposite limit $\epsilon \to 0$ (corresponding to the case of infinite conductivity), the system is instead said to be _stiff_, since the timescale of the relaxation (or stiff) term is in general much smaller than the timescale of the hyperbolic part. in such a limit, the stability of an explicit scheme is only achieved with a timestep size $\delta t \lesssim \epsilon$. this requirement is certainly more restrictive than the courant-friedrichs-lewy (cfl) stability condition for the hyperbolic part and makes an explicit integration impractical. the development of efficient numerical schemes for such systems is challenging, since in many applications the relaxation time can vary by several orders of magnitude across the computational domain and, more importantly, can become much shorter than the timescale determined by the propagation speed.
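the restriction $\delta t \lesssim \epsilon$ can be seen already in the simplest scalar relaxation ode; in the sketch below (python; all numbers are arbitrary illustrative choices) the explicit update blows up for $\delta t \gg \epsilon$, while a backward-euler update is unconditionally stable:

```python
# scalar relaxation ode  u' = -(u - u_eq)/eps  in the stiff regime dt >> eps
eps, u_eq, dt = 1.0e-4, 1.0, 1.0e-2
u_ex, u_im = 0.0, 0.0
for n in range(5):
    u_ex = u_ex - (dt / eps) * (u_ex - u_eq)              # explicit euler: unstable
    u_im = (u_im + (dt / eps) * u_eq) / (1.0 + dt / eps)  # backward euler: stable
    print(f"step {n + 1}: explicit = {u_ex: .3e}   implicit = {u_im:.6f}")
```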
when faced with this issue several strategies can be adopted. the most straightforward one is to consider only the stiff limit, where the system is well approximated by a suitable reduced set of conservation laws called the _"equilibrium system"_, defined in terms of a reduced set of variables. this approach can be followed if the resulting system is also hyperbolic. this is precisely the case in the resistive mhd equations for vanishing resistivity (or, equivalently, infinite conductivity). in this case, the equations reduce to those of ideal mhd and indeed describe an "equilibrium system" in which the magnetic field is simply advected with the flow. as discussed earlier, this limit is often adequate to describe the behaviour of dense astrophysical plasmas, but it may break down in the magnetospheres. a more general approach could consist of dividing the computational domain into regions, in each of which a simplified set of equations can be adopted. as an example, the ideal-mhd equations could be solved in the interior of compact objects, the force-free mhd equations in the magnetosphere, and the vacuum maxwell equations in the regions outside the compact object. however, this approach requires the overall scheme to suitably match the different regions so as to obtain a global solution. this task, unfortunately, is far from being straightforward and, to date, it lacks a rigorous definition. an alternative approach consists of considering the original hyperbolic-relaxation system in the whole computational domain and then employing suitable numerical schemes that work in all regions. among such schemes is the strang-splitting technique, which has been recently applied by for the solution of the (special) relativistic resistive mhd equations. the strang-splitting scheme provides second-order accuracy if each step is at least second-order accurate, and this property is maintained under suitable assumptions even for stiff problems. in practice, however, higher-order accuracy is difficult to obtain even in non-stiff regimes with this kind of splitting. moreover, when applied to hyperbolic systems with relaxation, strang-splitting schemes reduce to first-order accuracy, since the kernel of the relaxation operator is non-trivial and corresponds to a singular matrix in the linear case, therefore invalidating the assumptions made by to ensure high-order accuracy. this problem was avoided in by solving analytically the stiff part in a reduced form of ampere's law. although this procedure works well for smooth solutions, our implementation of the method has revealed problems when evolving discontinuous flows (shocks) in large-conductivity plasmas. moreover, it is unclear whether the same procedure can be adopted in more general configurations, where an analytical solution may not be available. as an alternative to the methods solving the relativistic resistive mhd equations on a single computational domain, we here introduce an imex runge-kutta method to cope with the stiffness problems discussed above. these methods, which are easily implemented, are still under development and have a few (relatively minor) drawbacks. the most serious one is a degradation to first- or second-order accuracy for a range of values of the relaxation time. however, since the high-resolution shock-capturing (hrsc) schemes usually employed for the solution of the hydrodynamic equations already suffer from similar effects at discontinuities, the possible degradation of the imex schemes does not spoil the overall quality of the numerical solution when they are employed in conjunction with hrsc schemes. the next sections review in some detail the imex schemes and our specific implementation for the relativistic resistive mhd equations. the imex runge-kutta schemes rely on the application of an implicit discretization to the stiff terms and of an explicit one to the non-stiff ones. when applied to system ( [ stiff_equation ] ), such a scheme takes the form
$$
\begin{aligned}
{\boldsymbol u}^{(i)} &= {\boldsymbol u}^n + \delta t \sum_{j=1}^{i-1} \tilde{a}_{ij}\, f({\boldsymbol u}^{(j)}) + \delta t \sum_{j=1}^{i} a_{ij}\, \frac{1}{\epsilon}\, r({\boldsymbol u}^{(j)}) \,, \\
{\boldsymbol u}^{n+1} &= {\boldsymbol u}^n + \delta t \sum_{i=1}^{s} \tilde{\omega}_{i}\, f({\boldsymbol u}^{(i)}) + \delta t \sum_{i=1}^{s} \omega_{i}\, \frac{1}{\epsilon}\, r({\boldsymbol u}^{(i)}) \,,
\end{aligned}
$$
where ${\boldsymbol u}^{(i)}$ are the auxiliary intermediate values of the runge-kutta scheme. the matrices $\tilde{a} = (\tilde{a}_{ij})$ and $a = (a_{ij})$ are such that the resulting scheme is explicit in $f$ (_i.e._, $\tilde{a}_{ij} = 0$ for $j \ge i$) and implicit in $r$. an imex runge-kutta scheme is characterized by these two matrices and by the coefficient vectors $\tilde{\omega}$ and $\omega$. since simplicity and efficiency in solving the implicit part at each step are important, it is natural to consider diagonally implicit runge-kutta (dirk) schemes (_i.e._, $a_{ij} = 0$ for $j > i$) for the stiff terms. a particularly convenient way of describing an imex runge-kutta scheme is offered by the butcher notation, in which the scheme is represented by a double tableau, where the index indicates a transpose and where the coefficients used for the treatment of non-autonomous systems are given by the row sums of the corresponding tableau. the accuracy of each of the two runge-kutta schemes is achieved by imposing restrictions on some of the coefficients of their respective butcher tableaus. although each of them separately can have arbitrary accuracy, this does not ensure that the combination of the two schemes will preserve the same accuracy. in addition to the conditions for each runge-kutta scheme, there are also some additional conditions, combining terms of the two tableaus, which must be fulfilled in order to achieve a global accuracy of order for the complete imex scheme. since the details of these methods are not widely known, we first consider a simple example to fix ideas. a second-order imex scheme can be written in the tableau form given in table [ ssp2 - 222 ]. the intermediate and final steps of this imex runge-kutta scheme would then be written explicitly as
$$
\begin{aligned}
{\boldsymbol u}^{(1)} &= {\boldsymbol u}^n + \gamma\,\frac{\delta t}{\epsilon}\, r({\boldsymbol u}^{(1)}) \,, \\
{\boldsymbol u}^{(2)} &= {\boldsymbol u}^n + \delta t\, f({\boldsymbol u}^{(1)}) + \frac{\delta t}{\epsilon}\left[ (1 - 2\gamma)\, r({\boldsymbol u}^{(1)}) + \gamma\, r({\boldsymbol u}^{(2)}) \right] \,, \\
{\boldsymbol u}^{n+1} &= {\boldsymbol u}^n + \frac{\delta t}{2}\left[ f({\boldsymbol u}^{(1)}) + f({\boldsymbol u}^{(2)}) \right] + \frac{\delta t}{2\epsilon}\left[ r({\boldsymbol u}^{(1)}) + r({\boldsymbol u}^{(2)}) \right] \,,
\end{aligned}
$$
with $\gamma = 1 - 1/\sqrt{2}$.
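these update formulas generalize to any double tableau. a minimal generic driver might look as follows (python; a sketch only, assuming scipy is available, with the $1/\epsilon$ factor absorbed into the stiff operator, and solving each implicit stage with a black-box root finder, whereas the actual implementation exploits the closed-form inversion discussed below):

```python
import numpy as np
from scipy.optimize import fsolve

def imex_rk_step(u, dt, F, R, A_ex, A_im, w_ex, w_im):
    """one step of an imex runge-kutta scheme for  u' = F(u) + R(u),
    with F (non-stiff) treated by the explicit tableau A_ex (strictly
    lower triangular) and R (stiff) by the dirk tableau A_im."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    s = len(w_ex)
    U = []
    for i in range(s):
        # explicit part: combination of all previously known stages
        ustar = u + dt * sum(A_ex[i][j] * F(U[j]) + A_im[i][j] * R(U[j])
                             for j in range(i))
        aii = A_im[i][i]
        if aii == 0.0:
            U.append(ustar)
        else:
            # implicit stage equation  u_i = u* + dt a_ii R(u_i)
            U.append(fsolve(lambda x: x - ustar - dt * aii * R(x), ustar))
    return u + dt * sum(w_ex[i] * F(U[i]) + w_im[i] * R(U[i]) for i in range(s))
```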
note that at each sub-step an implicit equation for the auxiliary intermediate values must be solved. the complexity of inverting this equation will clearly depend on the particular form of the operator . stable solutions of conservation-type equations are usually analyzed in terms of a suitable norm being bounded in time. with denoting the solution vector at the time , a sequence is said to be _"strongly stable"_ in a given norm provided that for all . [ table [ ssp2 - 222 ] : double butcher tableau (explicit and implicit) of the second-order imex-ssp2(2,2,2) scheme. ] the most commonly used norms for analyzing schemes for nonlinear systems are the total-variation (tv) norm and the infinity norm. a numerical scheme that maintains strong stability at the discrete level is called strong-stability preserving (ssp) (see for a detailed description of optimal ssp schemes and their properties). because of the stability properties of the imex schemes, it follows that if the explicit part of the imex scheme is ssp, then the method is ssp for the equilibrium system in the stiff limit. this property is essential to avoid spurious oscillations during the evolution of non-smooth data. the stability of the implicit part of the imex scheme is ensured by requiring that the runge-kutta scheme is "l-stable", and this represents an essential condition for stiff problems. in practice, this amounts to requiring that the numerical approximation is bounded whenever the exact solution is bounded. a stricter definition can be derived starting from a linear scalar ordinary differential equation; in this case it is easy to define the stability (or amplification) function $r(z)$ as the ratio of the solutions at subsequent timesteps, $r(z) = u^{n+1}/u^{n}$. a runge-kutta scheme is then said to be _l-stable_ if $|r(z)| \le 1$ (_i.e._, it is bounded) and $\lim_{z \to -\infty} r(z) = 0$. there are a number of imex runge-kutta schemes available in the literature, and we report here only some of the second- and third-order schemes which satisfy the condition that in the limit the solution corresponds to that of the equilibrium system ( [ equilibrium_system ] ). these are given in their butcher tableau form in table [ ssp2 - 322 ] and are taken from . in all these schemes the implicit tableau corresponds to an l-stable scheme. the tableaus are reported in the notation ssp , where denotes the order of the ssp scheme and the triplet characterizes respectively the number of stages of the implicit scheme ( ), the number of stages of the explicit scheme ( ), and the order of the imex scheme ( ). [ table [ ssp2 - 322 ] : double butcher tableaus of the ssp2 and ssp3 imex schemes referred to in the text. ]
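as a hedged, self-contained illustration, the sketch below quotes the imex-ssp2(2,2,2) coefficients as given in the literature (pareschi & russo) and numerically checks the l-stability of the implicit part through the stability function $r(z) = 1 + z\, w^{\top} (\mathrm{i} - z a)^{-1} \mathbf{1}$:

```python
import numpy as np

# imex-ssp2(2,2,2) coefficients as quoted in the literature (pareschi & russo)
g = 1.0 - 1.0 / np.sqrt(2.0)
A_ex = np.array([[0.0, 0.0], [1.0, 0.0]]); w_ex = np.array([0.5, 0.5])
A_im = np.array([[g, 0.0], [1.0 - 2.0 * g, g]]); w_im = np.array([0.5, 0.5])

def stability(z, A, w):
    """stability function R(z) = 1 + z w^T (I - z A)^{-1} 1 of an rk scheme."""
    e = np.ones(len(w))
    return 1.0 + z * (w @ np.linalg.solve(np.eye(len(w)) - z * A, e))

# l-stability of the implicit tableau: |R(z)| <= 1 on the negative real
# axis and R(z) -> 0 as z -> -infinity
for z in (-1.0, -1.0e2, -1.0e6):
    print(f"z = {z: .0e}   R(z) = {stability(z, A_im, w_im): .4e}")
```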
having reviewed the main properties of the imex schemes, we now apply them to the particular case of the special relativistic resistive mhd equations. our goal is to consider a numerical implementation of the general system that can deal with standard hydrodynamic issues (like shocks and discontinuities) as well as with those brought up by the stiff terms discussed in the previous section. hence, we adopt high-resolution shock-capturing algorithms (see appendix [ appendixb ]) together with imex schemes. because the former involve the introduction of conserved variables in order to cast the equations in conservative form, we first discuss how to implement the imex scheme within our target system and subsequently how to perform the transformation from the conserved variables to the primitive ones. for our target system of equations it is possible to introduce a natural decomposition of variables in terms of those whose evolution does not involve stiff terms and those whose evolution does. more specifically, with the electrical resistivity playing the role of the relaxation parameter, the vector of fields can be split into two subsets, one containing the stiff terms and the other the non-stiff ones. following the prototypical eq. ( [ stiff_equation ] ), the evolution equations for the relativistic resistive mhd equations can then be schematically written in split form, where the relaxation parameter is allowed to depend also on the non-stiff fields. the vector of non-stiff fields can be evolved straightforwardly, as it involves no stiff term. we further note that for our particular set of equations it is convenient to write the stiff part as in eq. ( [ stiff_part ] ). as a result, the procedure to compute each stage of the imex scheme can be performed in two steps: 1. compute the explicit intermediate values from all the previously known levels; note that the division appearing in eq. ( [ first_stepb ] ) is a simple division and not a contraction on dummy indices. 2. compute the implicit part, which involves only the stiff fields, by solving the implicit stage equation. note that the implicit equation, with the previous assumption ( [ stiff_part ] ), can be inverted explicitly, since the form of the matrix involved allows for a closed-form inverse. in addition, the initial data parameters have been chosen so that and , thus yielding , with a full period being achieved at . fig. [ alfven ] confirms this expectation by reporting the component after one period, which overlaps with the initial one (at ) for the highest resolution. this test clearly shows that in the limit of very high conductivity the resistive mhd equations tend to a solution which is very close to the one obtained in the ideal-mhd limit. the convergence rate measured for the different fields is consistent with the second-order spatial discretization being used, as expected for smooth flows (see appendix [ appendixb ]). the details of this test are described by , so again we provide here only a short description for completeness. we assume that the magnetic pressure is much smaller than the fluid pressure everywhere, with a magnetic field given by , where changes sign within a thin current layer of width . provided the initial solution is in equilibrium ( ), the evolution is a slow diffusive expansion of the layer due to the resistivity, described by the diffusion equation [_cf._ eq. ( [ newtonian_rmhd ] ) with ]. as the system expands, the width of the layer becomes much larger than and it evolves in a self-similar fashion. for later times, the exact analytical solution is given by
$$
b^y(x, t) = b_0 \, \mathrm{erf}\!\left( \tfrac{1}{2} \sqrt{\sigma}\, \xi \right) \,, \qquad \xi \equiv x/\sqrt{t} \,,
$$
where erf is the error function. this solution can be used for testing the moderately resistive regime. following , and in order to avoid the singular behaviour at $t = 0$, we have chosen as initial data the solution at with , , and . the domain covers the region with a resolution . the initial data are such that inside the radius the pressure is set to , while the density to .
in the intermediate region the two quantities decrease exponentially up to the exterior region , where the ambient fluid has .the magnetic field is uniform with only one nontrivial component .the other fields are set to be zero ( _ i.e. _ , ) , which is consistent within the ideal - mhd approximation .the evolution is performed with a high conductivity in order to recover the solution from the ideal - mhd approximation . as shown in fig .[ explosion_bs_2d ] , which reports the magnetic field components ( left panel ) and ( right panel ) at time , we obtain results that are qualitatively similar to those published in different works . while a strict comparison with an exact solution is not possible in this case , the solution found matches extremely well the one obtained with another 2d code solving the ideal mhd equations .most importantly , however , the figure shows that the solution is regular everywhere and that similar results can be obtained also with smaller values of the conductivity ( _ e.g. _ , no significant difference was seen for ) .we next consider a toy model for a star , thought as an infinite column of fluid aligned with the -axis but with compact support in other directions . because of the symmetry in the -direction , for all the fields and the problem is therefore two - dimensional .more specifically , we consider initial data given by where is the cylindrical radial coordinate .the other fields can be computed at the initial time by using the polytropic eos , the ideal - mhd expression ( [ ef_imhd ] ) for the electric field , and the electric charge from the constraint equation .we have chosen , , and .an atmosphere ambient fluid with is added outside the cylinder .finally , the resolution is and the domain is ] . in the limiting case solution corresponds to a wave propagating at the speed of light ( _ i.e. _ , the solution of the maxwell equations in vacuum ) , while for large values of the solution is stationary ( as expected in the ideal - mhd limit ) .the behaviour observed in the left panel fig .[ star_bz ] is also the expected one : the higher the conductivity , the closer the solution is to the stationary solution of the ideal - mhd limit . for low conductivities , on the other hand, there is a significant diffusion of the solution , which is quite rapid for and for this reason those values are not plotted here .we note that values of the conductivity larger than lead to numerical instabilities that we believe are coming from inaccuracies in the evolution of the charge density , and which contains spatial derivatives of the current vector .in addition , the stiff quantity is seen to converge only to an order .this can be due to the `` final layer '' problem of the imex methods , which is known to produce a degradation on the accuracy of the stiff quantities .luckily , this does not spoil the convergence of the non - stiff fields , which are instead second - order convergent .it is possible that the use of stiffly - accurate schemes can solve this degradation of the convergence and this is an issue we are presently exploring . 
1.0 cmwe finally consider the same test , but now employing the non - uniform conductivity given by eq .( [ def_cond ] ) with and different values for .the results are presented in the right panel of fig .[ star_bz ] , which shows that the magnetic fields inside the star are basically the same in all the cases , stressing the fact that the interior of the star will not be significantly affected by the exterior solution , which has much smaller conductivity .however , the electromagnetic fields outside the star do change significantly for different values of , underlining the importance of a proper treatment of the resistive effects in those regions of the plasma where the ideal - mhd approximation is not a good one .we have introduced implicit - explicit runge - kutta schemes to solve numerically the ( special ) relativistic resistive mhd equations and thus deal , in an effective and robust way , with the problems inherent to the evolution of stiff hyperbolic equations with relaxation terms . since for these methods the only limitation on the size of the timestepis set by the standard cfl condition , the approach suggested here allows to solve the full system of resistive mhd equations efficiently without resorting to the commonly adopted limit of the ideal - mhd approximation .more specifically , we have shown that it is possible to split the system of relativistic resistive mhd equations into a set of equations that involves only non - stiff terms , which can be evolved straightforwardly , and a set involving stiff terms , which can also be solved explicitly because of the simple form of the stiff terms .overall , the only major difficulty we have encountered in solving the resistive mhd equations with imex methods arises in the conversion from the conserved variables to the primitive ones . in this case ,in fact , there is an extra difficulty given by the fact that there are four primitive fields which are unknown and have to be inverted simultaneously .we have solved this problem by using extra iterations in our 1d newton - raphson solver , but a multidimensional solver is necessary for a more robust and efficient implementation of the inversion process . with this numerical implementation we have carried out a number of numerical tests aimed at assessing the robustness and accuracy of the approach , also when compared to other equivalents ones , such as the strang - splitting method recently proposed by .all of the tests performed have shown the effectiveness of our approach in solving the relativistic resistive mhd equations in situations involving both small and large uniform conductivities , as well as conductivities that are allowed to vary nonlinearly across the plasma .furthermore , when compared with the strang - splitting technique , the imex approach has not shown any of the instability problems that affect the strang - splitting approach for flows with discontinuities and large conductivities . 
while the results presented here open promising perspectives for the implementation of imex schemes in the modelling of relativistic compact objects, at least two further improvements can be made with minor effort. the first one consists of the generalization of the (special) relativistic resistive mhd equations with a scalar isotropic ohm's law to the general relativistic case, and of its application to compact astrophysical bodies such as magnetized binary neutron stars. the solution of the resistive mhd equations can not only yield different results for the dynamics of the magnetosphere produced after the merger, but also provide the possibility to predict, at least in some approximation, the electromagnetic radiation produced by the merger of these objects. the second improvement consists of considering a non-scalar and anisotropic ohm's law, so that the behaviour of the currents in the magnetosphere can be described by using a very high conductivity along the magnetic field lines and a negligibly small one in the transverse directions. such an improvement may serve as a first step towards an alternative modelling of force-free plasmas. we would like to thank eric hirschmann, serguei komissarov, steve liebling, jonathan mckinney, david neilsen and olindo zanotti for useful comments, and bruno giacomazzo for comments and for providing the code computing the exact solution of the riemann problem in ideal mhd. ll and cp would like to thank famaf (unc) for hospitality. cp is also grateful to lorenzo pareschi for the many clarifications about the imex schemes. this work was supported in part by nsf grants phy-0326311, phy-0653369 and phy-0653375 to louisiana state university, the dfg grant sfb/transregio 7, conicet and secyt-unc. we are generically interested in solving hyperbolic conservation laws of the standard form, where is the vector of the evolved fields, are their fluxes and contains the source terms. the semi-discrete version of this equation, in one dimension, is simply given by a flux-difference expression, where are consistent numerical fluxes evaluated at the interfaces between numerical cells. these consistent fluxes are computed by using hrsc methods, which are based on the use of riemann solvers. more specifically, we have implemented a modification of the local lax-friedrichs approximate riemann solver introduced by , which only needs the spectral radius (_i.e._, the maximum eigenvalue) of the system. in highly relativistic cases, like the ones we are interested in, the spectral radius is close to the speed of light, and so the local lax-friedrichs flux reduces to the simpler lax-friedrichs flux
$$
f_{i+1/2} = \frac{1}{2} \left[ f({\boldsymbol u}^{l}) + f({\boldsymbol u}^{r}) - ({\boldsymbol u}^{r} - {\boldsymbol u}^{l}) \right] \,,
$$
where ${\boldsymbol u}^{l}$ and ${\boldsymbol u}^{r}$ are the reconstructed solutions on the left and on the right of the interface, and $f({\boldsymbol u}^{l})$, $f({\boldsymbol u}^{r})$ their corresponding fluxes.
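this flux is a one-liner; the sketch below (python; the function and variable names are illustrative, and the unit spectral radius reflects the highly relativistic regime just mentioned) can serve as the interface flux of the semi-discrete scheme:

```python
import numpy as np

def lax_friedrichs_flux(uL, uR, flux, c=1.0):
    """lax-friedrichs interface flux  F = [f(uL) + f(uR) - c (uR - uL)] / 2,
    with c the spectral radius (close to the speed of light, c = 1)."""
    return 0.5 * (flux(uL) + flux(uR) - c * (uR - uL))

# example with scalar advection, f(u) = u: the flux is simply upwinded
uL, uR = np.array([1.0]), np.array([0.0])
print(lax_friedrichs_flux(uL, uR, lambda u: u))     # -> [1.]
```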
the standard procedure is then to reconstruct the solution by interpolating with a polynomial and then to compute the fluxes and . in our implementation we first recombine the fluxes and the solution; then, using a piecewise linear reconstruction, these combinations can be computed on the left/right of the interface, where are just the slopes used to extrapolate to the interfaces. finally, the consistent flux is computed by a simple average of the two. for a linear reconstruction the slopes can be written so that it is trivial to check that the standard lax-friedrichs flux ( [ lax - friedrichs ] ) is recovered when . the choice of these slopes becomes crucial in the presence of shocks or very sharp profiles, while the use of suitable nonlinear operators preserves the total-variation-diminishing (tvd) condition on the interpolating polynomial. in this way, tvd schemes accurately capture the dynamics of strong shocks without the oscillations which appear with standard finite-difference discretizations. monotonicity is typically enforced by making use of slope limiters, and we have in particular implemented the monotonized centered (mc) limiter
$$
\mathrm{mc}(x, y) = \frac{\mathrm{sgn}(x) + \mathrm{sgn}(y)}{2} \, \min\!\left( 2|x|,\, 2|y|,\, \tfrac{1}{2}|x + y| \right) \,,
$$
which provides a good compromise between robustness and accuracy. note that with linear reconstruction the scheme is second-order accurate in smooth regions, although it drops to first order near shocks and at local extrema.
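the limiter itself is equally compact (python; a direct, hedged transcription of the formula above, with a small check that it returns a centered slope in smooth regions and vanishes at a local extremum):

```python
import numpy as np

def mc_limiter(x, y):
    """monotonized centered limiter:
    mc(x, y) = (sgn(x) + sgn(y))/2 * min(2|x|, 2|y|, |x + y|/2)."""
    s = 0.5 * (np.sign(x) + np.sign(y))
    return s * np.minimum(np.minimum(2.0 * np.abs(x), 2.0 * np.abs(y)),
                          0.5 * np.abs(x + y))

print(mc_limiter(1.0, 1.2))    # smooth region: ~ centered slope (1.1)
print(mc_limiter(1.0, -0.5))   # local extremum: 0.0
```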
many astrophysical processes involving magnetic fields and quasi-stationary processes are well described when the fluid is assumed to be a perfect conductor. for these systems, the ideal-magnetohydrodynamics (mhd) description captures the dynamics effectively, and a number of well-tested techniques exist for its numerical solution. yet, there are several astrophysical processes involving magnetic fields which are highly dynamical and for which resistive effects can play an important role. the numerical modelling of such non-ideal mhd flows is significantly more challenging, as the resistivity is expected to change by several orders of magnitude across the flow and the equations are then either of hyperbolic-parabolic nature or hyperbolic with stiff terms. we here present a novel approach for the solution of these relativistic resistive mhd equations exploiting the properties of implicit-explicit (imex) runge-kutta methods. by examining a number of tests we illustrate the accuracy of our approach under a variety of conditions and highlight its robustness when compared with alternative methods, such as the strang splitting. most importantly, we show that our approach allows one to treat, within a unified framework, both those regions of the flow which are fluid-pressure dominated (such as the interior of compact objects) and those which are instead magnetic-pressure dominated (such as their magnetospheres). in view of this, the approach presented here could find a number of applications and serve as a first step towards a more realistic modelling of relativistic astrophysical plasmas. [ firstpage ] relativity mhd plasmas methods: numerical
the estimation of density derivatives has full potential for applications .this has been noted even in the first seminal papers on density estimation , as , which was also concerned with the estimation of the mode of a unimodal distribution , the value that makes zero the first density derivative . in the multivariate case ,the pioneering work of showed how the estimation of the gradient vector can also be used for clustering and data filtering , leading to a substantial amount of literature on the subject , under the name of the _ mean shift algorithm_. looking further afield , made use of the mean shift idea for image analysis , and the highly - cited paper by showed how these techniques can be useful for low - level vision problems , discontinuity preserving smoothing and image segmentation .a further very popular use of the mean shift algorithm is for real - time object tracking , as described in . from the perspective of statistical data analysis , in the multidimensional context the estimation of the first and second derivatives ofthe density is crucial to identify significant features of the distribution , such as local extrema , valleys , ridges or saddle points . in this sense , developed methods for determining and visualizing such features in dimension two , extending previous work on scale space ideas introduced in for the univariate case ( the sizer approach ) , and the same authors also explored the application of this methodology to digital image analysis in . generalized these results for multivariate data in arbitrary dimensions and provided a novel visualization for three - dimensional data .these techniques have been widely applied recently in the field of flow cytometry ; see , or .another relatively new problem that is closely related to gradient estimation is that of finding filaments in point clouds , which has applications in medical imaging , remote sensing , seismology and cosmology .this problem is rigorously stated and analyzed in .filaments are one - dimensional curves embedded in a point process , and it can be shown that steepest ascent paths ( i.e. , the paths from each point following the gradient direction ) concentrate around them , so gradient estimation appears as a useful tool for filament detection .in this paper we focus on kernel estimators of multivariate density derivatives of arbitrary order , formally defined in section [ sec:2 ] below . as for any kernel estimator, it is well known that the crucial factor that determines the performance of the estimator in practice is the choice of the bandwidth matrix . in the multivariate settingthere are several levels of sophistication at the time of specifying the bandwidth matrix to be used in the kernel estimator ( see * ? ? ?* chapter 4 ) .the most general bandwidth type consists of a symmetric positive definite matrix ; it allows the kernel estimator to smooth in any direction whether coordinate or not .this general class of bandwidth matrices can be constrained to consider positive definite diagonal matrices , allowing for different degrees of smoothing along each of the coordinate axis , or even further to consider a bandwidth matrix involving only a positive scalar multiple of the identity matrix , meaning that the same smoothing is applied to every coordinate direction . 
as noted in the density estimation context, the single-parameter class should not be used for unscaled data or, as stated in terms of feature space analysis, the validity of a euclidean metric for the feature space should at least be checked before using this bandwidth class. the simpler parameterizations are in general more widely used than the unconstrained counterpart for two reasons: first, in practice they need fewer smoothing parameters to be tuned, and second, because of the difficulties encountered in the mathematical analysis of estimators with unconstrained bandwidths. however, a detailed error analysis of kernel density derivative estimators with unconstrained bandwidths has shown that the use of the simpler parameterizations can lead to a substantial loss in terms of efficiency, and that this problem becomes more and more important as the order of the derivative to be estimated increases. an optimal bandwidth selector has also been proposed for the normal case, but without more sophisticated data-driven choices of the bandwidth matrix with applicability to more general densities, which is crucial to make density derivative estimation useful in practice. along the same lines, it has been argued that most existing bandwidth selection methods for the mean shift algorithm, all of them for the single-parameter class of bandwidths, are based on empirical arguments. in the univariate case there exist some approaches to bandwidth selection for density derivative estimation: a cross-validation method was introduced and shown to be optimal; the relative rate of convergence of this method, and also of a plug-in proposal, was derived; two root selectors in the fourier domain were studied; and more recent work has focused on the smoothed cross-validation bandwidth selector for the density derivative. in the multivariate case, however, the issue of automatic bandwidth selection for density derivative estimation has remained largely unexplored. given the smaller body of multivariate density estimation research as compared to its univariate cousin, it is not surprising that multivariate density derivative estimation suffers equally (if not more so) from a lack of solid results. to the best of our knowledge, the only published approaches in the literature to bandwidth selection for multivariate kernel estimation of density derivatives are two recent papers, but both focus exclusively on the first derivative. this paper proposes three new methods for unconstrained bandwidth matrix selection for the multivariate kernel density derivative estimator, and explores their applicability to other related statistical problems. in section [ sec:2 ] we introduce the mathematical framework for the analysis of multivariate derivatives.
in section [ sec:3 ]we show that the relative rate of convergence of these unconstrained selectors is the same as for the classes of simpler bandwidth matrices , so that from an asymptotic point of view our methods can be as successful as ( and more flexible than ) those needing less smoothing parameters .the finite - sample behaviour of the new bandwidths is investigated in section [ sec:5 ] , and their application to develop new data - driven nonparametric clustering methods via the mean shift algorithm is explored in section [ sec : ms ] , and for feature significance in section [ sec : feature ] .finally , the proofs of the results are given in an appendix .the problem of estimating the -th derivative of a multivariate density is considered in this section . from a multivariate point of view, the -th derivative of a function is understood as the set of all its partial derivatives of order , rather than just one of them .notice that , for instance , in a multivariate taylor expansion of order all of the partial derivatives of order are needed to compute the -th order term . or , in another related example , all the second order partial derivatives are involved in the computation of the hessian matrix .all the -th partial derivatives can be neatly organized into a single vector as follows : if is a real -variate density function and , denote by the first derivative ( gradient ) operator . all the second orderpartial derivatives can be organized into the hessian matrix , and the hessian operator can be formally written as if the usual convention is taken into account . for , however , it is not that clear how to organize the set containing all the partial derivatives of order .here we adopt the unified approach used in or ( * ? ? ? *section 1.4 ) , where the -th derivative of is defined to be the vector . in the previous equation denotes the -th kronecker power of the operator ; see , e.g. , for the definition of the kronecker product . naturally , , and , for example , , where denotes the operator which concatenates the columns of a matrix into a single vector . herewe study the problem of estimating the -th derivative from a sample of independent and identically distributed random variables with common density .the usual kernel estimator of is defined as , where the kernel is a spherically symmetric density function , the bandwidth is a symmetric positive definite matrix and .thus , the most straightforward estimator of is just the -th derivative of , given by , where the roles of and can be separated for implementation purposes by noting that , as shown in , where for any matrix it is understood that and .see , however , for other possible estimators in the univariate context . for the density estimation case ( ) , showed that the use of bandwidths belonging to the class , with the identity matrix , or the class , may lead to dramatically less efficient estimators than those based on bandwidth matrices drawn from , the space of all positive definite symmetric matrices .moreover showed that the issue of efficiency loss is even more severe for .so the development of unconstrained bandwidth selectors for density derivative estimation , which is achieved in this paper , may also represent an important improvement in practice . 
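to make the estimator of this section concrete, a direct (unvectorized) python sketch for the gradient case $r = 1$, with a gaussian kernel and an unconstrained bandwidth matrix, might read as follows (the sample and the value of the bandwidth matrix are arbitrary choices):

```python
import numpy as np

def kde_gradient(x, data, H):
    """kernel estimator of the density gradient at x with a gaussian kernel
    and an unconstrained bandwidth matrix H (symmetric positive definite),
    using  grad K_H(u) = -H^{-1} u K_H(u)."""
    d = len(x)
    Hinv = np.linalg.inv(H)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(H))
    grad = np.zeros(d)
    for Xi in data:
        u = x - Xi
        kH = norm * np.exp(-0.5 * (u @ Hinv @ u))    # K_H(x - Xi)
        grad += -(Hinv @ u) * kH
    return grad / len(data)

# sanity check: near the center of a symmetric cloud the gradient is ~ 0
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2))
H = np.array([[0.3, 0.1], [0.1, 0.4]])
print(kde_gradient(data.mean(axis=0), data, H))
```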
to measure the error committed by the kernel estimator for the sample at hand it is natural to consider the integrated squared error ( ise ) , defined as where denotes the usual euclidean norm .this quantity depends on the data , so it is common to consider the mean integrated squared error ] and -{\mathsf{d}}^{\otimes r}f({{\boldsymbol x}})\|^2d{{\boldsymbol x}} ] , they proposed the kernel estimator using a kernel with pilot bandwidth , possibly different from and . for the selection of the pilot bandwidth matrix , the same authors showed that the leading term of the mean squared error ] , where is shorthand for .furthermore , the relative rate of convergence of is if ({{\bf h}}_{\mise , r } ) \rbrace \operatorname{\boldsymbol{\mathbb{e}}}\lbrace { \mathsf{d}}_{{\bf h}}[\widehat{\mathrm{mise}}_r-\mathrm{mise}_r]({{\bf h}}_{\mise , r } ) \rbrace^\top \\ & + \operatorname{var}\lbrace { \mathsf{d}}_{{\bf h}}[\widehat{\mathrm{mise}}_r-\mathrm{mise}_r]({{\bf h}}_{\mise , r } ) \rbrace = o(n^{-2\alpha } { \mathbf{j}}_{d^2 } ) { \operatorname{vec}}{{\bf h}}_{\mise , r } { \operatorname{vec}^\top}{{\bf h}}_{\mise , r}.\end{aligned}\ ] ] the convergence rates of the three bandwidth selectors considered here are given in the following theorem , whose proof is deferred to the appendix .[ reroc ] suppose that assumptions ( a1)(a5 ) given in the appendix hold .the relative rate of convergence to is for the cross validation selector , and for the plug - in selector and the smoothed cross validation selector when . computed the relative rate of convergence for the cv and pi selectors for the estimation of a single partial derivative , using a single - parameter bandwidth matrix ( i.e. , ) .the previous theorem shows that the unconstrained cv bandwidth attains the same rate as its constrained counterpart , yet with added flexibility that should be captured in the constant coefficient of the asymptotic expression , although the computation of an explicit form for this coefficient does not seem possible in general .the convergence rate of the pi selector is within the single - parameter bandwidth class , yielding a slightly faster convergence to the optimal constrained bandwidth .as explained in for the density case , this is due to the fact that the very special cancellation in the bias term which is achievable when using a single - parameter bandwidth is not possible in general for the unconstrained estimator .nevertheless , the aforementioned papers showed that this slight loss in convergence rate terms is negligible in practice as compared with the fact that the targeted constrained optimal bandwidth is usually much less efficient than the unconstrained one ( see also section [ sec:5 ] below ) .theorem [ reroc ] also shows that the similarities noted in about the asymptotic properties of the pi and scv methods for the density estimation problem persist for , since both selectors exhibit the same relative rate of convergence . exemplified how slow is the rate of the cv selector for , by noting that has to be as large as so that . in the same spirit , to compare the rates obtained in theorem [ reroc ] , table [ rates - ratio ] shows the values of ( cv ) and ( pi and scv ) divided by , that is the rate for the cv selector which is used as a base case , for all the different combinations of , and .ratios which are lower than 1 indicate the rate is faster than the base case , and ratios greater than 1 a slower rate . 
for the smallest sample size considered, these ratios in table [ rates - ratio ] tend to be greater than 1, indicating that using such a sample size will lead to a deteriorated convergence rate. on the other hand, for the larger sample sizes, , these ratios tend to be less than 1. this implies that convergence rates better than that of the cv selector for bivariate density estimation can be attained, even with higher dimensions and higher-order derivatives, provided that sufficiently large (although still realistic) sample sizes are used. of course, this comparison only takes into account the asymptotic order of the convergence rates and ignores the associated coefficients, since explicit formulas for the latter are not available for . the finite-sample behaviour of the bivariate case for moderate sample sizes is examined more closely in the next section. [ table [ rates - ratio ] : comparison of the relative rate of convergence for the cv, pi and scv selectors; for each combination of , and , the left entry in the corresponding cell shows the cv rate and the right entry the pi and scv rate, both divided by the rate of the cv selector in the base case. ] in view of table [ tab:ari ], none of the methods compared is uniformly the best. in the group of the mean shift procedures, the use of the pi bandwidth seems to exhibit the best overall performance. the cv choice can be rated second best, with similar or even slightly (but not significantly) better average ari in some cases. the scv bandwidth shows an unexpectedly inferior performance for the normal mixture models, but it behaves acceptably for the models with non-standard cluster shapes. finally, the normal scale rule nr is clearly inferior in four out of the five models, but it performs surprisingly well for the broken ring model; since it is the least computationally intensive method, it could be useful at least to provide a quick initial analysis, especially in higher dimensions. the comparison with the parametric method mclust followed the expected pattern: for the normal mixture models mclust showed good results, especially for the difficult quadrimodal density, but it seems unable to adapt itself to situations with non-standard cluster shapes. on the contrary, clues is not very powerful for a standard setup with ellipsoidal clusters, but seems to perform reasonably well for non-standard problems. finally, pdfc shows remarkable results in the simulation study, in spite of the ad hoc choice of the bandwidth on which it is based, and its performance is comparable to that of the best mean shift procedure, with the only exception of the eye model. surely a more careful study of the bandwidth selection problem would further improve the quality of the pdfc method. the mean shift algorithm in conjunction with the newly proposed bandwidth selection rules was also applied to some real data sets. it is well known that the kernel density estimator tends to produce spurious bumps (i.e., unimportant modes caused by a single observation) in the tails of the distribution, and this problem seems enhanced in higher dimensions, due to the empty space phenomenon and the curse of dimensionality.
for real data sets, this may result in a number of data points forming singleton clusters after applying the mean shift algorithm. furthermore, in some applications the researcher may be interested in forming more homogeneous groups so that, say, insignificant groups of size less than of the biggest group are not allowed in the outcome of the clustering algorithm. this goal can be achieved as follows: apply the mean shift algorithm to the whole data set and identify all the data points forming groups of size less than of the biggest group; then leave those singular data points out of the estimation process in the mean shift algorithm and re-compute the data-based bandwidth and the density and density gradient estimators in ( [ eq:meanshift ] ) using only the non-singular data points. since the mean shift algorithm produces a partition of the whole space, these left-out data points can be naturally assigned to any of the corresponding newly obtained clusters. if this new assignment again contains insignificant clusters, then the process is iterated until the eventual partition satisfies the desired requirements. this correction is similar (although not identical) to the stage called "merging clusters based on the coverage rate" in , and will be referred to henceforth as the _correction for insignificant groups_; a sketch of it is shown below.
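a compact sketch of this correction follows (python, using scikit-learn's MeanShift; note that this implementation supports only a scalar bandwidth rather than the unconstrained matrices studied in this paper, keeps the bandwidth fixed whereas the procedure above recomputes it at each pass, and the iteration cap and threshold are illustrative defaults):

```python
import numpy as np
from sklearn.cluster import MeanShift

def mean_shift_with_correction(X, bandwidth, alpha=0.05, max_iter=10):
    """mean shift clustering with the correction for insignificant groups:
    points in clusters smaller than alpha times the biggest cluster are set
    aside, the clustering is recomputed on the remaining points, and finally
    every point is assigned to the resulting partition."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(max_iter):
        ms = MeanShift(bandwidth=bandwidth).fit(X[keep])
        sizes = np.bincount(ms.labels_)
        small = sizes < alpha * sizes.max()
        if not small.any():
            break
        keep_idx = np.flatnonzero(keep)
        keep[keep_idx[small[ms.labels_]]] = False   # drop insignificant clusters
    return ms.predict(X)                            # reassign all points
```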
the _e. coli_ data set is provided by the uci machine learning database repository. the original data were contributed by kenta nakai at the institute of molecular and cellular biology of osaka university. the data represent seven features calculated from the amino acid sequences of e. coli proteins, classified into eight classes according to their localization sites, labeled iml (2 observations), oml (5), ims (2), om (20), pp (52), imu (35), im (77) and cp (143). a more detailed description of this data set can be found in . since two of the original seven features are binary variables, only the remaining five continuous variables ( ), scaled to have unit variance, were retained for the cluster analysis. the number of groups identified by the mean shift procedure with correction for insignificant groups (using the default ) was 5 for the pi and scv bandwidths, which is the natural choice if the insignificant clusters iml, oml and ims are merged into bigger groups. the mean shift algorithm found 6 groups using the nr bandwidth and 7 with the cv bandwidth. since in this example the true cluster membership is available from the original data, it is also possible to compare the performance of the methods using the ari. the aris for these configurations were 0.63 (nr bandwidth), 0.671 (cv), 0.667 (pi) and 0.559 (scv). in contrast, clues and pdfc indicated a severely underestimated number of groups in the data, namely 3 and 2, respectively; whereas clues obtains a remarkably high ari anyway (0.697), the performance of pdfc is poor for this data set in ari terms (0.386). mclust also gives a reasonable answer, with 6 groups and an ari of 0.642. these data were introduced in , and consist of eight chemical measurements on olive oil samples from three regions of italy. the three regions r1, r2 and r3 are further divided into nine areas, with areas a1 (25 observations), a2 (56), a3 (206) and a4 (36) in region r1 (totalling 323 observations); areas a5 (65) and a6 (33) in region r2 (totalling 98); and areas a7 (50), a8 (50) and a9 (51) in region r3 (totalling 151). detailed cluster analyses of this data set are given in and . taking into account the compositional nature of these data, they were transformed following the guidelines in the latter reference, first dealing with the effect of rounded zeroes when the chemical measurement was below the instrument sensitivity level and then applying the additive log-ratio transform to place the data in a 7-dimensional euclidean space. then, cluster analysis was carried out over the first five principal components of the scaled euclidean variables. the results of the analysis indicated that whereas some methods seemed to target the partition of the data into major regions, others tried hard to discover the sub-structure of areas. this was clearly recognized when the aris of the groupings were computed either with respect to one classification or the other: naturally, if a method produced a grouping which was accurate with respect to the major regions, it had a lower ari with respect to the division into areas. clues, pdfc and the mean shift algorithm using the nr bandwidth clearly favoured grouping the data into the major categories. the pdfc method obtained a remarkable ari of 0.841 by clustering the data into 3 groups, whereas clues only found 2 groups, resulting in an ari of 0.680. using the nr bandwidth, the mean shift algorithm achieved an ari of 0.920 with respect to the true grouping into major regions; it correctly identified all the data points in regions r1 and r2, although region r3 appeared divided into several subregions. in contrast, mclust and the mean shift algorithm combined with all the more sophisticated bandwidth selectors tended to produce groupings closer to the assignment into the smaller areas. mclust showed the existence of 8 groups and achieved an ari of 0.739 with respect to the true distribution into areas. the mean shift analyses with the cv, pi and scv bandwidths all found 7 groups, leading to aris of 0.741 (cv bandwidth), 0.791 (pi) and 0.782 (scv).
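the compositional preprocessing applied to the olive oil data can be sketched as follows (python; a simple multiplicative zero replacement with an arbitrary eps stands in, as a hedged approximation, for the more careful treatment of rounded zeroes followed in the reference cited above):

```python
import numpy as np

def alr_transform(X, eps=1.0e-5):
    """additive log-ratio transform for compositional rows of X (each row
    summing to 1): zeroes are replaced by eps and the remaining parts are
    rescaled to preserve the unit sum, then alr maps the d-part simplex
    into (d-1)-dimensional euclidean space via log(x_j / x_d)."""
    X = np.asarray(X, dtype=float)
    zeros = X == 0.0
    scale = 1.0 - eps * zeros.sum(axis=1, keepdims=True)
    X = np.where(zeros, eps, X * scale)
    return np.log(X[:, :-1] / X[:, -1:])

comp = np.array([[0.70, 0.20, 0.10],
                 [0.50, 0.00, 0.50]])
print(alr_transform(comp))
```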
it is not always easy to visually interpret estimates of multivariate derivatives. to assist us, we use the significant negative curvature regions of , defined as the set containing the values of such that the null hypothesis that the hessian is positive definite is significantly rejected. the appropriate kernel test statistic, null distribution and adjustment for multiple testing are outlined in and implemented in the `feature` library in `r`. significant negative curvature regions correspond to modal regions in the density function, and hence to local maxima in data density. these authors focused on the scale-space approach to smoothing and so did not develop optimal bandwidth selectors for their density derivative estimates. here, in figure [ fig:earthquake ] we compare the significant curvature regions obtained using a usual bandwidth selector to those obtained with an optimal bandwidth, on the earthquake data from . the recorded measurements are the latitude and longitude (in degrees) and depth (in km) of the epicenters of 510 earthquakes. here, negative longitude indicates west of the international date line, and negative depth indicates distances below the earth's surface. the depth is transformed using log(depth). for these transformed data, we use pi selectors and , and scv selectors and . as expected from asymptotic theory, bandwidths for hessian estimation are larger in magnitude than bandwidths for density estimation. moreover, only the central modal region is present using , whereas with the three local modal regions are more clearly delimited from the surrounding space, confirming the three modes obtained with subjective bandwidth selection by . [ figure [ fig:earthquake ] : (upper left) plug-in selector . (upper right) plug-in selector . (lower left) scv selector . (lower right) scv selector . the significant curvature regions or modal regions are more clearly delimited from the surrounding scatter point cloud with the selectors corresponding to the second derivative. ] *acknowledgments.* grant mtm2010-16660 (both authors) from the spanish ministerio de ciencia e innovación, and various fellowships (second author) from the institut curie, france, and the institute of translational sciences, france, have supported this work. henceforth the following assumptions are made: 1. is a symmetric -variate density such that and all its partial derivatives up to order are bounded, continuous and square integrable. 2. is a density function with all its partial derivatives up to order bounded, continuous and square integrable. 3.
is a sequence of bandwidth matrices such that all entries of and tend to zero as .these do not form a minimal set of assumptions , but they serve as useful starting point for the results that we subsequently develop . besides , in this section integrals without any integration limits are assumed to be integrated over the appropriate euclidean space .we also assume that suitable regularity conditions are satisfied so that the exchange of term - by - term integration and differentiation of taylor expansions are well - defined .reasoning as in lemma 1 in , it follows that is asymptotically equivalent to ^{-1}{\mathsf{d}}_{{\bf h}}[\widehat{\mathrm{mise}}_r-\mathrm{mise}_r]({{\bf h}}_{\mise , r}) ] . since =\mise_r({{\bf h}})-\operatorname{tr}{\mathbf{r}}({\mathsf{d}}^{\otimes r}f) ] , by standard -statistics theory the previous variance is of the same order as where \\ { \mathbf{\xi}}_2&=\operatorname{\boldsymbol{\mathbb{e}}}[\boldsymbol{\varphi}_{{\bf h}}({{\bf x}}_1-{{\bf x}}_2)\boldsymbol{\varphi}_{{\bf h}}({{\bf x}}_1-{{\bf x}}_2)^\top]\\ { \mathbf{\xi}}_0&=\operatorname{\boldsymbol{\mathbb{e}}}[\boldsymbol{\varphi}_{{\bf h}}({{\bf x}}_1-{{\bf x}}_2)]\operatorname{\boldsymbol{\mathbb{e}}}[\boldsymbol{\varphi}_{{\bf h}}({{\bf x}}_1-{{\bf x}}_2)]^\top\end{aligned}\ ] ] with of the order of , namely having all its entries of order . the following lemma provides an explicit expression for the function that will be helpful to evaluate . [lem : varphi ] the function can be explicitly expressed as where the function is given by and the matrices , are defined as \end{aligned}\ ] ] where we understand that . since , its differential is decomposed into three terms from , the differentials involved in the first two terms can be expressed as d { \operatorname{vec}}{{\bf h}},\end{aligned}\ ] ] where is a matrix such that .for the third term , since {\operatorname{vec}}{{\bf h}}^{\otimes -r}={\operatorname{vec}}\big({{\bf i}}_d[{\mathsf{d}}({\mathsf{d}}^\top)^{\otimes 2r}]{\operatorname{vec}}{{\bf h}}^{\otimes -r}\big)=({\operatorname{vec}^\top}{{\bf h}}^{\otimes - r}\otimes{{\bf i}}_d){\mathsf{d}}^{\otimes 2r+1} ] , where , which also depends on through and . besides , =\iint(\boldsymbol\varphi\boldsymbol\varphi^\top)({{\boldsymbol z}})f({{\boldsymbol y}})f({{\boldsymbol y}}+{{\bf h}}^{1/2}{{\boldsymbol z } } ) d{{\boldsymbol y}}d{{\boldsymbol z}}\sim r(f)\int\boldsymbol\varphi({{\boldsymbol z}})\boldsymbol\varphi({{\boldsymbol z}})^\top d{{\boldsymbol z}}\ ] ] which , in view of lemma [ lem : varphi ] , leads to . 1 . is a symmetric -variate density such that and all its partial derivatives up to order are bounded , continuous and square integrable .2 . is a sequence of bandwidth matrices such that all entries of and tend to zero as . to make use of lemma [ lem : asymhr ] once more ,notice that the difference between the mise and its estimate is so taking into account again , we come to &\sim(-1)^r \tfrac{m_2(k)^2}{2}(\operatorname{vec}^\top { { \bf i}}_{d^r}\otimes { { \bf i}}_{d^2 } \otimes \operatorname{vec}^\top { { \bf h}})(\hat{\boldsymbol{\psi}}_{2r+4}({{\bf g } } ) - \boldsymbol{\psi}_{2r+4}),\end{aligned}\ ] ] so that the performance of is determined by the performance of as an estimator of . from theorem 2 in the optimal pilot bandwidth for the estimator is of order , leading to =o(n^{-4/(d+2r+6)}) ] so finally we arrive to by applying lemma [ lem : asymhr ] . 
as in , it can be shown that the function can be replaced for everywhere in the asymptotic analysis , since the difference between their respective minimizers is of relative order faster than , which is the fastest attainable rate in bandwidth selection .so to apply lemma [ lem : asymhr ] it is also possible consider instead of , hence we focus on analyzing the difference at of the same order as . to begin with ,note that using a fourth order taylor expansion of results in { \mathsf{d}}^{\otimes 2r+p } \bar{l}({{\bf g}}^{-1/2}{{\boldsymbol x } } ) \ , d{{\boldsymbol z}}\\ & = \tfrac{1}{4 } m_2(k)^2 |{{\bf g}}|^{-1/2 } ( { { \bf g}}^{-1/2})^{\otimes 2r } [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ( { { \bf g}}^{-1/2})^{\otimes 4 } ] { \mathsf{d}}^{\otimes 2r+4 } \bar{l}({{\bf g}}^{-1/2}{{\boldsymbol x}})\\ & = \tfrac{1}{4 } m_2(k)^2|{{\bf g}}|^{-1/2 } [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] ( { { \bf g}}^{-1/2})^{\otimes ( 2r+4 ) } { \mathsf{d}}^ { \otimes 2r+4 } \bar{l}({{\bf g}}^{-1/2}{{\boldsymbol x}})\\ & = \tfrac{1}{4 } m_2(k)^2 [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] { \mathsf{d}}^ { \otimes 2r+4 } \bar{l}_{{\bf g}}({{\boldsymbol x}}),\end{aligned}\ ] ] where we have made use of the fact that and , and that the entries of tend to zero as a consequence of ( a3 ) and ( a5 ) . this asymptotic approximation is then used to expand the terms in = ( -1)^r { \operatorname{vec}^\top}{{\bf i}}_{d^r } \bigg\ { n^{-1 } \bar{\delta}_{{\bf h}}*{\mathsf{d}}^{\otimes 2r } \bar{l}_{{\bf g}}(0 ) \\ + ( 1-n^{-1})\operatorname{\boldsymbol{\mathbb{e}}}\big [ ( \bar{\delta}_{{\bf h } } * { \mathsf{d}}^{\otimes 2r } \bar{l}_{{\bf g}})({{\bf x}}_1 - { { \bf x}}_2)\big ] - \int \bar{\delta}_{{\bf h } } * { \mathsf{d}}^{\otimes 2r } f({{\boldsymbol x } } ) f({{\boldsymbol x } } ) \ , d{{\boldsymbol x}}\bigg\}.\end{gathered}\ ] ] precisely , for the first term we have ({{\bf g}}^{-1/2})^{\otimes 2r+4 } { \mathsf{d}}^{\otimes 2r+4 } \bar{l}(0),\end{aligned}\ ] ] and for the second term \\ & \sim \tfrac{1}{4}m_2(k)^2[{{\bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] \iint { \mathsf{d}}^{\otimes 2r+4 } \bar{l}_{{\bf g}}({{\boldsymbol x}}- { { \boldsymbol y } } ) f({{\boldsymbol x } } ) f({{\boldsymbol y } } ) \ , d{{\boldsymbol x}}d{{\boldsymbol y}}\\ & = \tfrac{1}{4}m_2(k)^2 [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] \iint \bar{l}_{{\bf g}}({{\boldsymbol x}}- { { \boldsymbol y } } ) { \mathsf{d}}^{\otimes 2r+4}f({{\boldsymbol x } } ) f({{\boldsymbol y } } ) \ , d{{\boldsymbol x}}d{{\boldsymbol y}}\\ & \sim \tfrac{1}{4}m_2(k)^2 [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] \iint \bar{l } ( { { \boldsymbol w } } ) \sum_{p=0}^2 \frac{(-1)^p}{p!}[{{\bf i}}_{d^{2r+4 } } \otimes ( { { \boldsymbol w}}^\top { { \bf g}}^{1/2})^{\otimes p } ] \\& \quad \times { \mathsf{d}}^{\otimes 2r+4+p } f({{\boldsymbol y } } ) f({{\boldsymbol y } } ) \ , d{{\boldsymbol w}}d{{\boldsymbol y}}\\ & = \tfrac{1}{4}m_2(k)^2 [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } ] \sum_{p=0}^2 \frac{(-1)^p}{p!}[{{\bf i}}_{d^{2r+4 } } \otimes \{{{\boldsymbol \mu}}_p(\bar{l})^\top ( { { \bf g}}^{1/2})^{\otimes p}\}]{{\boldsymbol \psi}}_{2r+4+p}\\ & = \tfrac{1}{4}m_2(k)^2 [ { { \bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2}]{{\boldsymbol 
\psi}}_{2r+4 } + \tfrac{1}{4}m_2(k)^2 m_2(l)[{{\bf i}}_{d^{2r } } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } \otimes { \operatorname{vec}}^\top { { \bf g } } ] { { \boldsymbol \psi}}_{2r+6},\end{aligned}\ ] ] since and . finally , noting that and making use of the previously obtained expansion for , the third term is { { \boldsymbol \psi}}_{2r+4}.\end{aligned}\ ] ] thus, & \sim \tfrac{1}{4 } m_2(k)^2 n^{-1}|{{\bf g}}|^{-1/2}[{\operatorname{vec}}^\top{{\bf i}}_{d^r } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2}]({{\bf g}}^{-1/2})^{\otimes 2r+4 } { \mathsf{d}}^{\otimes 2r+4 } \bar{l}(0 ) \\ & \quad + \tfrac{1}{4}m_2(k)^2 m_2(l)[{\operatorname{vec}}^\top{{\bf i}}_{d^r } \otimes ( { \operatorname{vec}^\top}{{\bf h}})^{\otimes 2 } \otimes { \operatorname{vec}}^\top { { \bf g } } ] { { \boldsymbol \psi}}_{2r+6}\end{aligned}\ ] ] calculations in section [ sec:3 ] give is order , as for the plug - in selector , so substituting to this into the derivative of the previous equation yields \ } & = o ( [ n^{-1 } |{{\bf g}}|^{-1/2}(\operatorname{tr}{{\bf g}})^{-r-2 } + \operatorname{tr}{{\bf g } } ] { \mathbf{j}}_{d^2 } ) { \operatorname{vec}}{{\bf h}}\\ & = o(n^{-2/(2r+d+6 ) } { \mathbf{j}}_{d^2 } ) { \operatorname{vec}}{{\bf h}}.\end{aligned}\ ] ] lemma [ lem : asymhr ] shows that is asymptotically equivalent to ({{\bf h}}_{\mise , r}) ] is dominated by its squared bias term , then .forina m. , armanino c. , lanteri s. and tiscornia e. ( 1983 ) classification of olive oils from their fatty acid composition .in : h. martens and h. j. russwurm ( eds . ) , _ food research and data analysis _ , applied science publishers , london , pp .189214 .horton , p. and nakai , k. ( 1996 ) a probabilistic classification system for predicting the cellular localization sites of proteins .proceedings of _ intelligent systems in molecular biology ( ismb-96 ) _ , 109115 .zeng , q.t . ,pratt , j.p ., pak , j. , ravnic , d. , huss , h. and mentzer , s.j .( 2007 ) feature - guided clustering of multi - dimensional flow cytometry datasets ._ journal of biomedical informatics _, * 40 * , 325331 .
important information concerning a multivariate data set, such as clusters and modal regions, is contained in the derivatives of the probability density function. despite this importance, nonparametric estimation of higher order derivatives of density functions has received only relatively scant attention. kernel estimators of density functions are widely used as they exhibit excellent theoretical and practical properties, though their generalization to density derivatives has progressed more slowly due to the mathematical intractabilities encountered in the crucial problem of bandwidth (or smoothing parameter) selection. this paper presents the first fully automatic, data-based bandwidth selectors for multivariate kernel density derivative estimators. this is achieved by synthesizing recent advances in matrix analytic theory which allow mathematically and computationally tractable representations of higher order derivatives of multivariate vector-valued functions. the theoretical asymptotic properties as well as the finite sample behaviour of the proposed selectors are studied. in addition, we explore in detail the applications of the new data-driven methods to two other statistical problems: clustering and bump hunting. the introduced techniques are combined with the mean shift algorithm to develop novel automatic, nonparametric clustering procedures which are shown to outperform mixture-model cluster analysis and other recent nonparametric approaches in practice. furthermore, the advantage of using smoothing parameters designed for density derivative estimation in feature significance analysis for bump hunting is illustrated with a real data example. _ keywords: _ adjusted rand index, cross validation, feature significance, nonparametric kernel method, mean integrated squared error, mean shift algorithm, plug-in choice
average distance is one of the most important measurements characterizing complex networks , which is a subject attracting a lot of interest in the recent physics literature .extensive empirical studies showed that many , perhaps most , real networks exhibit remarkable small - world phenomenon , with their average distance grows as a function of network order ( i.e. , number of nodes in a network ) , or slowly .as a fundamental topological property , average distance is closely related to other structural characteristics , such as degree distribution , centrality , fractality , symmetry , and so forth .all these features together play significant roles in characterizing and understanding the complexity of networks .moreover , average distance is relevant to various dynamical processes occurring on complex networks , including epidemic spreading , target search , synchronization , random walks , and many more .in addition to the small - world behavior , other two prominent properties that seem to be common to real networks , especially biological and social networks , are scale - free feature and modular structure .the former implies that the networks obey a power - law degree distribution as with , while the latter means that the networks can be divided into groups ( modules ) , within which nodes are more tightly connected with each other than with nodes outside . in order to describe simultaneously the two striking properties , ravasz and barabsi ( rb ) presented a famous model , mimicking scale - free modular networks .many topological properties of and dynamical processes on the rb model have been investigated in much detail , including degree distribution , clustering coefficient , betweenness centrality distribution , community structure , random walks , among others .particularly , by mapping the networks onto a potts model in one - dimensional lattices , noh proved that the rb model is small - world . in this paper , we study the average distance in the rb model by using an alternative approach very different from the previous one . our computation method is based on the particular deterministic construction of the rb model .concretely , making use of the self - similar structure of the scale - free modular networks , we establish some recursion relations , from which we further derive the exactly analytical solution to the average distance .our obtained rigorous expression is compatible with the previous formula .we show that the rb model is small - world .we also show that the small - world behavior is a natural result of the scale - free and modular architecture of the networks under consideration .we first introduce the rb model for the scale - free modular networks , which are built in an iterative way .let stand for the network model after ( ) iterations ( i.e. , number of generations ) . initially ( ) , the model is composed ( ) nodes linked by edges forming a complete graph , among which a node ( e.g. , the central node in figure [ network ] ) is called hub ( or root ) node , and the other nodes are named peripheral nodes .at the second generation ( ) , replicas of are created with the peripheral nodes of each copy being connected to the root of the original . 
in this way, we obtained , the hub and peripheral nodes of which are the hub of the original and the peripheral nodes in the duplicates of , respectively .suppose one has , the next generation network can be obtained by adding copies of to the primal , with all peripheral nodes of the replicas being linked to the hub of the original unit .the hub of the original and the peripheral nodes of the copies of form the hub node and peripheral nodes of , respectively .repeating indefinitely the two steps of replication and connection , one obtains the scale - free modular networks .figure [ network ] illustrates a network for the particular case of . for the case of .the filled squares and circles represent the hub node and peripheral nodes , respectively . ] many interesting quantities of the model can be determined explicitly . in ,the network order , denoted by is ; the degree $ ] of the hub node is the largest among all nodes ; the number of peripheral nodes , forming a set , is ; and the average degree is approximately equal to a constant in the limit of infinite , showing that the networks are sparse .the model under consideration is in fact an extension of the one proposed in and studied in much detail in .it presents some typical features observed in a variety of real - world systems .its degree distribution follows a power - law scaling with a general exponent belonging to the interval .its average clustering coefficient tends to a large constant dependent on ; and its average distance grows logarithmically with the network order , both of which show that the model is small - world .in addition , the betweenness distribution of nodes also obeys the power - law behavior with the exponent regardless of the parameter .particularly , the whole class of the networks shows a remarkable modular structure .these peculiar structural properties make the networks unique within the category of complex networks .as shown in the introduction section , average distance is closely related to many topological properties of and various dynamical processes on complex networks . in what follows, we will derive analytically the average distance of the scale - free modular networks by applying an alternative method completely different from that in .we represent all the shortest path lengths of network as a matrix in which the entry is the distance between nodes and that is the length of a shortest path joining and .a measure of the typical separation between two nodes in is given by the average distance defined as the mean of distances over all pairs of nodes : where denotes the sum of the distances between two nodes over all couples . notice that in eq .( [ total01 ] ) , for a pair of nodes and ( ) , we only count or , not both .( color online ) schematic illustration of the means of construction of the scale - free modular networks . is obtained by joining replicas of denoted as , which are connected to one another by linking all the peripheral nodes of ( ) to the hub node ( denoted by ) of . ]we continue by exhibiting the procedure of determining the total distance and present the recurrence formula , which allows us to obtain of the generation from of the generation .the studied network has a self - similar structure that allows one to calculate analytically . by construction ( see figure [ labeling ] ) , network is obtained by joining copies of that are labeled as , , , . 
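the iterative construction just described is easy to realize numerically, which provides a brute-force check on the analytical results that follow: build the network generation by generation and measure the average distance directly by breadth-first search. in the sketch below the choices m = 4 and four generations are illustrative only.
....
// brute-force check of the construction described above: grow the network
// for G generations, then compute the average distance by bfs from every
// node. the values m = 4 and G = 4 are illustrative choices only.
#include <cstdio>
#include <queue>
#include <vector>

int main() {
    const int m = 4, G = 4;
    std::vector<std::vector<int>> adj(m);
    for (int i = 0; i < m; ++i)            // generation 1: complete graph K_m
        for (int j = i + 1; j < m; ++j) {
            adj[i].push_back(j);
            adj[j].push_back(i);
        }
    const int hub = 0;                     // node 0 is the hub throughout
    std::vector<int> peri;                 // current peripheral nodes
    for (int i = 1; i < m; ++i) peri.push_back(i);

    for (int g = 2; g <= G; ++g) {
        const int n = (int)adj.size();
        const std::vector<std::vector<int>> base = adj;  // snapshot of previous generation
        for (int c = 1; c < m; ++c) {                    // attach m - 1 replicas
            const int off = c * n;
            for (int u = 0; u < n; ++u) {
                adj.emplace_back();                      // node off + u
                for (int v : base[u]) adj[off + u].push_back(off + v);
            }
            for (int p : peri) {                         // replica peripherals -> hub
                adj[off + p].push_back(hub);
                adj[hub].push_back(off + p);
            }
        }
        std::vector<int> newPeri;                        // peripherals of the copies
        for (int c = 1; c < m; ++c)
            for (int p : peri) newPeri.push_back(c * n + p);
        peri.swap(newPeri);
    }

    const int N = (int)adj.size();
    long long total = 0;                                 // sum over ordered pairs
    for (int s = 0; s < N; ++s) {
        std::vector<int> dist(N, -1);
        std::queue<int> q;
        dist[s] = 0;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u])
                if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
        }
        for (int v = 0; v < N; ++v) total += dist[v];
    }
    std::printf("order n = %d, average distance = %.6f\n",
                N, (double)total / ((double)N * (N - 1)));
    return 0;
}
....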
using this self - similar property , the total distance satisfies the recursion relation where is the sum over all shortest path length whose endpoints are not in the same branch .the paths that contribute to must all go through the hub node , where the copies of are connected .hence , to determine , all that is left is to calculate . the analytic expression for , referred to as the crossing path length , can be derived as below .let be the sum of the lengths of all shortest paths whose endpoints are in and , respectively .according to whether the two branches are one link long or two links long , we split the crossing paths into two categories : the first category composes of crossing paths ( ) , while the second category consists of crossing paths with , , and .it is easy to see that the numbers of the two categories of crossing paths are and , respectively .moreover , any two crossing paths in the same category have the same length .thus , the total sum is given by having in terms of the quantities of and , the next step is to explicitly determine the two quantities . to calculate the crossing distance and , we give the following notation . for an arbitrary node in network ,let be the smallest value of the shortest path length from to any of the peripheral nodes belonging to , and the sum of for all nodes in is denoted by .analogously , in let denote the distance from a node to the hub node , and let stand for the total distance between all nodes in and the hub node in , including itself . by definition , can be given by the sum +(m-1)\,\sum_{v\in h_{g}}f_v(g)\nonumber \\ & = & ( m-1)\,f_{g}+n_g+m_g\,,\end{aligned}\ ] ] and can be written recursively as \nonumber \\ & = & m_g+(m-1)(f_{g}+n_g)\,.\end{aligned}\ ] ] using , and considering and , the simultaneous equations ( [ bottom01 ] ) and ( [ hub01 ] ) can be solved inductively to obtain : \ ] ] and with above obtained results , we can determine and , which can be expressed in terms of these explicitly determined quantities . by definition , is given by the sum \nonumber \\ & = & \sum_{v \in h_{g}^{(2 ) } } \sum_{u \in h_{g}^{(1 ) } } h_u(g)+\sum_{u \in h_{g}^{(1)}}\sum_{v \in h_{g}^{(2)}}[1+f_v(g ) ] \nonumber \\&=&n_g\,m_g+(n_g)^2+n_g\,f_g\,.\end{aligned}\ ] ] inserting eqs .( [ bottom02 ] ) and ( [ hub02 ] ) into ( [ cross03 ] ) , we have \,.\ ] ] proceeding similarly , \nonumber \\&=&2\,m^{2g-4}\left[m^2+(2g-3)m-2g+4\right]\,.\end{aligned}\ ] ] substituting eqs .( [ cross04 ] ) and ( [ cross05 ] ) into ( [ cross01 ] ) , we get substituting eq .( [ cross06 ] ) into ( [ total01 ] ) and using the initial value , we can obtain the exact expression for the total distance \,.\ ] ] the expression provided by eq .( [ total03 ] ) is consistent with the result previously obtained .then the analytic expression for average distance can be obtained as average distance versus network order on a semi - logarithmic scale .the solid lines are guides to the eye . ]we have also checked our rigorous result provided by eq .( [ apl02 ] ) against numerical calculations for different and various . in all the caseswe obtain a complete agreement between our theoretical formula and the results of numerical investigation , see figure [ avedis ] .we continue to express the average distance as a function of network order , in order to obtain the scaling between these two quantities . recalling that , we have .hence eq .( [ apl02 ] ) can be rewritten as in the infinite network order limit , i.e. 
, thus , for large networks , the leading behavior of average distance grows logarithmically with increasing network order .the above observed small - world phenomenon that the leading behavior of average distance is a logarithmic function of network order can be accounted for by the following heuristic arguments based on the peculiar architecture of the networks . at first sight , this family of modular networks is not a very compact system , since in these networks , nodes with large degrees are not directly linked to one another , but connected to those nodes with small degree .however , this network family is made up of many small densely interconnected clusters , which combine to form larger but less compact groups connected by nodes with relatively high degrees .for node pairs in a small group , their shortest path length is very small because of the high cohesiveness of small modules . for the length of shortest paths between two nodes belonging to different large groups, it seems long because the groups that the nodes lie at are not adjacent to each other .but this is not the fact . by construction , although the relatively large groups are not directly adjacent , they are joined by some large nodes , which are connected to each other by a layer of intermediate small - degree nodes ( see figure [ network ] ) , such as the peripheral nodes or locally peripheral nodes .thus , different from conventional random scale - free networks , especially assortative networks , in the studied scale - free modular networks , although large - degree nodes are not connected to one another , they play the role of bridges linking different modules together , which is the main reason why the average distance of the networks is small .it deserves to be mentioned that , although the studied modular scale - free networks display small - world behavior , the logarithmic scaling of average distance with respect to network order is different from the sublogarithmic scaling for conventional non - modular stochastic scale - free networks with degree distribution exponent , in which the average distance behaves as a double logarithmic scaling with network order , namely , .thus , despite that the degree distribution exponent of the modular scale - free networks is smaller than 3 , their average distance is larger than that of their random counterparts with the same network order .the root of this difference may also lie with the modular structure , particularly the indirect connection of large nodes , as addressed above .the genuine reasons for this dissimilarity need further studies in the future .the determination and analysis of average distance is important to understand the complexity of and dynamic processes on complex networks , which has been a subject of considerable interest within the physics community . in this paper, we investigated analytically the average distance in a class of deterministically growing networks with scale - free behavior and modular structure , which exist simultaneously in a plethora of real - life networks , such as social and biological networks . based on the self - similar structure of the networks ,we derived the closed - form expression for the average distance .the obtained exact solution shows that for very large networks , they are small - world with their average distance increasing as a logarithmic function of network order . 
we confirmed the rigorous solution by using extensive numerical simulations .we also showed that the small - world behavior lies with the inherent modularity and scale - free property of the networks .we would like to thank xing li for his support .this research was supported by the national natural science foundation of china under grants no .60704044 , no . 60873040 , and no .60873070 , the national basic research program of china under grant no .2007cb310806 , shanghai leading academic discipline project no .b114 , the program for new century excellent talents in university of china ( grants no .ncet-06 - 0376 ) , and shanghai committee of science and technology ( grants no . 08dz2271800 and no .09dz2272800 ) .
various real-life networks of current interest are simultaneously scale-free and modular. here we study analytically the average distance in a class of deterministically growing scale-free modular networks. by virtue of the recursive relations derived from the self-similar structure of the networks, we compute this important quantity rigorously, obtaining an explicit closed-form solution which recovers the previous result and is corroborated by extensive numerical calculations. the obtained exact expression shows that the average distance scales logarithmically with the number of nodes in the networks, indicating the existence of small-world behavior. we show that this small-world phenomenon stems from the peculiar architecture of the network family.
the transient radio sky, i.e. time-variable radio sources in space, has been recognized as one of the key science drivers for the square kilometre array (ska), which will be reflected in its design and operations. this science area is marked as ``exploration of the unknown'', which includes the likely discovery of new classes of objects and phenomena. yet the statistical properties of the transient radio sky remain largely unknown; this includes even the high-energy transients, as they seem to unfold on very short time scales. in a detection approach for arrays such as mwa, lofar and ska, we propose to acknowledge the need for reducing data transport and computational cost while searching for fast transients. therefore, we suggest that such detection needs to be done in stages, with the data to be stored and transported reduced at every subsequent stage. firstly, the presence of an abrupt change in raw time-domain data needs to be identified. secondly, evidence of the extraterrestrial nature of the detected signal needs to be found. thirdly, if such evidence is present, more detailed analysis can be applied. in this paper we present an attempt to develop a reliable and cost-effective technique for the first stage of detection using a statistical signal detection approach. the proposed method of handling received time-domain sequences is based on an online abrupt-change detection scheme. the algorithm operates in the time domain and is required to identify the onset of the signal by checking whether the threshold of the detection process has been reached. let us consider a signal with the following structure: where t is time, n(t) is a gaussian noise process with mean and variance, and s(t) is a gaussian signal with mean and variance. let and, since we assume that signal and noise are independent, . and are the hypotheses under which the signal of interest is either absent (hypothesis) or present (), along with the additive background noise n(t). as we demonstrate further, the proposed utilization of the algorithm known as the cumulative sum (cusum), introduced by, allows the mean alone to be a sufficient parameter when building a detector. the change-detection scheme used in our transient identification algorithm is based on two generic detection procedures: the log likelihood ratio (llr) and cusum. the lr test is based on using the ratio of two likelihoods to build an indicator upon which a threshold can be applied. if the ratio exceeds a given threshold, it indicates the prevalence of one hypothesis over the other. the lr for gaussian data using the model given in ([eq2]) is: we consider identification successful when, where is a predefined threshold for identification. the llr is obtained by taking the logarithm of ([eq4]): equation ([eq5]) can now be expressed through either a mean- or a variance-based detector.
in the latter case ,assuming equal and zero mean , we obtain : assuming equal and unity variance , ( [ eq5 ] ) becomes : .thus , defining and rescaling the log likelihood expression by the proportionality value ( ) to obtain a threshold value .if the probability of type i error is fixed , the detection threshold can be expressed as .cusum is a detection procedure proposed by .cusum is a repeated llr test for a change from one known distribution to another .we assume that for known densities and there exists an unknown change point , where the input sequence if and if .the cusum statistics is the one that satisfies the following recursion : where is a likelihood ratio for evaluated at .the procedure raises an alarm at time for threshold , which is discussed later . if likelihood value from ( [ eq5 ] ) used with regard to variances, the cusum recursion would take the following form , based on the usual radiometric output being the averaged sum of the samples squares or the energy detector : with the parameter denoting variance - based coefficient .since it relies on the full knowledge of variances of the signal and noise , it is not always practical to use cusum algorithm with the recursion denoted in ( [ eq21 ] ) . applying ( [ eq13 ] ) cusum may be expressed as a procedure , which starts with , it recursively calculates : and stops as soon as exceeds threshold . generally speaking , equation ( [ eq23 ] ) can be rewritten as , where being a configurable parameter .choice of this parameter is ruled by the expected behavior of the procedure , as stated by : `` scoring is chosen so that the mean sample path on the chart when quality is satisfactory is downwards and is upwards when quality is unsatisfactory '' .as demonstrated by the optimal value for parameter r will be the largest acceptable mean value or the smallest unacceptable mean value .one of the benefits of cusum scheme for signal detection is its stability in the presence of regression behavior in input data , which is due to the gradual changes in the system temperature or changes of the overall sky temperature for large field of view or `` all - sky '' telescopes .choice between mean - based and variance - based cusum schemes depends on the type of the receiver used . for a linear output receiver ,sampled according to nyquist , the output represents voltage changes with time .both mean- and variance - based indicator function can be used .however , mean - based indicator is preferred because it does not rely on variance - based separation of noise and signal . for full power square - law detector, however , mean - based indicator function becomes unusable due to the loss of information on mean .therefore , variance - based cusum scheme should be used discounting the utilization of mean .cusum scheme requires a threshold value to be chosen , which , when crossed , identifies the point of abrupt change in the statistical characteristics of the signal .threshold value can be derived from wald test on the mean of a normal population . assuming that , the value of threshold is equivalent to a sequence of wald sequential tests with boundaries : where was interpreted by as a crude approximation to the proportion of samples that trigger false alarm .this value can also be interpreted as a probability of false alarm in a traditional sense . 
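to make the procedure concrete, the following minimal sketch runs the mean-based cusum recursion with the threshold logic described above on a synthetic stream whose mean shifts half-way through; the reference value r and the threshold h are illustrative tuning choices, not values prescribed by the text.
....
// minimal sketch of the mean-based cusum detector described above, applied
// to a synthetic voltage stream with a mean shift at sample 1000. the
// reference value r and the threshold h are illustrative choices only.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 1.0);
    const double r = 0.5;   // largest acceptable mean (tuning parameter)
    const double h = 5.0;   // alarm threshold
    double s = 0.0;
    for (int t = 0; t < 2000; ++t) {
        const double x = noise(rng) + (t >= 1000 ? 1.0 : 0.0); // mean shift
        s = (s + x - r > 0.0) ? s + x - r : 0.0;               // cusum recursion
        if (s > h) {
            std::printf("alarm raised at sample %d\n", t);
            s = 0.0;        // restart to look for further changes
        }
    }
    return 0;
}
....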
since we assumed that ( [ eq_threshold ] ) can be rewritten as .the algorithm was tested on the data obtained from parks radiotelescope .input data contained 1 second observation from the vela pulsar psr j0835 - 4510 .the observation was sampled at 1416 mhz with 64 mhz bandwidth .psr j0835 - 4510 has a period of 89 ms .figure [ fig : vela_det ] represents a portion of the observed data with the detected pulses marked by vertical red lines , with blue lines representing 0.1 second intervals . out of 10 impulses contained in the test data ,9 were identified .90% of pulses being correctly identifed prove the applicability of cusum procedure described above upon the raw , time - domain radio data .detectability of a signal is controled by and relied upon the preset threshold , which is calculated based on the assumed probability distribution of data ( [ eq_threshold ] ) . ,observed at parks telescope , central frequency mhz , bandwidth mhz , recorded with 2-bit vlbi recorder .the recording started at .short bars mark intervals .vertical red lines mark the beginning of detected pulses ., width=623,height=136 ]we have presented the algorithm , which provides reliable and computationally efficient detection of dispersed radio transients in time domain .the algorithm is well - suited for being implemented on fpga or gpu platforms for real time detection on arrays such as ska .statistical methods used in the algorithm provide easy staging of detection process across multiple handling points giving the opportunity for significant reduction of data volume at each consequent stage of detection .the authors would like to thank aidan hotan of curtin institute of radioastronomy for making pulsar data available to us .basseville , m. , & nikiforov , i. , _ detection of abrupt changes : theory and application _ ,englewood cliffs , nj , prentice hall , 1993 box , g. , & ramirez , j. , _ cumulative score charts _ , quality and reliability international , 8,17 - 27 , 1992 chang , j. t. , & fricker , r.d .jr , _ detecting when a monotonically increasing mean has crossed a threshold _ , journal of quality technology , 31,2 , 217 , 1999 cordes , j. m. , _ the square kilometre array as a radio synoptic survey telescope : widefield surveys or transients , pulsars and eti _ , ska memo .ska , 2007 ( rev .fridman , p. , _ a method of detecting radio transients _ , http://arxiv.org/pdf/1008.3152 , 2010 gan , f. f. , _ cusum control charts under linear drift _ , the statistician , 41 , 71 , 1992 koenig , r.,_candidate sites for world s largest telescope face first big hurdle _ , science , 313 , 5789 , 910 , 2006 lorden , g. ,_ procedures for reacting to a change in distribution_ , annals of mathematical statistics , 42 , 1897 , 1971 manly , b. f. j. , _ the choice of a wald test on the mean of a normal population _ , biometrika , 57 , 1 , 91 , 1970 moustakides , g. , _ optimal stopping times for detecting changes in distributions _ ,annals of mathematical statistics , 14 , 1379 , 1986 page , e. , _ continuous inspection schemes _ , biometrika , 41 , 100 , 1954 van dobben de bruyn , c.s . , _ cumulative sum tests - theory and practice _ , hafner publishing co , new york , 1968
a computationally inexpensive algorithm for the detection of dispersed transients has been developed, using a cumulative sum (cusum) scheme for detecting abrupt changes in the statistical characteristics of the signal. the efficiency of the algorithm is demonstrated on the pulsar psr j0835-4510.
the cumulative normal distribution , be it univariate or multivariate , has to be evaluated numerically .there are numerous algorithms available , many of these having been fine - tuned , leading to faster evaluation and higher accuracy but also to lack of mathematical transparency. for the univariate case , has proposed a very simple and intuitive but powerful alternative that is based on taylor expansion of mills ratio or similar functions . in this notewe will extend marsaglia s approach to the bivariate case .this will require two steps : reduction of the evaluation of the cumulative bivariate normal distribution to evaluation(s ) of a univariate function , i.e. , to the cumulative bivariate normal distribution on the diagonal , and taylor expansion of that function . note that a similar approach , but with reduction to the axes instead of the diagonals , has been proposed by .the resulting algorithm has to be compared with existing approaches . for overview on and discussion of the latter , cf . , , , and .most implementations today will rely on variants of the approaches of or of .improvements of the latter method have been provided by and .the method of , although less reliable , is also very common , mainly because it is featured in and other prevalent books .it will turn out that the algorithm proposed in this paper is able to deliver near double precision ( in terms of absolute error ) using double arithmetic .furthermore , implementation of the algorithm using high - precision libraries is straightforward ; indeed , a quad - double implementation has been applied for testing purposes .performance is competitive , and trade - offs between speed and accuracy may be implemented with little effort .in this section we are going to develop the algorithm . in order to keep the presentation leanwe will often refer to the author s recent survey . for further background on normal distributionsthe reader is also referred to text books such as , and .denote by the density and distribution function of the standard normal distribution .mills ratio is then defined as furthermore , denote by the density and distribution function of the bivariate standard normal distribution with correlation parameter .we will also write we are going to use the following properties : in the following we will assume that , .in this case the following bounds apply ( cf .5.2 ) ) : furthermore , as is proven implicitly in ( * ? ? ?a.2 ) , now we define starting with we find the recursion which we can use to recursively evaluate the taylor expansion of around zero .dividing by for convenience , we define using we derive the following recursion scheme : with initial values here we have used that we can now compute numerically via note that it would also have been possible to work with , e.g. , one of the functions instead .the resulting recursion schemes are in fact easier ( two summands instead of three ) but will be running into numerical problems ( cancellation , or lower accuracy for ) . in order to apply the results from section [ subsec_diagonal ] to the numerical evaluation of for general , and , we start with the symmetric formula ( cf .* eq . ( 3.16 ) ) ) where and from the axis to the diagonal we get by applying the formula ( cf .* eq . 
( 3.18 ) ) ) specifically , we obtain with where in an implementation ( [ eq_ax_rho1 ] ) should be used for , and ( [ eq_ax_rhom1 ] ) for , in order to avoid catastrophic cancellation .note also that in a last step , if necessary to ensure and , we apply the formulas ( cf .* eq . ( 2.15 ) ) and ( * ? ? ?* eq . ( 3.27 ) ) ) specifically , we obtain where in an implementation ( [ eq_bx_rho1 ] ) should be used for , and ( [ eq_bx_rhom1 ] ) for , in order to avoid catastrophic cancellation. it will be favorable to work with instead of .if ( [ eq_minusrho ] ) has to be applied ( i.e. , if , which is equivalent with ) , correspondingly we will work with the following we will discuss implementation of the algorithm derived in section [ sec_theory ] .the c++ language has been chosen because it is the market standard in quantitative finance , one of the fields frequently requiring evaluation of normal distributions .source code ( in c++ ) for evaluation of as in section [ subsec_diagonal ] , for and , is provided in figure [ source_diag ] . in the followingwe will comment on some details of the implementation .equations ( [ eq_rec_1 ] ) - ( [ eq_start_6 ] ) show that it is reasonable to provide , instead of , as input for the evaluation of .moreover , cf .( [ eq_rho_1 ] ) and ( [ eq_rho_2 ] ) , double inversion ( i.e. , computation of instead of ) is to be avoided in the reduction algorithm .values for and for are also expected as input parameters .this makes sense because the values are needed by the reduction algorithm as well ( and hence should not be computed twice ) .evaluation of is to be avoided for and has been replaced ( without optimization of the cutoff point ) by note that has to be computed anyway .constants ( all involving ) have been pre - computed in double precision .the recursion stops if a new term does not change the computed sum .if the a priori bound for the absolute error , given by ( [ eq_bounds ] ) , is less than , the upper bound is returned ( relative accuracy on the diagonal may be increased by dropping this condition but overall relative accuracy will still be determined by the reduction to the diagonal , cf .section [ subsec_reduction_imp ] ) , and by the accuracy of the implementation of .the final result is always checked against the upper and lower bound .note that and have different sign but comparable order .bracketing them before summation can therefore reduce cancellation error . ' '' '' .... 
double phi2diag ( const double & x , const double & a , // 1 - rho const double & px , // phi ( x ) const double & pxs ) //phi ( lambda ( rho ) * x ) { if ( a < = 0.0 ) return px ; // rho = = 1 if ( a > = 1.0 ) return px * px ; // rho = = 0 double b = 2.0 - a , sqrt_ab = sqrt ( a * b ) ; double asr = ( a > 0.1 ?asin ( 1.0 - a ) : acos ( sqrt_ab ) ) ; double comp = px * pxs ; if ( comp * ( 1.0 - a - 6.36619772367581343e-001 * asr ) < 5e-17 ) return b * comp ; double tmp = 1.25331413731550025 * x ; double a_coeff = a * x * x / b ; double a_even = -tmp * a ; double a_odd = -sqrt_ab * a_coeff ; double b_coeff = x * x ; double b_even = tmp * sqrt_ab ; double b_odd = sqrt_ab * b_coeff ; double d_coeff = 2.0 * x * x / b ; double d_even = ( 1.0 - a ) * 1.57079632679489662 - asr ; double d_odd = tmp * ( sqrt_ab - a ) ; double res = 0.0 , res_new = d_even + d_odd ; int k = 2 ; while ( res != res_new ) { d_even = ( a_odd + b_odd + d_coeff * d_even ) / k ; a_even * = a_coeff / k ; b_even * = b_coeff / k ; k++ ; a_odd * = a_coeff / k ; b_odd * = b_coeff / k ; d_odd = ( a_even + b_even + d_coeff * d_odd ) / k ; k++ ; res = res_new ; res_new + = d_even + d_odd ; } res * = exp ( -x * x / b ) * 1.591549430918953358e-001 ; return max ( ( 1.0 + 6.36619772367581343e-001 * asr ) * comp , b * comp - max ( 0.0 , res ) ) ; } .... ' '' '' source code ( in c++ ) for evaluation of as in equation ( [ eq_2axis ] ) is provided in figure [ source_phi ] , and source code for evaluation of is provided in figure [ source_help ] . in the followingwe will comment on some details of the implementation .the special cases and are dealt with in phi2 ( ) .therefore , in phi2help ( ) there is no check against 1.0 - rho = = 0.0 , 1.0 + rho = = 0.0 or s = = 0.0 .it is assumed that sqr(x ) evaluates x*x .the cutoff points have been set by visual inspection and might be optimized . ' '' '' ....double phi2help ( const double & x , const double & y , const double & rho ) { if ( x = = 0.0 ) return ( y > = 0.0 ? 0.0 : 0.5 ) ; double s = sqrt ( ( 1.0 - rho ) * ( 1.0 + rho ) ) ; double a = 0.0 , b1 = -fabs ( x ) , b2 = 0.0 ; if ( rho > 0.99 ) { double tmp = sqrt ( ( 1.0 - rho ) / ( 1.0 + rho ) ) ; b2 = -fabs ( ( x - y ) / s - x * tmp ) ; a = sqr ( ( x - y ) / x / s - tmp ) ; } else if ( rho < -0.99 ) { double tmp = sqrt ( ( 1.0 + rho ) / ( 1.0 - rho ) ) ; b2 = -fabs ( ( x + y ) / s - x * tmp ) ; a = sqr ( ( x + y ) / x / s - tmp ) ; } else { b2 = -fabs ( rho * x - y ) / s ; a = sqr ( b2 / x ) ; } double p1 = phi ( b1 ) , p2 = phi ( b2 ) ; // cum .standard normal double q = 0.0 ; if ( a < = 1.0 ) q = 0.5 * phi2diag ( b1 , 2.0 * a / ( 1.0 + a ) , p1 , p2 ) ; else q = p1 * p2 - 0.5 * phi2diag ( b2 , 2.0 / ( 1.0 + a ) , p2 , p1 ) ; int c1 = ( y / x > = rho ) ; int c2 = ( x < 0.0 ) ; int c3 = c2 & & ( y > = 0.0 ) ; return ( c1 & & c3 ?q - 0.5 : c1 & & c2 ?q : c1 ?0.5 - p1 + q : c3 ?p1 - q - 0.5 : c2 ?p1 - q : 0.5 - q ) ; } .... ' '' '' ' '' '' .... double phi2 ( const double & x , const double & y , const double & rho ) { if ( ( 1.0 - rho ) * ( 1.0 + rho ) < = 0.0 ) //|rho| = = 1 if ( rho > 0.0 ) return phi ( min ( x , y ) ) ; else return max ( 0.0 , min ( 1.0 , phi ( x ) + phi ( y ) - 1.0 ) ) ; if ( x = = 0.0 & & y = = 0.0 ) if ( rho > 0.0 ) return phi2diag ( 0.0 , 1.0 - rho , 0.5 , 0.5 ) ; else return 0.5 - phi2diag ( 0.0 , 1.0 + rho , 0.5 , 0.5 ) ; return max ( 0.0 , min ( 1.0 , phi2help ( x , y , rho ) + phi2help ( y , x , rho ) ) ) ; } .... 
' '' ''evaluation of as in section [ sec_implement ] will require ( at most ) four calls to an implementation of the cumulative standard normal distribution ( phi ( ) in the code ) .the actual choice may well determine both accuracy and running time of the algorithm .for testing purposes i have been using a hybrid method , calling the algorithm from ( * ? ? ?2 ) for absolute value larger than , and phi ( ) from else . besides phi ( ) , exp ( ) will be called two times , arcsin ( ) or arccos ( ) two times , and sqrt ( ) six times .everything else is elementary arithmetic . due to the reduction algorithm, the final result will be a sum .therefore , very high accuracy in terms of relative error can not be expected .consequently , evaluation of the diagonal aims at absolute error as well . the phi2diag ( ) function is behaving as it may be expected from an approximation by a taylor series around zero : ( absolute ) error increases with decreasing . for ( or or )the error bounds from ( [ eq_bounds ] ) are taking over , and absolute error decreases again .the maximum absolute error is obtained for , ( maximum error of the upper bound is obtained for , cf .5.2 ) ) . in general , assuming that all numerical fallacies in the reduction algorithm have been taken care of , the diagonal is expected to provide a worst case because the errors of the two calls to phi2diag ( ) will not cancel . with respect to the reduction algorithm ,the case , , implying , is most critical . in order to give an impression of the algorithm s behaviour, we will discuss the results of a simulation study .for each , , the value of has been computed via the phi2 ( ) function from figure [ source_phi ] where has been drawn from a uniform distribution on ] , and where has been drawn from a uniform distribution on $ ] as well . the c++ implementation from has been serving as a competitor . both functions have been evaluated against a quad - double precision version of phi2 ( ) , implemented using the qd library and quad - double precision constants .the diagram in figure [ diag_error ] is displaying , for , the 99% quantile and the maximum of the absolute difference between the double precision algorithms ( phi2 and west ) and the quad - double precision algorithm . apart from a shift due to subtractions for positive ,errors of phi2 are rather symmetric around zero .the peaks at are due to the taylor expansion around zero ; the peaks at are due to taylor expansion after transformation of the argument .the characteristics of the quantile , in particular the little peaks at , are already visible in the error of the function used .the maximum error of west almost always stays below the one of phi2 .note that the maximum error of west is determined by the case and might be reduced by careful consideration of that case . in the simulation study ,phi2 was a little slower than west : it took approximately five minutes and four minutes to perform the evaluations on a fairly standard office pc ( and it took two days to perform the corresponding quad - double precision evaluations ) .the number of recursion steps used by phi2diag is increasing with . because of the mathematical transparency of the algorithm it should be easy to find an appropriate trade - off between speed and accuracy by replacing the condition terminating the recursion .wang , m. 
, kennedy , w.j .( 1990 ) , _ comparison of algorithms for bivariate normal probability over a rectangle based on self - validated results from interval analysis _ ,journal of statistical computation and simulation 37(1 - 2 ) , pp .
we propose an algorithm for evaluation of the cumulative bivariate normal distribution , building upon marsaglia s ideas for evaluation of the cumulative univariate normal distribution . the algorithm is mathematically transparent , delivers competitive performance and can easily be extended to arbitrary precision .
we consider the coupled rc circuit as shown in fig . [ fig1 ] .two rc circuits of resistances and capacitances and , respectively , are coupled through a third capacitance .the two rc circuits are subject to constant driven currents and , and the voltage differences across the resistors are denoted as and , respectively .the equation of state of this circuit is where , and the two resistors are thermalized at temperature , while voltages across the resistors fluctuate due to the johnson - nyquist ( thermal ) noises and . the noises are assumed to be uncorrelated and gaussian white , and they satisfy the fluctuation - dissipation relation with for , where is boltzmann s constant . via the change of variables , the equation of state can be rewritten as which is mathematically identical to that of a non - driven circuit .the solution of the stochastic equation is in our work we only focus on the steady - state condition , where the first term in eq .[ v_sol ] damps out .using eq .[ thm_fd ] and the fact that is symmetric , time - correlation functions between voltage signals can be derived as and variances and covariances are just their special cases when . using the method of diagonalization, we then derive where and are the eigenvalues of ( is assigned as the larger one ) , , , , and are the element of the matrix . moreover , the correlations between and can be derived as ( no causality ) and .the fokker - planck equation of the full circuit can be shown to be + \frac12 \nabla \cdot \hat{\bf{m}}^{-1 } \hat{\bf{\gamma } } ( \hat{\bf{m}}^{-1})^{t } \nabla p(\vec{v},t)\ , , \ ] ] and the steady - state distribution of is for convenience we use the symbol `` '' to denote that the equality holds up to some normalizing constant that remains invariant in the time - reversal process .note that in the non - driven case the expression reduces to the boltzmann factor of the stored energy in capacitors .the transition probability of the complete description , under an infinitesimal change in time , can be derived as \ , , \label{p_f_all}\ ] ] where /dt ] .thus gaussianity is a sufficient condition of the ft - like behavior .ft is valid if ( in dimensionless units ; in cases where means entropy then ft validates if ) , and for the case where the observed slope is not equal to 1 , ft can be easily restored with the rescaled variable . ] to demonstrate that ft holds for any gaussian random variable whose ratio of variance over mean value is .moreover , with the aid of time - correlation functions , one can also demonstrate the validity of ft over finite - time processes , where , and .in the reduced descriptions , we neglect the signal intentionally , and we would like to check whether ft can still validate with the knowledge of only . note that since the current throught is not measured , the actual dissipation through the resistor is not known , while there exist many methods towards guessing an effective dissipation simply from the time series of . in this workwe adopt two methods . in description ( a ), we treat the time series of as that from a virtual single - rc circuit , and compute the current and therefore dissipation directly following the equation of this simplified circuit . 
and in description ( b ) , an effective dissipation can be derived using the ratio of forward and backward transition probabilities in over infinitesimal timesteps .we first derive the steady - state probability distribution in : where , and is the effective capacitance .based on eq .[ ss_v1 ] , we can develop a naive interpretation ( `` description ( a ) '' ) , where the masked circuit is treated as a single - rc circuit with capacitance and unmodified ad .thus and are neglected intentionally .this effective single - rc circuit can give the correct steady - state distribution in .alternatively , one can regard the time series of as that from a single - rc circuit , as the effective resistance and capacitance can be derived from its power spectrum , which can be shown to be identical with and , respectively . the total entropy change of this _ gedanken _ single - rc circuit , during an infinitesimal timestep , is where the first term on the rhs represents a `` virtual '' dissipation , as is the virtual current going through in this single - rc circuit . for the case of finite - time differencewe have again one finds to be gaussian , while is the average virtual dissipation from . using time - correlation functions , it is straightforward to derive its variance : \bigg\ } \ , .\label{variance_ds1_a}\end{aligned}\ ] ] therefore , ft fails with the adoption of such dissipation function , even at the small- limit . nevertheless , from eq .[ variance_ds1_a ] one finds that this deviation becomes less prominent at large .moreover , one can also show that the deviation from ft diminishes in the weak - coupling regime , as the deviation in variance from is proportional to .beside the above reduced description , one can define the effective dissipation function starting from the forward transition probability of over infinitesimal timesteps .it can be derived by tracing out the degree of freedom in : ^ 2 \right\ } \nonumber \\ & \equiv \exp [ - dt\ , ( \tilde{m } \dot{v_1 } + v'_1)^2 / ( 2 \tilde{\gamma}_1 ) ] \label{p_f_v1}\end{aligned}\ ] ] to the lowest nonvanishing order , where is a two - dimensional vector of elements and .note that this transition probability can be compared to that of a single - rc circuit with the aforementioned effective capacitance , , , and , where . and the renormalized noise amplitude parameter is which is smaller than .since = \exp ( - \tilde{m } v'^{\ , 2}_1 / \tilde{\gamma}_1 ) $ ] , this effective single - rc circuit also gives the correct steady - state probability distribution in . in this effective single - rc circuit , the reversed transition probabilityis derived simply by replacing with and with in eq .[ p_f_v1 ] .the net dissipation and total entropy change are \ , \ \\text{and } \\d\tilde{s}^{\rm{(b)}}_{1\rm{tot } } & = d\tilde{s}^{\rm{(b)}}_{1q } - k_b\ln \frac { p_{\rm{ss}}(v_1(t+dt ) ) } { p_{\rm{ss}}(v_1(t ) ) } = 2 k_b \ , dt v_{1d } ( v_1 - \tilde{m } \dot{v}_1 ) / \tilde{\gamma}_1\ , , \label{ds1_b}\end{aligned}\ ] ] respectively .since is linear in and , the total entropy change is gaussian .one can prove that ft is satisfied for .however , violation still occurs in finite - time processes , where and \bigg\ } \ , .\end{aligned}\ ] ] note that , on average , the reduced description ( b ) gives a larger total entropy change than description ( a ) .
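for contrast with the reduced descriptions, the complete description of a single driven loop is easy to probe by simulation. in dimensionless units (k_b = t = r = c = 1, drive current i) the shifted dynamics read dv = (i - v) dt + sqrt(2) dw, and specializing the description (b) increment to one uncoupled circuit gives ds = i (v dt - dv) in units of k_b; this specialization is an assumption of the sketch, not a result quoted from the text. the gaussian ft criterion used throughout, var(ds_tot) / (2 <ds_tot>) = 1, should then hold at every window length up to discretization and sampling error; the step size, window count and i = 1 are illustrative choices.
....
// numerical sketch for one driven rc loop in dimensionless units
// (k_b = t = r = c = 1): euler-maruyama integration of the shifted
// voltage, accumulating the entropy increment over windows of length tau
// and checking the gaussian ft criterion var/(2*mean) = 1.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> gauss(0.0, 1.0);
    const double I = 1.0, dt = 1e-3;
    const double amp = std::sqrt(2.0 * dt);   // noise amplitude per step
    for (double tau : {1.0, 10.0, 100.0}) {
        const int steps = (int)(tau / dt), windows = 400;
        double v = I, sum = 0.0, sum2 = 0.0;  // start at the mean voltage
        for (int w = 0; w < windows; ++w) {
            double ds = 0.0;
            for (int k = 0; k < steps; ++k) {
                const double dv = (I - v) * dt + amp * gauss(rng);
                ds += I * (v * dt - dv);      // entropy increment, units of k_b
                v  += dv;
            }
            sum += ds; sum2 += ds * ds;
        }
        const double mean = sum / windows;
        const double var  = sum2 / windows - mean * mean;
        std::printf("tau = %6.1f   var/(2*mean) = %.3f\n", tau, var / (2.0 * mean));
    }
    return 0;
}
....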
in this work we perform a theoretical analysis of a coupled rc circuit with constant driving currents. starting from stochastic differential equations in which the voltages are subject to thermal noise, we derive time-correlation functions, steady-state distributions and transition probabilities of the system. the validity of the fluctuation theorem (ft) is examined for scenarios with complete and incomplete descriptions.
the earth orientation is generally considered as (i) earth rotation axis movements in space (precession-nutation), (ii) earth rotation axis movements within the earth (polar motion), or (iii) earth rotation speed variations (excess in the length of the day). these movements originate in the distribution of masses inside the earth. the earth gravity field can give us information about this distribution of masses because nowadays we can determine the variations of the earth gravity field by space geodetic techniques. hence, there is a link between the variations of the earth gravity field and the variations of the earth orientation parameters. the high accuracy now reached in the vlbi (very long baseline interferometry) determination of the earth orientation parameters (eop) requires looking further at the various geophysical contributions to variations in eop. so we investigate here whether this variable gravity field can be valuable for improving the modelling of the earth rotation. the fundamental equations for the rotation of the earth in an inertial frame are euler's dynamical equations, based on the conservation of the angular momentum of the earth under an external torque (lambeck 1980): $$\frac{d\vec H}{dt} = \vec L\,.$$ for a non-rigid earth, these equations in a rotating frame become: $$\frac{d}{dt}\left[ I(t)\,\vec\omega + \vec h(t)\right] + \vec\omega \wedge \left[ I(t)\,\vec\omega + \vec h(t)\right] = \vec L\,,$$ where the inertia tensor $I(t)$ is time dependent, as well as the relative angular momentum $\vec h(t)$, and $\vec\omega$ is the earth instantaneous rotation vector, whose direction is that of the rotation axis and whose norm is the rotation speed. it depends on the earth orientation parameters (eop). the inertia tensor, which is symmetric, can be written as: $$I(t) = \left[ \begin{array}{ccc} A+c_{11} & c_{12} & c_{13} \\ c_{12} & B+c_{22} & c_{23} \\ c_{13} & c_{23} & C+c_{33} \end{array} \right],$$ with $(A,B,C)$ the constant part and $c_{ij}(t)$ the variable part of the inertia tensor. the earth gravity field derives from the external gravitational potential, which is expressed in a spherical harmonic expansion as (lambeck 1980): $$V(r,\phi,\lambda)=\frac{GM}{r}\left[1+\sum_{n=2}^{\infty}\sum_{m=0}^{n}\left(\frac{a_e}{r}\right)^{n}P_{nm}(\sin\phi)\left(C_{nm}\cos m\lambda + S_{nm}\sin m\lambda\right)\right],$$ where $r$ is the geocentric distance, $\phi$ the latitude and $\lambda$ the longitude of the point at which $V$ is determined. $G$ is the gravitational constant, $M$ and $a_e$ are the mass and the equatorial radius of the earth, respectively. $C_{nm}$ and $S_{nm}$ are the stokes coefficients of degree $n$ and order $m$, and $P_{nm}$ are the associated legendre functions.
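as a hedged numerical aside, the expansion above can be evaluated directly once truncated; the sketch below keeps only the degree-2 zonal term (unnormalized stokes coefficient c20 = -j2) and uses standard reference values for the earth constants, quoted purely for illustration.
....
// sketch: evaluating the spherical-harmonic expansion above truncated at
// the degree-2 zonal term. the numerical values are standard reference
// earth constants, quoted here for illustration only.
#include <cmath>
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979323846;
    const double GM  = 3.986004418e14;   // m^3 / s^2
    const double ae  = 6378137.0;        // equatorial radius, m
    const double C20 = -1.08263e-3;      // unnormalized degree-2 zonal coefficient
    const double r   = ae + 400.0e3;     // a point 400 km above the surface
    for (double lat = 0.0; lat <= 90.0; lat += 30.0) {
        const double s   = std::sin(lat * pi / 180.0);
        const double P20 = 0.5 * (3.0 * s * s - 1.0);   // legendre P_20(sin(phi))
        const double V   = (GM / r) * (1.0 + (ae / r) * (ae / r) * C20 * P20);
        std::printf("phi = %2.0f deg : V = %.6e m^2/s^2\n", lat, V);
    }
    return 0;
}
....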
hence the second - degree stokes coefficients can be directely related to the inertia tensor components ( lambeck , 1988 ) : just have shown that the earth rotation ( with and the eop ) could be related to the earth gravity field ( with the degree 2 stokes coefficients ) .then , we investigate now how we can link each eop with these coefficients .the exces in the length of the day ( with respect to a mean lod ) can be related to ( i ) the third component of the variable part of the inertia tensor and ( ii ) the third component of the relative angular momentum of the earth , ignoring the external torques : moreover , with the help of eq .( [ eq : coeffs_c_inertie ] ) , we can write : where is the variation in time of the sum of the diagonal elements of the inertia tensor .we can consider that it is equal to zero ( rochester & smylie 1974 ) .then , we can obtain : where the coefficient accounts for the loading effects and is the third moment of inertia of the earth s mantle ( barnes et al .then we have compared the obtained with eq .( [ eq : lod_fin ] ) and the data in fig .[ fig : bourda_fig2 ] with the one usually used but corrected from zonal tides , atmopheric wind effects ( ) and long terms ( see fig . [fig : bourda_fig1 ] ) .the study of the earth precession nutation angles variations influenced by the temporal variations of the coefficients of the geopotential is developped in the article of bourda & capitaine ( 2004 ) .it is based on the works of williams ( 1994 ) and capitaine et al .( 2003 ) which considered secular terms for the variations , whereas we consider also annual and semi - annual ones . the polar motion , where and are the components of the rotation axis in space can be theoretically related to the degree 2 and order 1 coefficients of the earth gravity field : where is related to and with eq .( [ eq : coeffs_c_inertie ] ) , and .the part of the length of the day obtained with the data corresponds to gravitational terms .then we have compared corrected from the movements terms ( as atmospheric ones ) , the zonal tides and the decadal terms ( from magnetic effects in the core - mantle boundary ) .but the residual term has an amplitude of the order of , whereas the better precision on these lod data is of the order of .barnes , r. t. h. , hide , r. , white , a. a. , & wilson , c. a. 1983 , proc .lond . , a 387 , 31 bourda , g. , & capitaine , n. 2004 , a&a , in press capitaine , n. , wallace , p. t. , & chapront , j. 2003 , a & a , 412 , 567 lambeck , k. 1980 , the earth s variable rotation ( cambridge : cambridge univ . press ) lambeck , k. 1988 , geophysical geodesy : the slow deformations of the earth ( oxford : oxford science publications ) rochester , m. g. , & smylie , d. e. 1974 , j. geophys .res . , 79 , 4948 williams , j. g. 1994 , astron . j. , 108(2 ) , 711
The determination of the Earth gravity field from space geodetic techniques now allows us to obtain the temporal variations of the low-degree coefficients of the geopotential, by combining the orbitography of several satellites (e.g. LAGEOS-1, LAGEOS-2, Starlette). These temporal variations of the Earth gravity field can be related to the Earth Orientation Parameters (EOP) through the inertia tensor. This paper presents these relations and discusses how such geodetic data can contribute to the understanding of the variations in EOP.
In this paper we investigate the potential for the use of incoherently scattered data for 2D reconstruction in X-ray scanning applications. The use of scattered data for image reconstruction is considered in the literature, typically for applications in gamma-ray imaging, where the photon source is monochromatic. However, in many applications (e.g. security screening of baggage) a type of X-ray tube is often used that generates a polychromatic spectrum of initial photon energies (see section [phys] for an example spectrum). There has been recent interest in the use of energy-sensitive detectors in tomography, and in the present paper their application is key to the ideas presented. Our main goal is to show that the electron density may be reconstructed analytically using the incoherently scattered data, and to lay the foundations for a practical reconstruction method based on our theory. We apply our method to a machine configuration commonly used in X-ray CT. In addition, by use of the reconstructed density values in conjunction with an attenuation coefficient reconstruction, we show under the right assumptions that the atomic number of the target is uniquely determined.

For a photon incident upon an electron, Compton (incoherently) scattering at an angle ω with initial energy E, the scattered energy E_s is given by the equation
\[ E_s = \frac{E}{1 + (E/E_0)(1 - \cos\omega)}, \]
where E_0 is the electron rest energy. Equation ([equ1]) implies that ω remains fixed for any given E and E_s. So in the case of a monochromatic source, assuming only single-scatter events, for every fixed measured energy (possible to measure if the detectors are energy-resolved) the locus of scattering points is a circular arc intersecting the source and detector in question (for an example, refer to the cited literature).

In an X-ray tube, a cathode is negatively charged and electrons are accelerated by a large voltage (in the kV range) towards a positively charged target material (e.g. tungsten). A small proportion of the initial electron energy is converted to produce photons. Due to energy conservation, the resulting photon energies are no more than the tube voltage in keV; we denote this maximum photon energy by E_m. So in the polychromatic source case, again assuming only single-scatter events, for each given data set (photon intensity recorded with energy E_s), the set of scatterers lies on a collection of circular arcs intersecting the source and detector points. Together these form a toric section on which the photons scatter, with a maximum scattering angle given by
\[ \omega_{\max} = \arccos\!\left[ 1 - E_0\left( \frac{1}{E_s} - \frac{1}{E_m} \right) \right]; \]
see figure [fig1] below.

[Figure 1: a toric section with tips at the source S and detector D; the single-scatter sites for a fixed measured energy lie on a pair of circular arcs through S and D, intersecting the target T.]

In the present paper we consider a setup consisting of a ring of fixed energy-sensitive detectors and a single rotating fan-beam polychromatic source; see figure [fig3].
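As a quick numerical companion to the two equations above, the sketch below computes the scattered energy, the maximum scattering angle for a measured energy, and, using the inscribed-angle theorem, the radius of the circular arc through a source and detector on which the scatterers must lie. The keV values in the example are illustrative assumptions.

```python
import numpy as np

E0 = 511.0  # electron rest energy [keV]

def scattered_energy(E, omega):
    """Compton formula, eq. ([equ1]): scattered energy [keV] for incident
    energy E [keV] and scattering angle omega [rad]."""
    return E / (1.0 + (E / E0) * (1.0 - np.cos(omega)))

def max_scatter_angle(E_s, E_m):
    """Largest scattering angle consistent with a measured energy E_s when
    the source spectrum is cut off at E_m (set E = E_m in eq. ([equ1]))."""
    return np.arccos(1.0 - E0 * (1.0 / E_s - 1.0 / E_m))

def arc_radius(d_SD, omega):
    """Radius of the circular arc through source and detector on which the
    single-scatter sites for deflection angle omega lie: the chord SD
    subtends an inscribed angle pi - omega, so R = d_SD / (2 sin omega)."""
    return d_SD / (2.0 * np.sin(omega))

# Example with illustrative numbers: a 160 keV photon deflected by 60 degrees
print(scattered_energy(160.0, np.deg2rad(60.0)))    # ~138 keV
print(np.rad2deg(max_scatter_angle(100.0, 160.0)))  # widest arc for E_s = 100 keV
print(arc_radius(1.0, np.deg2rad(60.0)))            # R for unit source-detector distance
```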
with this setup we can measure photon intensity in the dark field .we image an electron density compactly supported within the detector ring ( the blue and green circle in figure [ fig3 ] ) , with .( 0,0 ) circle [ radius=0.7 ] ; ; ( -0.3,-0.2 ) rectangle ( 0.3,0.2 ) ; ( 0.651,-0.546 ) circle [ radius=0.8 ] ; at ( -0.05,-1.08 ) ; at ( 0.7,0.34 ) ; at ( 0.651 + 0.8,0 ) ; at ( -0.15,0.1 ) ; at ( 0.2,-0.4 ) r ; ( 0,-1)(0,0.7 ) ; ( 0,-1)(0.1,0.6928 ) ; ( 0,-1)(-0.1,0.6928 ) ; ( 0,-1)(0.2,0.6708 ) ; ( 0,-1)(-0.2,0.6708 ) ; ( 0,-1)(0.3,0.6324 ) ; ( 0,-1)(-0.3,0.6324 ) ; ( 0,-1)(0.4,0.5744 ) ; ( 0,-1)(-0.4,0.5744 ) ; ( 0.9,0.9)(0.3,0.6324 ) ; at ( 1.1,1 ) light field ; ( -0.9,-0.9)(-0.3,-0.6324 ) ; at ( -1.1,-1 ) dark field ; ( 0.03,-0.1)->(0.65,0.255 ) ; ( -0.05,-0.4)->(0.65,0.255 ) ; plot ( , sqrt(0.49-pow(,2 ) ) ) ; plot ( , -sqrt(1-pow(,2 ) ) ) ; if we assume an equal scattering probability throughout the region leaving only the electron density to vary , and if we assume that the majority of scattering events occur within r , then in this case the integral of over is approximately determined by the scattered intensity recorded at the detector with some fixed energy .see the appendix for an example application where these approximations are valid . with these assumptions and with suitable restrictions on the support of , we aim to reconstruct from its integrals over discs whose boundaries intersect a fixed point , namely the source at a given position along its scanning path . in section [ sec1 ], we present a disc transform and go on to prove our main theorem ( theorem [ th1 ] ) , which explains the relationship between our transform and the straight line radon transform . as a corollary to this theorem , with known results on the radon transform, we show that a unique solution exists on the domain of smooth functions compactly supported on an annulus centred at the origin . here based on the criterion of natterer in and using the theory of sobolev space estimates , we determine a measure for the ill posedness of our problem . in section [ phys ] , we discuss a possible means to approximate the physical processes such as to allow for the proposed reconstruction method .here we also present a least squares fit for the total cross section ( scattering plus absorbtion ) in terms of ( the atomic number ) . from this , we show that is uniquely determined by the attenuation coefficient and electron density .in section [ res ] we apply our reconstruction formulae to simulated data sets , with varying levels of added pseudo random noise .this is applied to the given machine configuration .we recover a simple water bottle cross section image ( a circular region of uniform density 1 ) and reconstruct the atomic number in each case using the curve fit presented in section [ phys ] . 
to give an example reconstruction of a target not of uniform density , we also present reconstructions of a simulated hollow tube cross section .in this section we aim to recover a smooth function compactly supported on an annulus centred at the origin from its integrals over discs whose boundaries intersect ( the given source position ) .let denote the set of points on the disc whose boundary intersects the origin , with centre given in polar coordinates as .see figure [ figure8 ] .let be the set of smooth functions on and let denote the set of smooth functions compactly supported on .let and for a function in the plane , let be defined as .then we define the disc transform as : ( -0.5,0)(1.5,0 ) ; ( 0.5,0.5 ) circle [ radius=0.707 ] ; ( 0,0)(1,1 ) ; at ( 0.6,0.5 ) ; at ( 0.17,0.08 ) ; at ( -0.1,-0.1 ) ; at ( 1.3,0.8 ) ; [ figure8 ] after making the change of variables : in equation ( [ equdef1 ] ) , we have : we now present further definitions which will be important in the following subsection ( section [ sob ] ) , where we provide our sobolev space estimates .let denote the unit cylinder in . then we define as follows : which is piecewise continuous as a function of .we can remove this discontinuity by adding the function : where .we define as : let be the set of points on a line. then we define the radon transform as : we are now in a position to prove our main theorem , where we give the explicit relation between and the radon transform for smooth functions on an annulus .[ th1 ] let be the annulus centred on with inner radius and outer radius .let for some and let be defined as . then .let and be defined as and . then from our definition of , we have .now we have : and hence .so the partial derivative of with respect to exists and is continuous for all , and .we now aim to prove injectivity of the disc transform on the domain of smooth functions compactly supported on an annulus .first we state helgason s support theorem .let be a compact convex set in and let be continuous on . if for all and such that and is rapidly decreasing , in the sense that : then for all . [ cor1 ]let for some , and let \} ] be such that \times s^1\right)}<\epsilon ] , we have : for any with for .we can interpret this last corollary to mean that given some erroneous data which differs in the least squares sense from absolutely by , the least squares error in our solution is bounded above by for some constant with the a - priori knowledge that .in natterer uses the value as a measure for the ill posedness of his problem and gives his criteria for a linear inverse problem to be modestly , mildly or severely ill posed .if we set close to , then based on these criteria the above arguments would suggest that our problem is mildly ill posed , but more ill posed than the inverse radon transform , which we would expect given that the disc transform is a degree smoother than .( 0,0 ) circle [ radius=0.7 ] ; ( -1.3,-1)(1.3,-1 ) ; at ( -0.1,-1.1 ) ; ( 0,-1)(0,1 ) ; at ( 0.1,1 ) ; at ( 1.3,-1.1 ) ; ( -1.3,0)(1.3,0 ) ; ( 0.04,-1)(0.04,-0.7 ) ; ( -0.04,-1)(-0.04,0.7 ) ; at ( -0.1,-0.1 ) ; at ( 0.1,-0.85 ) ; at ( 0.6,-0.6 ) ; ( 0,0)(0.3,0.632 ) ; at ( 0.13,0.08 ) ; at ( 0.4,0.7 ) ; another source of error in our solution can be due to limited sampling of the data . 
in practicethe number of detectors will be finite .let us parameterize the set of points on the detector ring in terms of a polar angle , and let the finite set of polar angles determine a finite set of detector positions .see figure [ fig0 ] .then for every ] let ] , then there exists a constant such that : let be defined as in theorem [ th1 ] and let be the seminorm defined in lemma [ lemma10 ] .let for some integer and .then we have : \times s^1\right)}&=\int_{s^1}\|\mathcal{d}f_{\phi}\|^2_{l^2\left([-1,1]\right ) } \mathrm{d}\phi\\ & \leq c^2 h^{2\alpha+3}\int_{s^1}|\mathcal{d}f_{\phi}|^2_{h^{\alpha+3/2}\left([-1,1]\right ) } \mathrm{d}\phi\\ & \leq c^2 h^{2\alpha+3}\int_{s^1}\iint_{[-1,1]\times [ -1,1]}\frac{|\frac{\partial^m}{\partial p^m } \mathcal{d}f_{\phi}-\frac{\partial^m}{\partial p^m } \mathcal{d}f_{\phi}|^2}{|x - y|^{n+2\sigma}}\mathrm{d}x\mathrm{d}y \mathrm{d}\phi\\ & = c^2 h^{2\alpha+3}\int_{s^1}\iint_{[-1,1]\times [ -1,1]}\frac{|\frac{\partial^{m-1}}{\partial p^{m-1 } } r\tilde{f}_{\phi}-\frac{\partial^{m-1}}{\partial p^{m-1 } } r\tilde{f}_{\phi}|^2}{|x - y|^{n+2\sigma}}\mathrm{d}x\mathrm{d}y \mathrm{d}\phi\\ & = c^2 h^{2\alpha+3}\int_{s^1}|r\tilde{f}_{\phi}|^2_{h^{\alpha+1/2}\left([-1,1]\right ) } \mathrm{d}\phi\\ & \leq c^2 h^{2\alpha+3}\int_{s^1}\|r\tilde{f}_{\phi}\|^2_{h^{\alpha+1/2}\left([-1,1]\right ) } \mathrm{d}\phi\\ & = c^2 h^{2\alpha+3}\|r\tilde{f}\|^2_{h^{\alpha+1/2}\left(z\right)}\\ & \leq c_1\left(\alpha\right)^2 h^{2\alpha+3}\|\tilde{f}\|^2_{h^{\alpha}\left(\mathbb{r}^2\right)}\\ & \leq c_2\left(\alpha\right)^2 h^{2\alpha+3}\|f\|^2_{h^{\alpha}\left(\mathbb{r}^2\right)}\\ & \leq c_2\left(\alpha\right)^2 h^{2\alpha+3}\rho^2\\ \end{split}\ ] ] for with . applying theorem [ the2 ], we have : \times s^1\right)}\\ & \leq c\left(\alpha\right)h^{\alpha}\rho \end{split}\ ] ] which completes the proof .this last result tells us that given a finite set of detectors with a disc diameter sampling determined by equation ( [ sample ] ) and with being a measure of the uniformity of the sample , the least squares error in our solution is bounded above by with the a - priori knowledge that for some .in this section we present an accurate physical model and a possible approximate model which allows for the proposed reconstruction method .we consider an intensity of photons scattering from a point as illustrated in figure [ figurep ] .( -1,-1)(0,0 ) ; ( 0,0)(1,1 ) ; ( 0,0)(1.3,-0.2 ) ; at ( 0.25,0.1 ) ; at ( -1.1,-1.1 ) ; at ( 1.4,-0.3 ) ; at ( -0.6,-0.3 ) ; at ( 1.3,0 ) ; at ( -0.1,0.1 ) ; at ( 1.15,1.1 ) ; the intensity of photons scattered from to with energy is : where is the initial intensity , which depends on the energy ( see figure [ figspec ] for an example polychromatic spectrum ) . is the linear attenuation coefficient , which is dependant on the energy and the atomic number of the target material . here is the number of electrons in a volume around the scattering point .so ( number of electrons per unit volume ) is the quantity to be reconstructed . and are the line segments connecting to and to respectively .the klein - nishina differential cross section , is defined by: where is the classical electron radius .this predicts the scattering distribution for a photon off a free electron at rest .given that the atomic electrons typically are neither free nor at rest , a correction factor is included , namely the incoherent scattering function . 
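Before the incoherent correction is detailed in the next paragraph, a small numerical sketch of the standard Klein-Nishina factor just introduced may help; the incoherent scattering function S(x, Z) that corrects it is left as a stub (the least-squares fit used in the text is not reproduced), and the momentum-transfer helper anticipates the definition of x given below. The unit choices are assumptions of this sketch.

```python
import numpy as np

r_e = 2.8179403e-15  # classical electron radius [m]
E0 = 511.0           # electron rest energy [keV]
hc = 1.23984193      # h*c in keV*nm

def klein_nishina(E, omega):
    """Standard Klein-Nishina differential cross section [m^2/sr] for a
    photon of energy E [keV] scattering by omega [rad] off a free electron
    at rest: (r_e^2 / 2) P^2 (P + 1/P - sin^2 omega), with P = E_s / E."""
    P = 1.0 / (1.0 + (E / E0) * (1.0 - np.cos(omega)))
    return 0.5 * r_e**2 * P**2 * (P + 1.0 / P - np.sin(omega) ** 2)

def momentum_transfer(E, omega):
    """x = (E / hc) sin(omega / 2); here in nm^-1 (unit choice assumed)."""
    return (E / hc) * np.sin(omega / 2.0)

def incoherent_scattering_function(x, Z):
    """Stub for S(x, Z); the text fits S for an average Z by least squares,
    which we do not reproduce here.  Note S -> Z in the high-x limit."""
    raise NotImplementedError

# The corrected cross section would be klein_nishina(E, w) * S(x, Z)
print(klein_nishina(100.0, np.deg2rad(60.0)))
```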
here is the momentum transferred by a photon with initial energy : scattering at an angle , where is planck s constant and is the speed of light .the scattering function also depends on the atomic number , so we set to some average atomic number as an approximation . for ( rhodium )we have the expression : to acquire equation ( [ equ16 ] ) we have extended the least squares fit given in to the values of given in .the solid angle subtended by and is defined : where , is the detector area and is the unit vector normal to the detector surface . given our machine geometry and proposed reconstruction method , it is difficult to include the more accurate model stated above as an additional weighting to our integral equations ( as in done in for example ) while allowing for the same inversion formulae .so we average equation ( [ equ17 ] ) over the scattering region , for each and .here and are as defined in section [ sec1 ] , where is fixed depending on the machine specifications .let . here depends on the scattering point and and as defined in section [ sec1 ] .when , and determine the detector position and the measured energy .we have : which gives the average of over . here denotes the area of .let be an example density with support contained in , and let : be the scattered intensity measured for a constant density over , where is the ( constant ) slice thickness .then if we assume that the scattering probability is constant and equal to throughout each scattering region , the absolute error in our approximation would satisfy : for all . here is the intensity of photons we measure .so provided that the range of the density values is small over the majority of scattering regions considered , the averaged model given above will have a similar level of accuracy to the more precise model given in equation ( [ equ17 ] ) . if the linear attenuation coefficient is known a - priori , then the exponential terms of equation ( [ equ17 ] ) may be included in .otherwise we may approximate : where is the line segment from to the detector in the forward direction ( see figure [ figurep ] ) .this is the approximation made in . by the beer - lambert law, we have : where is the recorded straight through intensity . to account for the physical modelling, we would divide the data by to calculate approximate values for and hence for . with the proposed machine configuration, we can show that the data collected in the light field determines the linear attenuation coefficient uniquely ( this is the standard 2d reconstruction problem ) . with the additional information provided by our theory , we show under the right assumptions that the atomic number of the target is determined uniquely by the full data ( light plus dark field ) .the electron density and the linear attenuation coefficient are related via the formula : where is the total cross section per electron .the cross section is continuous and monotone increasing as a function of on ] and for .these were calculated using the exact formula for the area of intersection of two discs .we approximate the derivative of with respect to as the finite difference : for a chosen step size . to reconstruct apply the matlab function iradon " , which filters ( choosing from a selection of filters pre - coded by matlab ) and backprojects the projection data to recover .we then make the necessary change in coordinates to produce our density image . 
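The pipeline just described, finite differencing the disc data in the radial variable and then filtering and backprojecting, can be sketched in Python with scikit-image's iradon standing in for the MATLAB function named above. The grid shapes, and the placement of a single derivative in p (suggested by the relation between the disc transform and the Radon transform in theorem [th1]), are assumptions of this sketch rather than the authors' exact implementation.

```python
import numpy as np
from skimage.transform import iradon  # plays the role of MATLAB's iradon

def reconstruct_slice(disc_data, p, theta_deg):
    """disc_data[i, j]: disc-transform samples D f(p_i, phi_j).
    Differentiating once in p converts disc data into Radon-type line data
    for f-tilde (cf. theorem [th1]); that data is then ramp-filtered and
    backprojected.  A final change of coordinates would recover f itself."""
    h = p[1] - p[0]                                    # chosen step size
    radon_data = (disc_data[1:, :] - disc_data[:-1, :]) / h
    return iradon(radon_data, theta=theta_deg)         # this is f-tilde

# Illustrative call on an empty 181 x 180 sinogram-style grid
p = np.linspace(-1.0, 1.0, 181)
theta = np.arange(180.0)
f_tilde = reconstruct_slice(np.zeros((p.size, theta.size)), p, theta)
```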
in the absence of noise we find our results to be satisfactory .see figure [ figure7 ] .let us now perturb the calculated values of slightly such as to simulate random noise .we multiply each exact value of by a pseudo random number in the range ] ) .our results are presented in figures [ figure16 ] and [ figure15 ] . herewe see an improvement in the signal - to - noise - ratio . the rotational symmetry of about the centre of the circular region of s support is also recovered .let be the average of the non zero pixel values shown in the left hand image of figure [ figure16 ] and let be the effective atomic number for water .then we can calculate and using equation ( [ equ30 ] ) we can calculate the total cross section to be : for assuming no additional error . based on our curve fit for , this would yield a reconstructed atomic number of , which differs from the accepted value by . for the remaining averaged density reconstructions the and valuesare given in the figure caption .we have presented reconstructions of a density which is homogeneous where it is not known to be zero . to give an inhomogeneous example ,we have presented reconstructions with varying levels of added noise of a simulated hollow tube cross section in figures [ figure17 ] and [ figure18 ] .we can summarize our method as follows : 1 .measure the scattered intensity energy and divide by and the slice thickness to calculate values for .2 . smooth the data sufficiently and apply approximation ( [ app1 ] ) to calculate values for .3 . reconstruct by filtered backprojection and recover from the definition given in theorem [ th1 ] .4 . average over a number of source views to improve the image quality and set to outside its support . is simulated as a circular region of uniform density on the left .the function as defined in theorem [ th1 ] is shown on the right.,title="fig : " ] is simulated as a circular region of uniform density on the left .the function as defined in theorem [ th1 ] is shown on the right.,title="fig : " ] in the absence of added noise is shown on the left .we have backprojected from 180 views with the default ram - lak cropped filter .the corresponding pixel values of are presented on the right .both and are set to outside of their support ., title="fig : " ] in the absence of added noise is shown on the left .we have backprojected from 180 views with the default ram - lak cropped filter .the corresponding pixel values of are presented on the right .both and are set to outside of their support ., title="fig : " ] for with random noise added . on the rightwe have applied a simple moving average filter to the simulated data and taken a subsample of the smoothed data before interpolating as specified earlier .the exact values are presented alongside the fitted values in the right hand figure.,title="fig : " ] for with random noise added . 
[Figure captions for this results section, condensed: for the noisy data, a simple moving average filter was applied to the simulated data and a subsample of the smoothed data taken before interpolating as specified earlier, with the exact values presented alongside the fitted values in the right-hand panel; the reconstructions after smoothing, at increasing levels of added noise, were backprojected from 180 views with the standard ramp filter multiplied by a Hamming window to reduce high-frequency noise, and the corresponding pixel values are displayed alongside each image; the averaged reconstructions are shown with and without added noise, with the reconstructed atomic number value given in each case.]
on the rightis an averaged reconstruction of with no noise added to each dataset.,title="fig : " ] with and added noise in the left and right hand images respectively.,title="fig : " ] with and added noise in the left and right hand images respectively.,title="fig : " ]we have proposed a new fast method to determine the electron density in x - ray scanning applications , with a fixed energy sensitive detector machine configuration where it is possible to measure photon intensity in the dark field .we have shown that the density may be reconstructed analytically using the compton scattered intensity .this method does not require the photon source to be monochromatic as is the case in recent literature , which is important from a practical standpoint as it may not be reasonable to assume a monochromatic source in some applications .also if the source is monochromatic we can not gain any insight into the energy dependence of the attenuation coefficient , which would rule out the recent advances in image rendering , where a combination of multivariate and cluster analysis can be used to render a colour x - ray image . using sobolev space estimates , we have determined an upper bound for the least squares error in our solution in terms of the least squares error in our data .this work is based on the approach taken by natterer in .we have shown , under the right assumptions , that the atomic number of the target is determined uniquely by the full data . with this theory in place we intend to pursue a more practical means to reconstruct the atomic number , as the graph reading method used in the present paper was ineffective in giving an accurate reconstruction for . we summarize our method to recover the density image in section [ res ] and we reconstruct a simulated water bottle cross section via a possible practical implementation of this method . in this simple case the smoothing method ( simple moving average ) applied was effective and we were able to reconstruct a circular cross section of approximately uniform density .although in the presence of noise the pixel values of our reconstructed density image on average differed from the original values by as much as .we have also provided reconstructions of a simulated hollow tube cross section . in this casethe inner edge of the tube cross section appeared quite blurred in the reconstruction when noise was added to the simulated data .we performed a number of trial reconstructions with different randomly generated datasets .the results presented in this paper are typical of our trial results .we hope also to test our methods through experiment .for example , if we were to take an existing x - ray machine of a similar configuration to that discussed in the present paper , and attach energy sensitive detectors alongside the existing detectors or if we were to replace them , then we could see how closely our forward problem models the intensity of photons measured in the dark field in practice .i would like to thank my ph.d .supervisor prof william lionheart for his guidance and inspiration .the author is also grateful to prof robert cernik for his helpful comments and discussion regarding energy sensitive detectors , and to dr ed morton and dr tim coker of rapiscan systems for information on baggage scanning .this work has been funded jointly by the epsrc and rapiscan systems .the rtt80 ( real time tomography ) x - ray scanner is a switched source , offset detector ct machine designed with the aim to scan objects in real time . 
developed by rapiscan systems , the rtt80 is currently used in airport security screening of baggage .the rtt80 consists of a single fixed ring of polychromatic x - ray sources and multiple offset rings of detectors , with a conveyor belt and scanning tunnel ( within which the scanned object would be placed ) passing through the centre of both sets of rings .see figure [ figrtt ] .if the detectors are energy sensitive , then in this case we have the problem of reconstructing a density slice supported within the scanning tunnel from its integrals over toric sections , with tips at the source and detector locations .we wish to check whether it is reasonable to approximate a set of toric section integrals as integrals over discs whose boundaries intersect a given source point , as then we can apply our proposed reconstruction method to reconstruct the density slice analytically .let us refer to figure [ figrtt1 ] and let be defined as in section [ sec1 ] .we define the toric sections , , and .let denote the area of a set and let denote the set of points within our roi ( region of interest , i.e the scanning tunnel ) . for a large sample of discs , we will check for every disc in the sample , whether such that .let be defined as in section [ sec1 ] .then if we consider the machine specifications for the rtt80 , we can calculate and the difference in radius between the detector ring and the scanning tunnel to be .see figure [ figrtt ] . for our test, we consider a sample of 36000 discs with diameters for and for .we have chosen ] values in a range sufficient to determine a unique density slice image for densities supported on .refer to corollary [ cor1 ] . for each of our chosen and value pairs ,the difference : was found to be negligible .let be an example density slice with support contained in .then for any disc in our sample we have : which holds for some .so , the integral of over is equal to at least one of four toric section integrals over . assuming also that there is little error implied by our physical approximations ( these are discussed in detail in section [ phys ] ) , the integral ( [ intd ] )would be determined approximately by at least one of four data sets , namely the photon intensity measured for two possible energy levels at two possible detector locations ( or ) .thus , given that the inverse disc transform is only mildly ill posed ( this was determined to be the case in section [ sob ] , based on the criteria given by natterer in ) , it seems that we should be able obtain a satisfactory density image reconstruction in this application . in airport baggage screening , we are interested in identifying a given material as either a threat or non - threat .let be the electron density and let denote the effective atomic number .we define the threat space to be the set of materials with , where $ ] is the class of threat pairs . for a given suspect material, we can apply the methods presented in this paper to reconstruct and .then if , we can identify the suspect material as a potential threat .we note that although we failed to obtain an accurate reconstruction in the present paper , we aim to show that a more precise determination of is possible in future work . 
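The threat test described above reduces to a simple membership check on the reconstructed pair (electron density, effective atomic number). The exact membership condition is elided in the text, so the component-wise tolerance criterion and the numbers in the sketch below are illustrative assumptions.

```python
import numpy as np

# Hypothetical list of threat pairs (electron density [arb. units], effective Z)
THREATS = np.array([
    [3.5e23, 7.6],
    [4.1e23, 8.1],
])

def is_potential_threat(n_e, z_eff, tol=(0.2e23, 0.5)):
    """Flag a suspect material whose reconstructed (n_e, Z) lies within a
    component-wise tolerance of some known threat pair (assumed criterion)."""
    close = (np.abs(THREATS[:, 0] - n_e) <= tol[0]) & \
            (np.abs(THREATS[:, 1] - z_eff) <= tol[1])
    return bool(close.any())

print(is_potential_threat(3.6e23, 7.4))  # True with these illustrative numbers
```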
Also, the reconstruction methods we have presented should be fast to implement, as they are largely based on the filtered back-projection algorithm. This is important in an application such as airport baggage screening, since we require the threat detection method not only to be accurate in threat identification, but also to be an efficient process.

[Figures: the RTT80 geometry, showing the source ring, the offset detector ring, the scanning tunnel, the conveyor belt and the scanned object, with the ring and tunnel radii marked; and the disc/toric-section construction used in this appendix, comparing discs through the source point against the corresponding toric sections within the region of interest.]

V. P. Palamodov, "An analytic reconstruction for the Compton scattering tomography in a plane", Inverse Problems 27 (2011) 125004 (8pp).
V. Maxim, M. Frandes, R. Prost, "Analytical inversion of the Compton transform using the full set of available projections", Inverse Problems 25 (2009) 095001 (21pp).
"Compton scattering tomography", J. Appl. Phys. 76, 2007-15 (1994).
Q. Xu, H. Yu, J. Bennett, "Image reconstruction for hybrid true-color micro CT", IEEE Trans. Biomed. Eng. 59(6), 1711-1719 (2012).
Egan, S. D. M. Jacques, R. J. Cernik, "Multivariate analysis of hyperspectral hard X-ray images", Wiley, 42, 151-157 (2012).
Helgason, S., "Groups and Geometric Analysis", Academic Press, Orlando (1984).
Palinkas, G., "Analytic approximations for the incoherent X-ray intensities of the atoms from Ca to Am", Acta Cryst. A29, 10 (1973).
Hubbell, J. H., et al., "Atomic form factors, incoherent scattering functions and photon scattering cross sections", J. Phys. Chem. Ref. Data, Vol. 3 (1975).
Wm. J. Veigele, "Photon cross sections from 0.1 keV to 1 MeV for elements", Atomic Data Tables, 5, 51-111 (1973).
D. F. Jackson, D. J. Hawkes, "An accurate parametrisation of the X-ray attenuation coefficient", Phys. Med. Biol. 25, 1167 (1980).
Wadeson, N., "Modelling and correction of scatter in a switched source multi-ring detector CT machine", PhD thesis, University of Manchester, UK (2012).
F. Natterer, "The Mathematics of Computerized Tomography", SIAM (2001).
We lay the foundations for a new fast method to reconstruct the electron density in X-ray scanning applications using measurements in the dark field. This approach is applied to a type of machine configuration with fixed energy-sensitive (or resolving) detectors, where the X-ray source is polychromatic. We consider the case where the measurements in the dark field are dominated by the Compton scattering process. This leads us to a 2D inverse problem in which we aim to reconstruct an electron density slice from its integrals over discs whose boundaries intersect the given source point. We show that a unique solution exists for smooth densities compactly supported on an annulus centred at the source point. Using Sobolev space estimates, we determine a measure for the ill-posedness of our problem based on the criterion given by Natterer. In addition, with a combination of our method and the more common attenuation coefficient reconstruction, we show under certain assumptions that the atomic number of the target is uniquely determined. We test our method on simulated data sets with varying levels of added pseudo-random noise.
the promise of a qualitative advantage of quantum computers over classical ones in solving certain classes of problems has led to a massive effort in theoretical and experimental investigation of controlled , quantum - coherent systems .the standard circuit model ( cm ) of quantum computing is analogous to classical computing in the sense of requiring a sequence of logic gate operations .however , the requirement of precise time - dependent control of individual qubits in the quantum case is hard to achieve experimentally while still maintaining the quantum coherence of the system .a number of alternative approaches have been proposed , of which _ adiabatic quantum computing _ ( aqc ) is a promising example .this involves the evolution of a quantum system from a simple hamiltonian with an easily - prepared ground state to a hamiltonian that encodes the problem to be solved , and whose ground state encodes the solution .if the system is prepared in the initial ground state and the time evolution occurs slowly enough to satisfy the adiabatic theorem , the final state will have a large overlap with the ground state .measurement in the computational basis will then yield the desired solution with high probability .several authors have demonstrated polynomial equivalence between aqc and the cm , mapping the latter onto an aqc with -local interactions between qubits or -local interactions between -state qudits in two dimensions , or with -local interactions between qubits on a two - dimensional lattice ( but requiring two or more control hamiltonians ) . despite these proofs of equivalence between aqc and cm ,it is clear that there are classes of problems more suited to one or the other ; in addition , aqc is believed to be more robust against decoherence , although the effects of decoherence and noise imply an optimal computation time , beyond which errors increase .the type of problems most suited to aqc include optimization problems , where the requirement is to find the global minimum of a cost function , and the related decision problems , where the requirement is to demonstrate the existence of a good solution obeying for some specified .thus the existence proof of a polynomial - time aqc implementation of shor s prime factorization algorithm does not help practical implementation : one rather starts afresh and maps factorization onto an optimization problem , as in the recent nmr factorization of .simulations of the travelling salesman problem show faster decay of residual energy ( i. e. , tour length ) through aqc than through classical simulated annealing , although other classical algorithms are faster .applications have also been found in graph theory , most recently in the evaluation of ramsey numbers .the task of aqc is to find the ground state of a hamiltonian ; this hamiltonian encodes the problem under consideration and its ( unknown ) eigenvalues determine the cost function . 
a hamiltonian interpolates between a simple initial hamiltonian , , at time and the desired final hamiltonian , , at the end of the computation .many interpolation schemes have been considered , which may optimize final - state fidelity but require some knowledge of the energy - level structure or phase cancellation .we therefore restrict consideration to the simple linear interpolation where $ ] is the reduced time .the eigenvalues and eigenstates of the hamiltonian of an -qubit system are given by the instantaneous state of the system is given by , the solution of schrdinger s equation , which in reduced time ( and ) reads the system is prepared in the ( non - degenerate ) ground state of : .at the end of the evolution a suitable figure of merit is the closeness of the state vector , , to the desired result , .this is provided by the _ success probability _ the subscript , denoting the number of qubits , will be omitted except where a distinction needs to be made . in practical optimization problems , a low - cost solution that isnot necessarily the global optimum often suffices . herethe _ energy error _ is a suitable figure of merit .approximate adiabatic quantum computing ( aaqc ) aims to reduce this error . forsome purposes other characterizations of the final - state distribution may be more appropriate figures of merit . in the present work we shall concentrate on the success probability .we require parameters to specify the hamiltonian .one of the aims of this work is to investigate to what extent the success probability ( [ eq : p ] ) can be approximated as a function , where is a small ( -independent ) number of parameters characterizing the initial and final hamiltonians .the most important dependence is expected to be on the _ minimum gap _ between ground state and first excited state which occurs at the reduced time(s ) : while it has long been known that this probability tends to unity for slow evolution : the precise statement of this _ adiabatic theorem _ has been the subject of much debate in recent years .the original statement in the context of aqc was that the adiabatic condition where guarantees to be very close to .while this only considers transitions into the first excited state , such transitions will dominate in most situations .et al _ derived such a result , with taken as the maximum over all matrix elements to excited states .if is considered constant ( of the order of a typical eigenvalue of ) , determines the required .it is however sufficient ( see , for example , ref . ) to require an evolution time for all excited states .( in the present context we are restricting consideration to evolution of the ground state . )some authors have claimed counterexamples to the above criterion .however , these counterexamples include a resonant term , which is absent from our interpolating hamiltonian ( [ eq : aqch ] ) . for practical purposesthe knowledge that the success probability tends to unity in the infinite - time limit is of less interest than knowledge of parameters governing success for finite evolution times ; it is this question that motivates our study . 
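One common reading of the adiabatic condition quoted above, namely that the run time should satisfy T >> max_s |<E1(s)| dH/ds |E0(s)>| / g(s)^2, is straightforward to evaluate numerically for a small system by dense diagonalization. A minimal sketch, with dH/ds = H_P - H_B for the linear interpolation:

```python
import numpy as np

def adiabatic_time_scale(H_B, H_P, n_grid=201):
    """Evaluate max_s |<E1| dH/ds |E0>| / g(s)^2 on a grid of s for
    H(s) = (1 - s) H_B + s H_P, with dH/ds = H_P - H_B.  Also returns the
    minimum gap g_min for reference."""
    dH = H_P - H_B
    t_scale, g_min = 0.0, np.inf
    for s in np.linspace(0.0, 1.0, n_grid):
        w, v = np.linalg.eigh((1.0 - s) * H_B + s * H_P)
        gap = w[1] - w[0]
        elem = abs(v[:, 1].conj() @ dH @ v[:, 0])
        t_scale = max(t_scale, elem / gap ** 2)
        g_min = min(g_min, gap)
    return t_scale, g_min
```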
the minimum gap is usually considered to be the dominant parameter determining the success probability for a given evolution time .these two variables , and , are both used in the literature to quantify the performance of a given computation , and are assumed to increase monotonically with each other .the question of how either of these variables varies with system size is an important one that is often addressed .however , the exact nature of the correlation between these two important figures of merit has not been fully explored .we explore the relationship between and by looking at the statistical distributions of these two variables over an ensemble of problem hamiltonians ( ) for fixed computation times .we start by considering a simple two - qubit system and show that a rich structure arises in the scatter plots of success probability against .we then go on to look at the scatter plots in three- , four- and five - qubit systems and find that , although some of the finer details of the structure are washed out , some remain .we wish to look at the distribution of the success probability and over a large set of problem hamiltonians .we use a simple , yet generic , model that is scalable and can be readily solved numerically .for we use a transverse field of unit magnitude acting on all the qubits : where denote the usual pauli matrices , is the number of qubits in the system ; the matrix acts on the qubit .the ( non - degenerate ) ground state of is an equal superposition of all computational basis states . for , we use a random - energy hamiltonian , diagonal in the computational basis , where all -axis couplings between the qubits are realized : here is the digit in the binary representation of . where there are non - zero bits in the binary representation of , the coupling constant represents a -local interaction ( a non - trivial interaction between qubits ) .the will be selected from a suitable random distribution ; we fix the trivial energy shift . is diagonal in the computational basis so that the binary - ordered set of states is a permutation of the energy - ordered set of states defined in eq .[ eq : eigen ] ( in the generic case where the latter are non - degenerate ) .a hamiltonian of this type can be used to encode any finite computational optimization problem ( minimization of a function ) by choice of the .it is important to note that only - and -local interactions are experimentally feasible ; however , higher - order interactions may be reducible to such terms at the cost of auxiliary qubits . for each sample in the scatter plots , we solve the schrdinger equation numerically over the reduced time range for a given computation time , , using the dormand - prince method .this is an adaptive step - size algorithm ; solutions accurate to fourth- and fifth - order in the step size are used to estimate the local error in the former .if it is less than the desired tolerance , then the fifth - order solution is used for the integration .otherwise is decreased .( abscissa ) at ( 6,-0.5 ) minimum gap ( ) ; ( coodinate ) [ rotate=90 ] at ( -0.2,3.5 ) probability ( ) ; for comparison with later scatter plots , figure [ fig : p1 ] plots the probability against minimum gap for a single qubit . 
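A compact end-to-end version of this numerical experiment can be written with SciPy; solve_ivp's "RK45" method is itself a Dormand-Prince pair, matching the integrator described above. The uniform coupling range and the overall sign convention for the transverse field are illustrative assumptions. As a sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonians(n, rng):
    """H_B: unit transverse field on every qubit (ground state is the equal
    superposition).  H_P: diagonal in the computational basis, with one
    random coupling J_k per non-trivial subset k of qubits, as above."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    dim = 2 ** n
    H_B = np.zeros((dim, dim))
    for i in range(n):
        op = np.array([[1.0]])
        for j in range(n):
            op = np.kron(op, sx if j == i else np.eye(2))
        H_B -= op
    diag = np.zeros(dim)
    states = np.arange(dim)
    for k in range(1, dim):                        # J_0 is fixed to zero
        J_k = rng.uniform(-1.0, 1.0)               # illustrative range
        parity = np.array([bin(b & k).count("1") % 2 for b in states])
        diag += J_k * (-1.0) ** parity             # product of sigma_z's
    return H_B, np.diag(diag)

def run_instance(H_B, H_P, T):
    """Integrate i dpsi/ds = T H(s) psi on s in [0, 1] (RK45 = Dormand-
    Prince); return the success probability and the minimum gap."""
    dim = H_B.shape[0]
    psi0 = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
    rhs = lambda s, psi: -1j * T * (((1 - s) * H_B + s * H_P) @ psi)
    psiT = solve_ivp(rhs, (0.0, 1.0), psi0, method="RK45",
                     rtol=1e-8, atol=1e-10).y[:, -1]
    p = abs(psiT[np.argmin(np.diag(H_P))]) ** 2
    g_min = min(np.diff(np.linalg.eigvalsh((1 - s) * H_B + s * H_P)[:2])[0]
                for s in np.linspace(0.0, 1.0, 101))
    return p, g_min

rng = np.random.default_rng(1)
print(run_instance(*hamiltonians(2, rng), T=10.0))
```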
since the final hamiltonian is specified by a single parameter, is a ( not quite monotonic ) function of and .a test of accuracy of the simulation is that the small component of the final state should be real : ( which can be verified analytically ) .our numerical calculations reproduce this to high precision .for two or more qubits , the success probability will no longer collapse to a function of the minimum gap and computation time .figure [ fig:2qb_uniform ] is a scatter plot of success probability against minimum gap for a large set of two - qubit problem instances , with the coupling coefficients drawn from the uniform distribution , and a short computation time .observe the sharp upper and lower edges .the lower bound of the success probability is always for infinitesimally small .this arises when : with a four - fold degeneracy at the system remains in its original ground state ( [ eq : gs ] ) .( abscissa ) at ( 4.6,-0.7 ) minimum gap ( ) ; ( coodinate ) [ rotate=90 ] at ( -1,3.075 ) probability ( ) ; ( 0a ) at ( 0.0,-0.15 ) ; ( 1a ) at ( 2.525,-0.15 ) ; ( 2a ) at ( 5.05,-0.15 ) ; ( 3a ) at ( 7.575,-0.15 ) ; ( 4a ) at ( 10.1,-0.15 ) ; ( 0c ) [ left ] at ( 0,0.1 ) ; ( 0c ) [ left ] at ( 0,1.31 ) ; ( 0c ) [ left ] at ( 0,2.52 ) ; ( 0c ) [ left ] at ( 0,3.73 ) ; ( 0c ) [ left ] at ( 0,4.94 ) ; ( 0c ) [ left ] at ( 0,6.15 ) ; ( 0b ) [ right ] at ( 10.8,0.1 ) ; ( 0b ) [ right ] at ( 10.8,3.45 ) ; ( 0b ) [ right ] at ( 10.8,6.75 ) ; it is important to verify that this structure is independent of our choice of random distribution of coupling constants and that it is also not an artefact of the pseudo - random number generators used .figure [ fig:2qb_gaussian ] also shows scatter plots of success probability and minimum gap , but in this case the coupling coefficients are drawn from a gaussian distribution , ( mean , standard deviation ) .the trends and structure in the distributions are similar to those shown in figure [ fig:2qb_uniform ] .however , there are some subtle differences in sharpness between the gaussian and uniform cases . for a large minimum gap, the lowest probability occurs for large , so we see a sharp cutoff in the uniform case but not in the gaussian case . in generalthough , this shows that the results are independent of our choice of coupling constants and , as a different pseudo - random number generator routine was used , we can say that the results are not a numerical artefact .four computation times are shown : , , and .as increases , the distribution shifts and tends towards a success probability of for any , in agreement with the adiabatic theorem .+ + + the two interesting features of these scatter plots are the well - defined sharp edges and the densely - populated bands .we colour the data points according to the strength of the two - qubit interactions , as this is a special direction in the two - qubit parameter space , which will determine the amount of entanglement during the evolution .it is clear that the bands correspond to groups of hamiltonians with similar .the bands where can be seen as two separable one - qubit evolutions for and , so the total success probability is simply the product of the one - qubit success probabilities shown in figure [ fig : p1 ] : where another interesting point to note is that the bands of similar gradually reverse in order in the distribution as the computation time is changed . 
we have supplemented the uniform random data with sets of chosen on a rectangular grid with the same cut - offs .these have the advantage that all problem hamiltonians with a given value of can be plotted in the -plane and coloured by their minimum gap or success probability ; see figure [ fig : stability ] . the energy structure in the case of two qubits can be simply characterized .the final - state energies are given by [ levels ] the ground - state phase diagram has tetrahedral symmetry , with the regions of parameter space with ground states separated by six planes meeting at the the four lines [ boundary ] the eigenvalue dynamics has lower symmetry , since the degeneracy planes and admit entangled ground states and are inequivalent to the other four planes ; this is borne out by the observation that neither the eigenvalue dynamics nor the success probability is invariant under all permutations of the diagonal elements of .identification of the symmetry structure of larger systems may cast further light on the -qubit case .figure [ fig : stability ] shows a constant- slice through this phase diagram .the degeneracy planes are clearly indicated in the minimum - gap plot ; here the gap vanishes at .the lower two plots demonstrate the non - adiabaticity of the time evolution , with the success probability increasing ( but not completely monotonically ) with distance from the degeneracy planes .the energy error is non - monotonic : it is small at the degeneracy planes ( since the final state will have only a small admixture orthogonal to the degenerate ground states ) and small where a large gap reduces the probability of transitions . through ground - state phase diagram for an ensemble of two - qubit hamiltonians .plots show minimum gap ( top left ) , position of minimum gap ( top right ) , success probability for ( bottom left ) and energy error for ( bottom right).,scaledwidth=50.0% ] these plots suggest a projection of a surface onto the plane ; we seek to find a suitable parameterization of the set of hamiltonians to collapse it onto a low - dimensional surface .we find that a plot of against the minimum gap and the position of the gap indeed shows that all points lie close to a curved surface ( which rises with increasing ) .this is understandable , since those two parameters largely determine the shape of the lowest two energy levels .figure [ fig:3 ] shows a projection of this surface onto the plane .visual inspection shows that the colour is to a good approximation a function only of position in the plane . notethat the position of the points depends only on the hamiltonian parameters , while the success probability depends also on the computation time . against minimum gap .the have been chosen from the uniform distribution for random problem instances .points are coloured by the success probability at .,scaledwidth=75.5% ]we have shown that the relationship between the success probability and is not a pure functional relationship for simple two - qubit systems . however , it is important to determine whether the interesting structure in this relationship remains in larger systems . to determine whether these densely - populated bands represent groups of problem instances that have followed similar evolution paths for the state vector ( e.g. 
the system remaining mostly in the ground state , then being excited at a single avoided crossing ) , we calculated the average overlap with the ground state : the points in figure [ fig : large_systems ] are coloured with respect to this average overlap value , , and we can see a smooth graduation across the figures , with the average overlap with the ground state increasing with the success probability .the exception to the smooth graduation of is the densely populated band where .this band must consist of cases with a degenerate or near - degenerate ground state at , as it includes cases which remain close to the instantaneous ground state throughout the majority of the evolution but have a low success probability .these results also lend credence to the idea that the structure is closely linked to the choice of hamiltonian parameters .we note that these distributions are reminiscent of the 2d projections of the higher - dimensional equilibrium surfaces seen in catastrophe theory . in this case success probability , and are all internal variables of the system and not independent control parameters , so we are looking at a different situation to those usually studied in catastrophe theory . identifying the nature of this surface and the dimensions of the phase space that it exists inis an important task , as it could have a major impact on adiabatic algorithm design . at this pointwe can conjecture that the constraint originates from an adiabatic invariant of the hamiltonian . real parameters are required to specify the density matrix of qubits , reducing to for a pure state as discussed here .the pechukas - yukawa approach to eigenvalue dynamics ( see e. g. ref . ) , which can be extended to density - matrix dynamics , has at least adiabatic invariants , thus reducing the number of parameters required .we find it strange that , to the best of our knowledge , there has been no research on adiabatic invariants of adiabatic quantum computers .we speculate that a systematic investigation of adiabatic invariants of quantum computers especially adiabatic and approximately adiabatic computers could yield important information about their behaviour and have a major impact on adiabatic algorithm design .we have shown that the relationship between the success probability and the minimum ground state gap may not be as straightforward as is often assumed .there is a rich structure of distinct sharp edges and densely - populated bands in the distribution , particularly in smaller systems .a partial explanation has been proposed , whereby this is the projection of a higher - dimensional surface ; identification of the parameters governing this surface will guide understanding of the set of problems amenable to adiabatic quantum computing .we do not propose a definitive explanation of the origin of this rich structure : this remains an open question .
We explore the relationship between two figures of merit for an adiabatic quantum computation process: the success probability P and the minimum gap g_min between the ground and first excited states, investigating to what extent the success probability for an ensemble of problem Hamiltonians can be fitted by a function of g_min and the computation time T. We study a generic adiabatic algorithm and show that a rich structure exists in the distribution of P and g_min. In the case of two qubits, P is to a good approximation a function of g_min, of the stage in the evolution at which the minimum occurs, and of T. This structure persists in examples of larger systems.
identifying the universal principles and patterns that shape cities and urban human activities is crucial for both fundamental and practical reasons , as it not only sheds light on social organizations , but is also crucial for planning and designing safer and more sustainable cities .complex systems approaches provide us with tools and new ways of thinking about urban areas and the ways they may correspond to living organisms .the activities of people in urban areas are responsible for the emergence of patterns on large scales that define the dynamics of cities . until recently , one of the main obstacles to quantify such ideas was the lack of large scale data on flows of people and their activities . however , in the last decade there has been a surge of new technologies that make it possible to obtain real - time data about populations , and these new `` social microscopes '' have changed in fundamental ways the study of social systems .recent examples include characterizing and predicting individual human mobility patterns using mobile phone and social media data . toward that end, twitter has been a valuable tool to track and to identify patterns of mobility and activity , especially using geolocated tweets .geolocated tweets typically use the global positioning system ( gps ) tracking capability installed on mobile devices when enabled by the user to give his or her precise location ( latitude and longitude ) .geolocated tweets have recently been used to study global mobility patterns and spatial patterns and dynamics of sentiment . herewe describe the collective dynamics of the greater metropolitan area of new york city ( nyc ) as reflected in the geographic dynamics of twitter usage .we observe and quantify the patterns that emerge naturally from the hourly activities at different subareas , and discuss how they can be used to understand the social dynamics of urban areas .twitter data can be understood not just by considering where people are but also by the extent to which they are preoccupied or have time and attention to devote to twitter posting .we collected more than 6 million geolocated messages from twitter s streaming application programming interface ( api ) from which more than 90 of geolocated tweets can be downloaded as they occur . from this data we observe wake / sleep cycles and the daily social `` heartbeat '' of the nyc area , reflecting the commuting dynamics from home to work in the diurnal cycle .we identify differences in weekday and weekend dynamics , and find specific locations where activity occurs at certain hours , including the early morning at air transit hubs .we also find anomalous events associated with specific individuals whose high engagement with twitter at specific times can dominate their local region .we discuss how this dataset reflects the collective patterns of human activity and attention in both space and time .we collected tweets between the latitudes ] from august 19 , 2013 to december 31 , 2013 .[ fig : map ] shows the geographical twitter coverage .we aggregated the data of the corresponding days of the week , and in hourly units of time , resulting in time slices describing each hour of a `` typical week . ''we divided the geographic area into cells ( ) . 
For each hour of the week we obtained the difference in that cell from the average number of tweets over the week as
\[ A = \tanh\left[ c \left( N - \langle N \rangle \right) \right], \]
where N is the number of tweets in the cell at a given hour, ⟨N⟩ is the average number of tweets in a given cell averaged over all days and hours, and c is a constant that controls the slope of the hyperbolic tangent (the same value was used in all figures). Note that the tanh function bounds A to the range [-1, 1]. The values of A were then used to generate a heat map of Twitter activity in the NYC area for a particular hour (see fig. [fig:day]). We also constructed two- and three-dimensional movies of those patterns that can be accessed at http://www.necsi.edu.

[Figure: NYC geographical region with the locations of 6 million tweets shown. The sharp land-sea boundary is apparent, as is the boundary of the land area with high population density.]

The dynamics of Twitter can be understood first by recognition of the dominant diurnal wake-sleep cycle and geographic commuting to work in Manhattan from surrounding bedroom communities for the conventional workday hours, approximately 9:00am-5:00pm. Earlier in the morning people tweet from their homes, and therefore Manhattan has far fewer tweets than average. Tweets are concentrated in Manhattan during the morning work hours and peak there at mid-day (around 13h), then become much more widely dispersed after work hours. The bedroom community activity is high in the evening, clearly visible at 10:00pm, and decreases as people go to sleep.

[Figure: Twitter activity on weekdays compared to weekends, where the difference in urban life can be clearly seen during the late night (top panels) and late afternoon (bottom panels); colors as in fig. [fig:day].]

In addition to these largest-scale daily patterns, further patterns emerge when comparing weekdays (i.e., Monday to Friday) to weekends. In fig. [fig:weekdays] we compare the late-night and late-afternoon Twitter activity between Sunday and a workday, at 2:00am and 5:00pm.
The dynamics of Twitter can be understood first by recognizing the dominant diurnal wake-sleep cycle and the geographic commuting to work in Manhattan from surrounding bedroom communities during the conventional workday hours, approximately 9:00am-5:00pm. Earlier in the morning people tweet from their homes, and therefore Manhattan has far fewer tweets than average. Tweets concentrate in Manhattan during the morning work hours, peak there at mid-day (around 13h), and become much more widely dispersed after work hours. The bedroom-community activity is high in the evening, clearly visible at 10:00pm, and decreases as people go to sleep. In addition to these largest-scale daily patterns, further patterns emerge when comparing weekdays (i.e., Monday to Friday) to weekends. In Fig. [fig:weekdays] we compare the late-night and late-afternoon Twitter activity between Sunday and a workday at 2:00am and 5:00pm. (Fig. [fig:weekdays]: Twitter activity on weekdays compared to weekends; the difference in urban life is clearly seen during the late night (top panels) and late afternoon (bottom panels). Colors as in Fig. [fig:day].) While workday activity is suppressed almost everywhere at 2:00am, the nightlife activity at 2:00am Sunday morning (late Saturday night) has a unique pattern, with high activity in wide swaths of the city and suburbs. High levels of activity span a band extending from lower Manhattan across to Brooklyn and Hoboken, New Jersey. Other hot spots of night activity include the Bronx and Union City, and more specific spots in surrounding communities. Sunday afternoons also present an unusual pattern of widely dispersed but localized spots of activity, likely corresponding to tweets in residential community areas. Moreover, a peak of activity is observed in Central Park during most of the day on Sunday that is not observed on other days of the week. We also find other interesting activity at particular locations and times, as can be seen in Fig. [fig:spots]. (Fig. [fig:spots]: examples of times of high activity at the locations listed in Table [tab:spots]. The left panels show annotated activity plots using the colors of Fig. [fig:day]; the right panels show the corresponding height as a three-dimensional surface.) The top row shows the high activity at 6:00pm on Friday at the three main airports of the area: John F. Kennedy (JFK) Airport (A), Newark Airport (B), and La Guardia Airport (C). Weekend activities are seen in the bottom row; we point out the Meadowlands Sports Complex (D) and the Statue of Liberty (E). Table [tab:spots]:

A. JFK Airport — Mon-Sat 6am; every day 4pm-6pm; Sun 8pm
B. Newark Airport — every day 4pm-7pm
C. La Guardia Airport — Sun-Fri 4pm-7pm
D. Meadowlands Sports Complex — Sat 2pm-9pm; Sun 9am-7pm
E. Statue of Liberty — Sat 11am-4pm; Sun 2pm

Finally, Fig. [fig:anom] shows an example where an individual's behavior dominates the collective Twitter activity. (Fig. [fig:anom]: single-user anomaly; the annotated peak corresponds to a single user tweeting more than 10 times the average of other users.) We used statistical measures to discover that a single Twitter user was responsible for most tweets in a specific region around 5:00am on weekdays. Detecting such anomalies enables distinguishing collective social dynamics from individual behavior that can at times dominate aggregate measures.
To identify a single measure that can capture key aspects of the collective behavior of the city, we considered the average distance of the tweets from a location in Manhattan (Central Park) over time. The distance was calculated using the haversine formula,

\[ d = 2R \arcsin\!\left(\sqrt{\sin^2\!\Big(\frac{\phi_2-\phi_1}{2}\Big) + \cos\phi_1 \cos\phi_2 \,\sin^2\!\Big(\frac{\lambda_2-\lambda_1}{2}\Big)}\right), \]

where R = 6371 km is the average radius of the earth, and \((\phi_1,\lambda_1)\), \((\phi_2,\lambda_2)\) are the latitude and longitude of the two points. The results are shown in Fig. [fig:avgd]. (Fig. [fig:avgd]: average distance from Central Park at the different hours of the day for each day of the week. The working days, Monday to Friday, are shown in shades of blue — light blue for Monday through the darkest blue for Friday — while the weekend days are shown in shades of red: red for Saturday and dark red for Sunday.) We find that, while many people are expected to be located in bedroom communities, the average distance to Manhattan falls overnight, reflecting the fact that downtown activity continues through the night, so that most Twitter activity in the late hours and early morning (12:00am to 5:00am) happens in areas in or close to Manhattan. During weekdays a sharp peak at 6:00am corresponds to people tweeting in suburban areas before commuting. After 8:00am the distance grows gradually during the day and through the evening, until it falls again overnight. A change in this pattern is observed during weekends. There is a horizontal shift of about 3 hours in the curves for Saturday and Sunday, reflecting people's tendency to tweet later into the night and to wake up later the next morning. The pre-commuting peak is entirely missing.
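A sketch of this measure, assuming tweet arrays of latitudes, longitudes and hour-of-day labels; the reference coordinates for Central Park are approximate and ours:

```python
import numpy as np

R_EARTH_KM = 6371.0                      # mean Earth radius
CP_LAT, CP_LON = 40.7829, -73.9654       # Central Park (approximate)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    h = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin((l2 - l1) / 2) ** 2
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(h))

def avg_distance_by_hour(lat, lon, hour_of_day):
    """Average tweet distance from Central Park for each hour 0..23."""
    d = haversine_km(lat, lon, CP_LAT, CP_LON)
    return np.array([d[hour_of_day == h].mean() for h in range(24)])
```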
In this paper we characterized, for the first time, the patterns of weekly activity of the NYC area using more than 6 million geolocated tweets posted between Aug 19, 2013 and Dec 31, 2013. We related the collective geographical and temporal patterns of Twitter usage to the activities of urban life and the daily "heartbeat" of a city. The largest-scale daily dynamics are the waking and sleeping cycle and commuting from the suburbs to office areas in Manhattan, while the hourly dynamics reflect the interplay between commuting, work and leisure. We showed not only that Twitter can provide insight into human social activity patterns, but also that our analysis is capable of identifying interesting locations of activity by focusing on departures from global behaviors. We observed a peak of Twitter usage in the suburbs of NYC before people begin their workday and during evening hours, while the main activity during the workday and late at night is in Manhattan. In addition to the daily differences, we also characterized the weekly patterns, especially the differences between weekend and weekdays. Daytime recreational activities concentrate at identifiable locations spread widely across residential areas of the city. We determined more specific times and locations of high activity at air transportation hubs, tourist attractions and sports arenas. We analyzed the role of particular individuals who have large impacts on overall Twitter activity, and found effects which may be considered outliers in discussions of social activity but are an important aspect of human activity in the city. We explored the use of the average distance from downtown to understand the dynamics of Twitter usage: while weekdays are similar in the geography of Twitter usage, the sleep-waking cycle is shifted later by about 3 hours during the weekends, and the pre-commuting Twitter activity peak is absent. Taken together, these results demonstrate the potential of using social media analysis to develop insight into both geographic social dynamics and activities, and open the possibility of understanding and comparing the life of cities on various scales.

A. Pentland, Reality mining of mobile communications: toward a new deal on data, in The Global Information Technology Report 2008-2009, S. Dutta and I. Mia, eds. (World Economic Forum, Geneva), p. 75. http://hd.media.mit.edu/wef_globalit.pdf
A. Sadilek and J. Krumm, Far out: predicting long-term human mobility, in Twenty-Sixth AAAI Conference on Artificial Intelligence (2012). http://research.microsoft.com/en-us/um/people/jckrumm/publications%202012/sadilek-krumm_far-out_aaai-2012.pdf
F. Morstatter, J. Pfeffer, H. Liu and K. M. Carley, Is the sample good enough? Comparing data from Twitter's streaming API with Twitter's firehose, in 7th International AAAI Conference on Weblogs and Social Media (ICWSM) 2013. arXiv:1306.5204 [cs.SI] (2013).
Describing the dynamics of a city is a crucial step toward both understanding human activity in urban environments and planning and designing cities accordingly. Here we describe the collective dynamics of New York City and surrounding areas as seen through the lens of Twitter usage. In particular, we observe and quantify the patterns that emerge naturally from the hourly activities in different areas of New York City, and discuss how they can be used to understand urban areas. Using a dataset that includes more than 6 million geolocated Twitter messages, we construct a movie of the geographic density of tweets. We observe the diurnal "heartbeat" of the NYC area. The largest-scale dynamics are the waking and sleeping cycle and commuting from residential communities to office areas in Manhattan. Hourly dynamics reflect the interplay of commuting, work and leisure, including whether people are preoccupied with other activities or actively using Twitter. Differences between weekday and weekend dynamics point to changes in when people wake and sleep and engage in social activities. We show that by measuring the average distance to the heart of the city one can quantify the weekly differences and the shift in behavior during weekends. We also identify locations and times of high Twitter activity that occur because of specific activities. These include early-morning high levels of traffic as people arrive and wait at air transportation hubs, and, on Sunday, at the Meadowlands Sports Complex and the Statue of Liberty. We analyze the role of particular individuals where they have large impacts on overall Twitter activity. Our analysis points to the opportunity to develop insight into both geographic social dynamics and attention through social media analysis.
It is often useful in the visual arts to depict a scene composed within a very wide angle of view. For scenes that are wide but not tall (landscapes) one can project onto a cylinder and then unfold this developable surface isometrically onto a plane to get the so-called cylindrical or panoramic perspective. When the scene is wide all around an axis, it can be projected onto a half-sphere and then deformed onto a disc on the plane. This is the so-called spherical perspective, described thoroughly in the 1960s. It is in fact a hemispherical perspective, allowing a depiction of up to 180 degrees around an axis, wherein images of lines can be easily constructed with ruler and compass within a reasonable approximation. It is often useful, however, to depict an even wider scene. The author, having taught a course in art and mathematics for a few years to a varied audience (urban sketchers, architects, school teachers, programmers), has often received requests from students regarding two questions: how to plot (either freehand or with minimal instruments) a view wider than that allowed by (hemi)spherical perspective, and how to draw a sphere reflection. This paper solves the former question and relates it to the latter. It also hopes to help clarify some concepts of anamorphosis and perspective. In their 1968 work, Barre and Flocon described a ruler-and-compass method to plot a 180-degree spherical perspective. This was a work with a focus on the artistic practice of actual freehand drawing. Since then, several works of a computational nature have proposed various types of wide-angle perspectives, by expanding the angle of view up to 360 degrees or by generalizing the shapes of the projection surfaces. These works are of a computational nature and are not concerned with the artistic practice of freehand or ruler-and-compass drawing. As far as this author can tell, there has been no publication proposing a system that allows a depiction of a 360-degree spherical perspective in a way adequate for drawing from observation or from orthogonal plans with the use of minimal equipment such as ruler and compass. Regarding contributions by artists themselves, Dick Termes is well known for his paintings on spherical surfaces and has published a book on the subject. His approach is based on gridding in the manner that follows from hemispherical perspective. His solution to go beyond 180 degrees is simply to draw the two complementary 180-degree views separately and place them adjacent to each other. When drawing on a sphere he can put each view on its own hemisphere, but that is a work of anamorphosis and not of perspective proper. The most common artistic device for representing views beyond 180 degrees is that of drawing sphere reflections from observation, and we will consider the relation of these with spherical perspective. Sphere reflections have also been proposed as a way to obtain a wide field of view (with difficulties that we will discuss ahead); again, this is work of a computational nature. Perspectives are ways of representing spatial scenes on a plane, with relation to an observer. We take a scene to mean any subset of euclidean 3-dimensional space. We represent an observer by a point in 3-space, usually denoted by O.
Our purpose is to map each point of a scene in 3-space onto a point on a plane, the latter point being called the perspective image of the former. Mostly we will be concerned with sets of points, lines, planes, or circles, and will use these to approximate more complicated objects. A map from 3-space to the plane that fails to project these elementary objects with simplicity is not adequate for the purposes of a perspective to be used "by hand." All the maps that are usually called perspectives (plane, cylindrical, spherical) can be constructed as particular cases of a single scheme that we now describe. Let O be an observer and P a point in a scene. We call the ray from O through P the ray of sight from O to P. Let S be the unit sphere centered at O. We call the intersection of the ray of sight with S the direction of sight from O to P. Points will be equivalent if they are radial from O. (These equivalence classes are naturally formalized into points of the projective space in classical perspective. For more general perspectives, however, we have to take into account the need to represent both a direction and its diametrically opposite one; hence the sphere is the natural manifold of directions.) A point P in a scene defines both a vector and a ray with the same notation; we will let the ambiguity stand and let context distinguish them. The unit vector on the sphere can be seen as the equivalence class of all the points that have its direction of sight from O, i.e., all the points in the ray of sight. Now recall a few generalities about circles on spheres. A great circle is a circle on a sphere defined by the intersection of the sphere with a plane through the center. Given a point p on the sphere, we call the antipode of p its diametrically opposite point on the sphere. Each point p on the sphere defines a family of great circles that covers the sphere, all of those circles crossing both p and its antipode. Any two non-antipodal points p and q on the sphere define a unique great circle, the intersection of the sphere with the plane through O, p and q. A meridian is one contiguous half of a great circle. Given a point p and its antipode, we call a p-meridian a meridian that is an arc of a great circle whose endpoints are at p and at its antipode. We are now ready to consider the problem of vanishing points. Let l be a line not crossing O. There is a single plane through O containing l, and this plane projects radially onto a great circle on the sphere. The set of rays of sight of l forms a cone which is a half-plane contained in that plane. The boundary of this half-plane is the translation of l to the origin; it corresponds to two rays from O, neither of which is a ray of sight of an actual point of l, but which are the limits of the directions of sight of an observer who follows the line to infinity in both directions. The intersection of the half-plane with the sphere is the set of directions of sight of l: it is half of the great circle, and it does not contain the two antipodal endpoints, which are the intersection of the boundary line with the sphere.
The two missing directions in the example above correspond to the intuitive notion of vanishing point. As the eye follows a line to infinity, the ray of sight will in the limit become parallel to the line it follows. This happens at both ends of the line. But since it happens for no actual point on the line, the projection of the line on the sphere of directions will be missing the two points at its ends. Such missing points, for lines or more complex sets, can be added to the cone of directions by taking the topological closure. Let a scene and an observer O be given, and let the rays of sight of the points of the (topologically closed) scene define a cone with vertex at O. We call the closure of the intersection of this cone with the unit sphere the cone of sight of the scene relative to O. We will abuse the term to also mean the corresponding cone of rays in 3-space stemming from O that project radially onto it. Taking the topological closure of the cone of sight allows us to obtain its vanishing points in the following definition. (Note that we took the closure of the scene first in order to avoid the appearance of "false" vanishing points; in what follows we always assume scenes to be closed sets, so this step may be ignored.) We call vanishing points of a scene the frontier points of the cone of directions that are not directions of actual points in (the closure of) the scene. Going back to the example of the line, we take the closure of the half circle and obtain its two endpoints. We see that the set of vanishing points of a line l is the intersection of its translation to the origin with the sphere of directions. The cone of sight of l is one half of the great circle, including its endpoints; the corresponding cone of rays is the half-plane plus the set of two diametrically opposite rays passing through the vanishing points. Let now a plane not passing through O be given. The cone of rays of its individual points forms a half-space whose frontier is a parallel plane passing through O. This frontier is not contained in the set of rays of sight of individual points of the plane; it is, however, contained in the closure of that set. On the sphere, the cone of sight will be a hemisphere and the set of vanishing points will be the great circle that forms its boundary. (In classical perspective the visible part of this circle forms a line, called a vanishing line.)
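A numerical sketch of these notions (the names are ours, nothing beyond the definitions above is assumed): the direction of sight of a point, and the two vanishing directions of a line p(t) = a + t d, which are the antipodal pair ±d/|d| on the sphere of directions.

```python
import numpy as np

def direction_of_sight(p, o):
    """Unit vector on the sphere of directions for scene point p seen from o."""
    v = np.asarray(p, float) - np.asarray(o, float)
    return v / np.linalg.norm(v)

def vanishing_directions(d):
    """The two vanishing points of a line with direction d: the antipodal
    pair +-d/|d|, the limits of the direction of sight along the line."""
    u = np.asarray(d, float)
    u = u / np.linalg.norm(u)
    return u, -u

# Points far along the line approach the vanishing direction:
o, a, d = np.zeros(3), np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.0, 0.0])
far = direction_of_sight(a + 1e6 * d, o)
print(np.allclose(far, vanishing_directions(d)[0], atol=1e-5))  # True
```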
We say that a surface is radial with respect to a point O if any ray stemming from O hits the surface in at most one point; we say simply that a surface is radial when the point is obvious from context. A perspective is a map from euclidean 3-space to a region of the plane that is achieved by the composition of two maps: the first we call an anamorphism and the second a flattening. Given an observer O, a surface that is radial for O, and a scene, we call the radial projection of the scene onto the surface the anamorphosis (or anamorphic image) of the scene relative to O, and the map that takes each point to its anamorphic image the anamorphism onto that surface relative to O. The anamorphosis is an image painted on the surface such that the observer at O would mistake that image for the spatial scene itself. We should notice that we are using the word in two senses: there is "anamorphosis" as the problem of trompe-l'œil, and there is anamorphosis as the mathematical construction we just defined. That the latter is the solution of the former cannot be demonstrated mathematically; it is an empirical fact that depends on the approximate validity of linear optics and on the physiology of vision. A flattening is a map from the anamorphic surface to a region of the plane; it takes the anamorphosis of a scene to its perspective image. A perspective, then, is the composition of an anamorphism with a flattening. We note that the anamorphism is fully defined by the choice of surface and observer; the map itself is just the conical projection from O onto that surface. In classical perspective the anamorphic surface is a plane and the flattening can be seen as the identity map up to scaling. Lines project onto lines, but for each line at most one of the vanishing points is present in the perspective image; lines parallel to the anamorphic plane have no vanishing points at all in their perspective image. In cylindrical perspective the surface is a cylinder with O on the axis, and the flattening consists in unrolling the cylinder onto a plane. The anamorphic images of straight lines are arcs of ellipses, and the flattening unrolls them onto sinusoids. The pairs of vanishing points of lines on the sphere of directions project onto the cylinder, except when their directions are along the axis of the cylinder. In the so-called spherical perspective of Barre and Flocon, the surface is a hemisphere of a sphere around O and the flattening is the restriction to a hemisphere of the azimuthal equidistant projection. Lines project onto arcs of great circles on the hemisphere, and have one vanishing point in both the anamorphic image and the perspective image, or two points if these are on the equator. In the full spherical perspective considered ahead, the surface is a sphere around O and the flattening is the azimuthal equidistant projection. Lines project onto halves of great circles on the sphere, and always have two vanishing points in the anamorphic image. Because the anamorphic surface is a sphere, just like the sphere of directions, the anamorphism is a homothety, and can be identified with the identity map. We note an interesting symmetry between spherical and plane perspective: in classical perspective the flattening is trivial but the anamorphosis is not, while in spherical perspective the reverse is true. This is because in classical perspective the plane of the anamorphosis can be identified with the plane of the perspective, while in spherical perspective the anamorphic sphere can be identified with the sphere of directions, so the flattening in the former case and the anamorphosis in the latter can be identified with the identity map. We have placed no constraints on the nature of the surface or of the maps; in fact, in artistic practice there are examples where the surface is anything from a set of disconnected planes to a myriad of dust-like particles in suspension.
The surface is certainly not necessarily a smooth, nor a connected, surface, although it is so in all the usual perspectives such as the one we treat here. We will now define our spherical perspective within the general scheme outlined above. First we must define the anamorphic surface and the place of our observer. We take the surface to be a spherical one of arbitrary positive radius, with the observer O at its center. Choosing these elements fully defines our anamorphism. What can we say about it? As already discussed, the anamorphic sphere being a sphere, the anamorphic image coincides with the cone of sight in the sphere of directions. Hence the anamorphosis of a line onto a sphere relative to its center is an arc of a great circle: the rays of sight from O to the points of the line define (half of) a plane that crosses O. The anamorphic image of that plane is a great circle, and the image of the line is half of the great circle, delimited by two vanishing points, determined thus: if l is a line, and l' is the translation of l that passes through O, then the vanishing points of l are the intersection of l' with the sphere. We will now define our flattening map, and must first define a system of coordinates. We consider a ray stemming from O, representing a privileged direction of sight. We call it the central ray of sight and its axis the central axis of sight. We place an orthonormal reference frame at O, such that the positive side of the y-axis coincides with the central ray of sight. For quick reference we name the points where the three axes cut the sphere: we call North (N) the intersection of the central ray of sight with the sphere and South (S) its antipode; East (E) the point where the positive x-axis touches the sphere and West (W) its antipode; Zenith (Z) the point where the positive z-axis touches the sphere and Nadir (D) its antipode. We call the plane orthogonal to the central axis of sight (the xz-plane) the observer's plane, and the yz-plane the sagittal plane. The observer's plane intersects the sphere in a great circle we call the equator. We call the half-space y > 0 the anterior half-space (representing everything in front of the observer) and the half-space y < 0 the posterior half-space (representing all that is behind the observer). We construct a flattening map that projects all the points of the sphere onto a disc on a plane, with the exception of the south pole. This map, composed with the anamorphosis, results in a perspective that images all points of 3-space except those located on the ray from O through S. We define the flattening thus: we map the sphere minus its south pole one-to-one onto a disc on the plane in such a way that each north-south meridian goes to a line segment, and i) distances are preserved along each meridian, and ii) angles between meridians are preserved at the point N. Condition i) means that the map is an isometry for each meridian separately. Since distances measured along great circles of the sphere are proportional to angles at the center, this means that if points lie on the same meridian then the distances between their images equal the corresponding arc lengths, these equalities being valid modulo multiplication by the adequate scale factor.
Condition i) also implies that N will be mapped to the center of the disc, with the segments corresponding to the meridians radiating from it. The lines stemming from the image of N we call measuring lines, because angular distances are preserved along them; they will be the essential tool for our plotting of points. Condition ii) means that the angles between measuring lines at the image of N equal the angles of their meridians at N (which are the angles between the planes that define them). It ensures the meridians are distributed radially, preserving their tangents at N, that is, they look as if orthogonally dropped onto the tangent plane to the sphere at N. We call longitude of an N-meridian the angle between its tangent at N and that of the east meridian; the longitude of a meridian equals the angle of its measuring line with the E measuring line. Conditions i) and ii) together imply that the two meridians of each great circle through N form a diameter of the perspective disc and that distances are preserved along interior points of these diameters. In intuitive terms, we look at the north-south meridians as inextensible threads, connected to the sphere at the north and south poles. We cut them free at the south pole only and, pulling them straight along their tangents at N, make from them a disc on the plane tangent to the sphere at N. At the end, the threads radiate from N, each under the orthogonal shadow of its former position, so that the angles their tangents make at N are the same as before, and distances have not changed between points on the same thread. The threads together form a disc of radius πr, where r is the radius of the sphere, with N at the center; each point of the sphere corresponds to a single point on the disc, except for the south pole, which was blown up and is missing from the outer edge of each thread, making the disc open at its boundary. We take the closure of the disc and get a frontier circle which we call the south circle, or the blow-up of the south pole, each of whose points corresponds to one of the directions from which the original pole could be approached (each of them can be identified with one of the meridians, or with a ray of the tangent plane at S). When context is clear we will use the same letters for points and their images, and will call equator the perspective image of the equator. See fig. [disc360] for a picture of the perspective images of these points and lines. Points in the anterior half-space are projected by the anamorphosis onto the hemisphere in front of the equator, and these are flattened in turn onto the inner part of the perspective disc contained within the image of the equator; this inner disc has half the radius of the perspective disc. The posterior half-space is projected onto the outer ring between the image of the equator and the circle of the blow-up of the south pole. In terms of the coordinates, the flattening composed with the anamorphosis gives the perspective map (equation [exactxyz], taking r = 1)

\[ (x,y,z) \;\mapsto\; \arccos\!\left(\frac{y}{\sqrt{x^2+y^2+z^2}}\right)\frac{(x,z)}{\sqrt{x^2+z^2}}, \]

from 3-space minus the ray through S onto a disc of radius π: we project orthogonally against the central axis, take the unit vector in the observer's plane, and then scale it to a length equal to the value of the angle from N. The natural set of spherical coordinates for this map is (θ, λ), where θ ∈ [0, π] is the latitude measured from the north pole and λ the longitude, so that the map reads (θ, λ) ↦ θ(cos λ, sin λ); a numerical sketch follows.
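A minimal numerical sketch of this map (our own naming; assumes r = 1 and the axes as above: y forward, x east, z up):

```python
import numpy as np

def spherical_perspective(p):
    """Azimuthal-equidistant 360-degree perspective: 3-space point -> disc
    of radius pi. Undefined only on the backward ray through S (x = z = 0,
    y <= 0)."""
    x, y, z = np.asarray(p, float)
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(y / r)             # angle from the north point N
    rho = np.hypot(x, z)
    if rho == 0.0:                       # point on the central axis
        if y > 0:
            return np.zeros(2)           # N maps to the disc center
        raise ValueError("point on the ray through S: no image")
    return theta * np.array([x, z]) / rho

# The map depends only on the direction of sight: scaling p changes nothing.
p = np.array([1.0, -2.0, 0.5])
print(np.allclose(spherical_perspective(p), spherical_perspective(3 * p)))  # True
```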
In these coordinates we see that the perspective map does not in fact depend on the norm of the point, which is to be expected since the anamorphosis is a central projection. (Fig. [disc360] shows the perspective disc with the images of these points and lines.) Solving a scene in perspective means finding the perspective images of all points of the scene. Classically, and in the interest of solving a scene with simple instruments, we are concerned with the images of points grouped into lines, and especially with their vanishing points. We will now show how to solve a scene in our 360-degree spherical perspective using ruler and compass. A common technique for solving scenes in classical perspective is to make the plane of the perspective image do double or triple duty by superposing on it various orthogonal projections. This technique also works in spherical perspective; we illustrate it in the following construction, which we will use in the next section. Construction [on_equator] (perspective image of points on the observer's plane): let P be a point on the observer's plane. The ray from O through P crosses the equator at a point whose perspective image lies on the equator of the perspective disc, on the measuring line whose longitude is that of the meridian crossing it. Hence the following construction plots the perspective image of P: make the perspective plane do double duty, using it to represent also the orthogonal back view of the observer's plane, with the image of O in the perspective view coinciding with O in the orthogonal view, and the perspective scaled so that the equator's perspective image coincides with its orthogonal image. Now plot P in the orthogonal image from its coordinates and trace the ray from O through P; the point where this ray touches the equator is the perspective image of P. The problem of solving a scene can be divided into two parts: plotting points and lines in the anterior half-space and in the posterior half-space. Plotting inside the anterior half-space is solved by Barre and Flocon; we give a very quick review of the method. The perspective image of lines in the anterior half-space is well approximated by arcs of circle. This is important for two reasons: first, in drawing practice, arcs of circle are easy to trace with ruler and compass or even freehand; second, through any three non-collinear points on a plane there passes a single circle. (To find that circle, trace the perpendicular bisectors of two of the chords and intersect them at a point X, then open the compass from X to one of the points; a computational version is sketched below.) In what follows we always assume that the lines being plotted do not cross O unless stated. (The case where a line does cross O is easy, the perspective image consisting of two antipodal points that we will also learn how to plot.)
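The three-point circle used throughout these constructions, as a computational sketch (ours, not the author's):

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Center and radius of the unique circle through three non-collinear
    2-D points, by intersecting the perpendicular bisectors of two chords."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system from |c - p1|^2 = |c - p2|^2 = |c - p3|^2.
    a = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                  x3**2 - x1**2 + y3**2 - y1**2])
    center = np.linalg.solve(a, b)       # raises if the points are collinear
    return center, np.linalg.norm(center - np.asarray(p1, float))

# circle_through((0, 1), (1, 0), (-1, 0)) -> center (0, 0), radius 1.0
```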
We have to consider two cases. We say that a plane is frontal if it is perpendicular to the central axis. Let l be a line on a frontal plane, and first suppose that this plane is not the observer's plane. Translating l to O we find its two vanishing points, which are diametrically opposite points on the equator. They are found by drawing the translated line directly on the perspective disc, to obtain its intersection with the equator (the perspective plane doing double duty as in construction [on_equator]). Next we find a third point. If the line is not vertical, it intersects the sagittal plane at some point P; we measure the angle of elevation of P and plot the measure of this angle on the vertical measuring line. If the line is vertical, then it crosses the horizontal plane, and we measure instead the angle with the central ray of sight at this crossing point and plot it on the horizontal measuring line. The image of the line is well approximated by the arc of circle that crosses the two vanishing points and the third point (see fig. [anterior_lines]). Now suppose the frontal plane is the observer's plane. We get two vanishing points in the same way as above. The third point is obtained as before but will now be found on top of the equator of the perspective disc, and the arc of circle will be one half of this disc. Notice that measuring angles along the vertical plane and along the horizontal plane is very natural in practice, and the kind of thing one can do with an improvised theodolite when drawing from nature. We note that we now know how to plot an arbitrary point located in the anterior half-space: given a point P, consider the frontal plane through P and, on it, a vertical line and a horizontal line through P. We have just learned how to obtain the perspective images of these lines, and the perspective image of P will be found at the intersection of their images. We say that a line is a receding line if it intersects the observer's plane at a single point. Let P be the point of intersection of a receding line with the observer's plane. We construct the image of P as in construction [on_equator], and denote the image also by P. The plane defined by the line and O must also intersect the equator at the antipodal point P'. To find a third point, we translate the line to O and intersect it with the sphere to find the two vanishing points. One of these will be on the anterior hemisphere; we find it by the construction for plotting an arbitrary anterior point, and let its image be V. We trace the auxiliary arc of circle through P, V and P' that is the image of the plane; the image of the line is the part of the arc that lies between P and V. A case of particular interest is that of the central lines: we say that a line is central if it is perpendicular to the observer's plane. In this case the vanishing point projects onto N, hence N lies between P and P'. The image of the line is the straight line segment PN, and the image of its plane the segment PP' (see fig. [anterior_lines]). (Fig. [anterior_lines]: one arc is the image of a frontal line; another arc is the image of the plane of a receding line, the sub-arc from P to V being the image of the line itself; the line segment PN is the image of a central line, and the image of its plane proceeds until the image of the antipode of P.) This ends the review of (hemi)spherical perspective as presented by Barre and Flocon. Outside of the anterior disc the images of lines are no longer well approximated by circles, and that is perhaps why spherical perspective was kept limited to the anterior 180 degrees in its original formulation. As it turns out, however, the generalization is easily constructed both with ruler and compass and in freehand drawing, by approximating the images of great circles on the posterior hemisphere by "fat lines" consisting of segments of circles, in a reasonably easy construction.
Let l be a spatial line. Together with O, it defines a plane through the origin, and this plane defines a great circle. We wish to plot, to a good approximation, the full perspective image of this great circle; the image of l will be contained in it and lie between its vanishing points. What we want is to use the arc-of-circle approximation obtained in the last section for the anterior region and use it to obtain a full plot of the great circle. The key lies in plotting antipodal points. Proposition [ruler]: let P, not on the central axis, be a point in space with antipode point P̄, and let p and p̄ be their perspective images. Then p̄ is the point on the line through p and the center N such that the distance from p to p̄ equals the radius of the perspective disc. Proof: the plane through O, P and P̄ defines the single great circle through their directions. It is the union of two diametrically opposite N-meridians, m and m'; if P projects onto m then P̄ projects onto m'. The images of m and m' are diametrically opposite measuring lines that together form a diameter of the perspective disc, with p on one measuring line and p̄ on the other. Since the two angular distances from N add up to the length of a meridian, and distances are preserved along measuring lines, the distance from p to p̄ is the length of a measuring line, i.e., a radius of the perspective disc. This proposition allows us to easily plot the antipode of an already plotted point: draw the line from p through N, open the compass to the radius of the perspective disc, and intersect the circle centered at p with that line to find p̄. Or, if using a marked ruler, pass the ruler through p and N with the zero marker at p, and plot the point at the mark of the length of the radius. For the purposes of freehand drawing it is often useful, when plotting points nearer to the equator than to N, to use instead the following result. Proposition [freehand]: let P, P̄, p, p̄ be as above, and let e be the intersection of the measuring line through p with the blow-up of S. Then the distance from p̄ to N equals the distance from p to e; also, the distance from p̄ to e', the point of the blow-up circle diametrically opposite e, equals the distance from p to N. Proof: the great circle through the directions of P and P̄ projects onto a diameter of the perspective disc, on which we have the cyclic order of the points e', p̄, N, p, e. The perspective map is continuous outside of S and preserves this order; because distances are preserved along measuring lines, and the angular distances of p and p̄ from N are complementary, the two stated equalities follow. The practical interest of this proposition lies in the fact that, for freehand plotting in the full spherical perspective, it is often easier to transport the shorter measurement by eye than to transport the radius of the disc without an actual compass or ruler. But, having a compass at hand, or a marked ruler, the use of proposition [ruler] makes for very efficient plotting of antipodes. This allows us to plot the image of a great circle's posterior meridian from the image of its anterior meridian.
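Proposition [ruler] in coordinates, as a sketch (ours; disc radius π for r = 1, and reusing `spherical_perspective` from the earlier sketch):

```python
import numpy as np

DISC_RADIUS = np.pi          # perspective disc radius for a unit sphere

def antipode_image(q):
    """Disc image of the antipode of the point imaged at q (0 < |q| < pi):
    same diameter, opposite side of the center, |q| + |antipode| = pi."""
    q = np.asarray(q, float)
    t = np.linalg.norm(q)
    return (t - DISC_RADIUS) * q / t

# Consistency check against the perspective map itself:
p = np.array([1.0, -2.0, 0.5])
print(np.allclose(antipode_image(spherical_perspective(p)),
                  spherical_perspective(-p)))   # True
```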
Construction [makefatline] (fat lines): let γ be the perspective image of the anterior meridian of a great circle on the sphere. To obtain an approximation of the posterior image, trace an arbitrary number of measuring lines across the disc, intersect each of them with γ to get points p₁, p₂, ..., and use proposition [ruler] to obtain the antipodes p̄₁, p̄₂, .... Through each successive three of these points we trace an arc of circle, thus getting overlapping arcs (through p̄₁p̄₂p̄₃, p̄₂p̄₃p̄₄, etc.). These overlapping arcs form a fat line that approximates the posterior image. The degree to which successive arcs fail to exactly overlap (how "fat" the envelope of these arcs is) indicates the amount of error in the approximation and the need to increase the number of measuring lines (see fig. [fatlinesexample]). The practical draughtsman, armed with a marked ruler, will follow this procedure: stick a nail at the center of the perspective disc and, sliding the ruler against the nail so that it always touches N, make its zero mark slide along the curve γ, plotting points at the mark of the length of the radius of the disc at desired intervals. With this procedure a great number of points can be marked very quickly, to the point where the antipodal curve can be interpolated by hand with good precision. We are now ready to plot arbitrary lines in the full perspective; we have the following cases. Let l be a line in a frontal posterior plane, and let the plane defined by l and O have great circle C. Suppose l is not vertical; then it crosses the sagittal plane at a point P. The image of P lies on the vertical measuring line, above or below the observer according to the position of P. The antipode of P maps to the point on the same axis at a distance from P equal to the radius of the perspective disc; this point lies in the inner disc corresponding to the anterior half-space. Taking this antipodal point and the two vanishing points at the equator, we plot the approximation of the anterior meridian of C by an arc of circle through these three points, by the method described above; let this arc be γ. To obtain an approximation of the posterior image of C we now trace an arbitrary number of measuring lines through the disc and follow the procedure of construction [makefatline] to trace a fat line from the antipode points of γ. Thus we obtain the full image of C; to obtain the image of l itself, simply discard the anterior half (see fig. [posteriorlinesfig]). We can now locate an arbitrary point in the posterior half-space: pass vertical and horizontal lines through it, plot them according to the procedure just described, and intersect their images. Let now l be a line that crosses the observer's plane at a single point, and let the plane defined by l and O have great circle C. Displacing l to the origin we obtain two vanishing points: one on the anterior hemisphere, call it V, and its antipode V̄ on the posterior hemisphere. Plot V according to the methods of Barre and Flocon, then use proposition [ruler] to plot V̄; let their images be v and v̄. Through construction [on_equator] we obtain the points p and p̄, the perspective images of the point where l crosses the observer's plane and of its antipode, both on the equator. Trace the arc of circle through p, v and p̄, and from that arc use construction [makefatline] to trace the fat line of its antipodal arc. This plots the full image of the great circle C; to get the image of l, discard the arc delimited by the vanishing points that does not correspond to l (see fig. [posteriorlinesfig]).
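Construction [makefatline] translates directly into a sampling procedure; a sketch (ours), reusing `antipode_image` from above — sample the anterior arc, map each sample to its antipode, and interpolate:

```python
import numpy as np

def posterior_from_anterior(center, radius, ang0, ang1, n=64):
    """Sample an anterior arc (circle with given center/radius, between
    angles ang0..ang1) and map each sample through antipode_image; the
    returned points trace the posterior meridian ('fat line' samples)."""
    angles = np.linspace(ang0, ang1, n)
    arc = np.stack([center[0] + radius * np.cos(angles),
                    center[1] + radius * np.sin(angles)], axis=1)
    return np.array([antipode_image(q) for q in arc])
```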
In the particular case in which l is a central line, its great circle is an N-meridian, hence its image will be a diameter of the disc. One of the vanishing points will be at N and the other will be split into two diametrically opposite points on the blow-up circle, each at the longitude of one of the two meridians from which S can be approached along the great circle. Let p be the image at the equator of the point where l crosses the observer's plane, obtained by construction [on_equator]; p lies on one of the two measuring lines, and the image of l is the measuring line that contains p. Put another way, the image of a central line is the measuring line that contains the image of its crossing with the observer's plane (see fig. [posteriorlinesfig]). What we have just learned is enough to solve a scene when we have the cartesian coordinates of its points — for instance when drawing from an architectural plan. When drawing from observation, however, the artist is not in the position of the architect but in that of the astronomer: what he measures are the angles subtended by objects. We have already seen the natural spherical coordinates for this perspective (the angles θ and λ defined above), and it is possible to construct a simple device to measure such angles directly, but the more habitual pair of angles is the horizontal angle together with the angular elevation, defined thus: the horizontal angle is the angle between the central ray and the orthogonal projection of the direction of sight onto the horizontal plane, and the angular elevation is the angle between the direction of sight and that same horizontal projection. These are the angles one would measure with a standard theodolite. Lines of constant horizontal angle are the images of vertical lines, and we already know how to plot them. Lines of constant angular elevation are circles on the anamorphic sphere obtained by intersection with horizontal planes; for short we will call these circles and their images parallels. In the anterior hemisphere, following Barre and Flocon, we approximate parallels by arcs of circle and plot them in the following way. Let c be a parallel on the anamorphic sphere, of constant angular elevation. It intersects the equator at two points, on the west and east sides of the sagittal plane respectively, and intersects the sagittal plane at one point. The equator crossings are mapped to the equator of the disc, each at an angular distance from the horizontal measuring line equal to the elevation, and the sagittal crossing is mapped onto the vertical measuring line at the complementary distance from N. We draw the arc of circle through these three images and take it as an approximation of the image of the parallel. To plot the posterior part of the parallel we make use of the following proposition. Proposition [plotparallels]: let c be a parallel on the anamorphic sphere, let p be a point of c with perspective image q, let e be the intersection with the image of the equator of the measuring line through q, and let q* be the point such that e is the middle point of the segment from q to q*. Then q* is the perspective image of a point of c. Proof: parallels and N-meridians are both invariant with respect to reflection in the observer's plane (because so are their defining planes and the sphere itself, and hence their intersections). Then the intersection set of a parallel with a meridian is itself invariant under this reflection, and is made up of no points at all or of two symmetric points. Let p be a point on the parallel c. There is a single N-meridian through p, and this meridian must cross the parallel at another point.
By mirror symmetry, if e is the point where the meridian crosses the equator, the two crossing points lie at equal angular distances on either side of e. Since the meridian is an N-meridian, its image is a measuring line through the center, and the images of the two crossings lie at angular distances θ and π − θ from N, so the image of e, at distance π/2, is the middle point of the segment joining them. To plot the posterior half of a parallel, plot first the anterior half as an arc of circle γ, then plot a set of measuring lines, intersect them with γ at points q₁, q₂, ..., find the reflected points q₁*, q₂*, ... from proposition [plotparallels], and trace a fat line through them. Figure [elevationfig].a) shows a computer plot of a uniform grid of parallels and verticals calculated directly from map [exactxyz]; figure [elevationfig].b) shows the approximation of the parallels of elevation 10, 45, 80, and 85 degrees plotted by the method above. It is quite evident that near the equator the approximation is not very good: the curves are not smooth at the transition from the anterior to the posterior hemisphere. This is an artefact of the approximations, as it is easy to see from equation [exactxyz] that the perspective images of constant-elevation curves are differentiable. The error comes not from proposition [plotparallels], which is exact, but from the initial approximation of the parallel by an arc of circle inside the anterior disc. Near the equator the draughtsman might do well to avoid plotting vanishing points from parallels and verticals and use horizontals and verticals instead.
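In coordinates, proposition [plotparallels] says the two images on a given measuring line lie at distances θ and π − θ from the center; a sketch (ours):

```python
import numpy as np

def parallel_mirror(q):
    """Given the disc image q of a point on a parallel (constant elevation),
    return the image of the second crossing of its measuring line with the
    same parallel: same ray from the center, distance pi - |q|."""
    q = np.asarray(q, float)
    t = np.linalg.norm(q)
    return (np.pi - t) * q / t

# The image of the equator (|q| = pi/2) is the midpoint of q and its mirror:
q = np.array([0.3, 0.8])
print(np.allclose((q + parallel_mirror(q)) / 2,
                  (np.pi / 2) * q / np.linalg.norm(q)))  # True
```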
As is well known in classical perspective drawing, as long as we can plot a grid of squares we can plot any object to any required precision, by caging it inside a fine enough grid and interpolating through the intermediate points. We therefore concern ourselves with the basic examples of grid construction. In fig. [onepoint360] we build the image of a central perspective grid, i.e., we consider the perspective image of a horizontal grid of squares with one axis perpendicular to the observer's plane. We assume for simplicity that a vertex is directly under the observer, hence one axis lies directly under the central axis of sight. The plane of perspective does triple duty here, serving also to represent a top orthogonal view of the scene and a back view of the observer's plane. We take the sphere's arbitrary radius equal to the height of the observer relative to the ground plane, so that a horizontal line through the nadir point coincides with an axis of the grid and will be made to represent both the ground plane in the back view and the observer's plane in the top view. In the top view the receding lines of the grid intersect the ground line of the observer's plane at uniformly spaced points. The perspective of each such point is obtained by intersecting the line that joins it to O with the equator. The same straight line is the perspective image of the central receding line of the grid that crosses that point; thus the image of the receding lines of the grid is a set of straight lines going from N to the blow-up and passing through these equator points. Note that this is exactly the same construction as in classical perspective, though with a different interpretation. To plot the frontal lines we first trace a line on the plane of the grid, such that it makes a 45-degree angle to the right of the observer and crosses the nadir point. This line crosses each row of squares, touching a vertex of each row. We plot the great circle of the plane defined by this line and O: first we plot the anterior part by drawing the arc through the 45-degree vanishing point on the horizontal axis. Then we note that each of the receding lines crosses a vertex of the grid where it touches the 45-degree line. To find these points we intersect the image of each receding line with the upper half of the 45-degree circle's image on the anterior disc and take the antipodal point of that intersection; since this point is in the posterior space and belongs both to the receding line and to the 45-degree line, it is the required vertex. We can then plot a posterior frontal line of the grid through it. Note that we have used the receding lines of the grid as the natural measuring lines to construct the posterior fat line of the 45-degree circle. In this fashion we can plot the full 360-degree grid to any required precision and extension. In fig. [room45] we represent a tiled cubic room drawn from the point of view of an observer at its center, looking straight into the center of one of the walls. The whole setup is drawn very simply from a judicious use of vertical and horizontal frontal lines at 45 degrees to the observer; these lines do double duty, as, for instance, the frontal vertical at 45 degrees to the right of the observer has the same projection as the line on the plane of the horizon that goes from the nadir to the 45-degree mark on the measuring line. Often we will want our grids to be oriented at some arbitrary angle to the ray of sight. In fig. [arbitraryback] we represent a square on a horizontal plane, below, behind, and to the left of the observer, such that one side of the square makes a 60-degree angle with the ray of sight. Once again the plane of perspective does triple duty, serving also to represent a top orthogonal view of the scene and a back view of the observer's plane, and we take the sphere's radius equal to the height of the observer relative to the ground plane, so that a horizontal line through the nadir represents both the ground plane in the back view and the observer's plane in the top view. In this top view we draw the square and project its sides until they intersect the line of the observer's plane. From the back view we can now trace lines to these intersection points and find their projections on the equator. From these projections and the vanishing points we can find the arcs of circle representing the lines that extend the sides of the square (note that the vanishing points are all on the horizontal measuring line, one set of lines converging to one antipodal pair and the other set to the other). From the arcs of the anterior perspective we obtain the corresponding fat lines of the posterior perspective, and by intersecting these lines we find the perspective images of the vertices of the square. From this square we can plot a grid by an adaptation of the previous methods.
It is apparent from the plot of the 45-degree room in fig. [room45] that our perspective bears a striking resemblance to a reflection on a sphere. It is natural to ask if it is indeed a reflection. Recall how a reflection works (see fig. [obstruction]): an observer will see a point P reflected at a point of the sphere according to these rules: the reflection point lies on the intersection of the sphere with the plane through the observer, P and the center, and the angle of incidence equals the angle of reflection. We notice several difficulties. Given the reflection point, it is easy to find the incident and reflected rays, but the inverse problem of obtaining the reflection point from P is non-trivial; in general it requires solving an algebraic equation of order four. Also, obstructions are non-trivial: in fig. [obstruction] we can see that two points may have the same reflection even though they are not on the same ray from either the center or the observer. By contrast, obstructions are always radial for perspectives, being fully determined before the flattening, at the step of the anamorphosis. Finally, our spherical perspective has an angle of view of 360 degrees, while the angle of view captured by a reflection depends on the distance of the observer to the sphere. The points of the sphere hidden from the observer define a cone with the observer at the vertex, the cone of shadow, and every point outside of this cone of shadow will be viewable on the sphere. In angular terms, the field of view will be 2(π − α), with sin α = R/d, where R is the radius of the sphere and d the distance of the observer from its center. There is, however, a limiting case wherein these problems disappear. There are two ways of making α go to zero: you can move away from the sphere (and look at it through a telescope to compensate) or stay put and shrink it (and look at it with a microscope), or some combination of the two that makes R negligibly small compared to d. Either way will, in the limit, make for a 360-degree angle of view (the first option will leave a tubular shadow of finite section, however, while the second will reduce the shadow to a ray). In any case, in the limit, the rays coming from the observer to the sphere become parallel to the axis joining the observer to the center, and the angle of reflection becomes equal to the angle between the surface normal and that axis; hence a point of the sphere whose normal makes an angle with the axis reflects the direction at twice that angle (see fig. [parallelreflection]). Still, the position of the reflection is determined not by the direction of P from the center but by its direction from the reflection point itself (the angle of fig. [obstruction]). We can, however, make these angles equal either by restricting the reflection to objects at infinite distance (say, plotting a reflection of the celestial sphere) or by making R go to zero. When R goes to zero, the two angles coincide and the projection becomes radial (therefore making obstructions trivial), and the whole space of directions is mapped onto the hemisphere visible from the observer. This can be seen as a sphere anamorphosis followed by a linear contraction onto a hemisphere that halves the angle of each point on the sphere. Seen from the observer, since all rays are parallel to the axis, the reflection will look like the orthogonal projection along the axis of the image on the sphere. Hence the reflection, seen from the observer, is anamorphically equivalent to a perspective obtained by the trivial anamorphism onto the sphere composed with a flattening which is the composition of a linear compression onto a hemisphere followed by an orthogonal projection. In the spherical coordinates of [natcoords] (with θ now measured from the point of the sphere facing the observer, and rescaling the sphere to r = 1), this perspective is the map

\[ (\theta,\lambda) \;\mapsto\; \sin\!\big(\tfrac{\theta}{2}\big)\,(\cos\lambda,\,\sin\lambda), \]

where the first step is the trivial anamorphosis, the second the crunching into the hemisphere (θ ↦ θ/2), and the last the orthogonal projection.
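A sketch comparing the two flattenings in the same coordinates (names ours; both images are bounded discs): the spherical perspective is linear in θ along measuring lines, while the limiting reflection compresses wide angles through sin(θ/2).

```python
import numpy as np

def flatten_spherical(theta, lam):
    """Our 360-degree spherical perspective: radius theta (disc of radius pi)."""
    return theta * np.array([np.cos(lam), np.sin(lam)])

def flatten_reflection_limit(theta, lam):
    """Limiting reflection on a vanishing sphere: radius sin(theta/2) (unit disc)."""
    return np.sin(theta / 2) * np.array([np.cos(lam), np.sin(lam)])

# Equal steps in theta map to equal steps on a measuring line in the first,
# but to ever-smaller steps near the rim (theta -> pi) in the second:
for th in (0.5, 1.5, 2.5, 3.0):
    r1 = np.linalg.norm(flatten_spherical(th, 0.0))
    r2 = np.linalg.norm(flatten_reflection_limit(th, 0.0))
    print(f"theta={th:.1f}  spherical r={r1:.2f}  reflection r={r2:.2f}")
```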
This is a 360-degree perspective, but different from our spherical perspective. It is not linear along measuring lines, squashing the outer angles more, and it cannot easily be used for drawing by hand without the help of pre-computed grids (since we lose the isometry along measuring lines). But we can see why there is a qualitative similarity between the two. (Fig. [parallelreflection]: as the observer recedes, the reflected angle becomes equal to the angle at the center; if P goes to infinity the two coincide.) It has been noted that reflections on a sphere could be used as a form of wide-angle "perspective." This is well inspired by art history: reflections drawn from observation have been the time-honoured tool of the artist to represent a wide angle of view, Escher's self-portrait in a reflecting sphere being a well-known example. We have already seen the difficulties in this approach. First, reflections are hard to calculate. Second, they are not perspectives in our sense of the word (i.e., "radial" perspectives), and they have non-trivial obstructions; as noted above, this causes difficulties for hidden-face removal algorithms. If the purpose is to represent a wide-angle view, spherical perspective is a much more natural proposal. It allows for (up to) a 360-degree view; it is a perspective in the sense we defined above and therefore, like all such perspectives, has trivial obstructions, with hidden-face algorithms working exactly as in the classical case, being computed at the anamorphosis step. Furthermore, spherical perspective is easy to calculate by map [exactxyz] and, unlike the limiting case of reflections mentioned above, it can actually be used by an artist armed only with ruler and compass or even, after some practice, in freehand drawing from nature. Note: further notes, computer code and illustrations will be made available at the author's page: http://www.univ-ab.pt/~aaraujo/full360.html

This work was supported by Fundação para a Ciência e a Tecnologia (FCT), project UID/MAT/04561/2013.

Trapp, M. and Döllner, J. (2008), A generalization approach for 3D viewing deformations of single-center projections, in Proceedings of the International Conference on Computer Graphics Theory and Applications (GRAPP), 162-170. http://www.hpi.unipotsdam.de/fileadmin/hpi/fgdoellner/publications/2008/td08/paper107nonplanarprojectiontrappdoellner.pdf
we describe a general setup for anamorphosis and perspective , and then obtain a ruler and compass construction for the 360 degree spherical perspective . we consider its uses in freehand drawing and computer visualization , and its relation to reflections on a sphere .
diffusion strategies - were first invented to solve distributed estimation problems in real - time environments where data are continuously streamed . here , all nodes employ adaptive filter algorithms to process the streaming data , and simultaneously share their instantaneous estimates with their neighbors . these approaches are also very useful for modeling many self - organizing systems . recently , in - , diffusion lms schemes have been used to estimate sparse vectors , or equivalently , to identify fir systems that have most of the impulse response coefficients either zero or negligibly small . in these papers , certain sparsity promoting norms of the filter coefficient vectors have been used to regularize the standard lms cost function , prominent amongst them being the norm of the coefficient vector that leads to the sparsity aware , zero attracting lms ( za - lms ) - form of weight adaptation . these diffusion sparse lms algorithms manifest superior performance in terms of a lower steady state network mean square deviation ( nmsd ) compared with the simple diffusion lms .

in this paper , we show that the minimum level of the steady state nmsd achieved using the za - lms based update at _ all _ the nodes of the network can also be obtained by a _ heterogeneous _ network with only a fraction of the nodes using the za - lms update rule ( referred to as sparsity aware nodes in this paper ) while the rest employ the standard lms update ( referred to as sparsity agnostic nodes in this paper ) , provided the nodes using the za - lms are distributed over the network maintaining some `` uniformity '' . note that a reduction in the number of sparsity aware nodes reduces the overall computational burden of the network , especially when more complicated sparsity aware algorithms involving a significant amount of computation are deployed to exploit sparsity . as shown in this paper , the only adjustment to be made to achieve the above reduction in the number of sparsity aware nodes is a proportional increase in the value of the optimum zero attracting coefficient . analytical expressions explaining the above behavior are provided and the claims made are validated via detailed simulation studies . finally , the proposed analysis , though restricted to the -norm regularized algorithm ( i.e. , za - lms ) only , can be trivially extended to the case of more general norms and thus similar behavior can also be expected from the corresponding heterogeneous networks .

we consider a connected network consisting of nodes that are spatially distributed . at every time index , each node collects some scalar measurement and some vector which are related by the following model : where is the measurement noise at the node and is the unknown vector , known a priori to be sparse , which is required to be estimated . both and are variates generated from some gaussian distributions , with and being mutually independent for all . in the diffusion scheme , every -th node deploys an adaptive filter to estimate , which takes and respectively as the local desired response and input vectors . the estimates of , i.e. , for each , are exchanged with the neighbors of the -th node , i.e.
, nodes directly connected to it , and are used to refine the estimates in one of the two following manners : ( a ) adapt - then - combine ( atc ) , where is first updated to an intermediate estimate , which is then linearly combined with similar estimates received from the neighbors , and ( b ) combine - then - adapt ( cta ) , where is first linearly combined with similar estimates received from the neighbors and then updated . originally , the diffusion schemes were proposed assuming the lms form of weight adaptation at each node - . in the context of sparse estimation , certain sparsity exploiting norms of were added to the corresponding lms cost function - , the most popular of them being the norm penalty , which results in the introduction of the zero attracting terms .

before presenting the proposed heterogeneous network and its at - par behavior with the za - atc based diffusion network of , it will be useful to consider some of the major results of here . for this , we first define the average network mean - square deviation at the time index as , where is the individual mean - square deviation of the node at the time index , i.e. , \[ e\bigl [ \|\tilde{\bf w}_k(n)\|^{2 } \bigr ] , \] where is the weight deviation vector for the -th node at the -th index . the expression for the steady - state ( i.e. , ) of the za - atc algorithm was derived analytically in . however , considered a more general form of diffusion , in which both and are also exchanged with the neighbors along with the local estimates . in contrast , in this paper , we consider exchange of only , which is also the most common form of diffusion . additionally , we introduce a few more simplifications in . firstly , we assume the same step - size for all nodes . next , both the input signal and noise at each node are assumed to be spatially and temporally i.i.d . under these , it is easy to check that the expression for the za - atc algorithm simplifies to the following : \[ [ \,\cdot\ , ]^t ( { \bf i } - { \bf f } )^{-1 } { \bf q } + \frac{1}{n}\bigl ( \beta(\infty ) - \alpha(\infty ) \bigr ) , \tag{6 } \] with \[ \alpha(\infty ) = \rho\ , e\bigl [ \,\mathrm{sgn}[{\bf w}(\infty)]^t \ , { \boldsymbol \varomega } \ , { \bf c}{\bf c}^t ( { \bf i } - \mu{\bf d } ) \,\tilde{\bf w}(\infty ) \bigr ] \tag{7 } \] and \[ \beta(\infty ) = \rho^{2}\ , e\bigl [ \,\|\mathrm{sgn}[{\bf w}(\infty)]\|_{{\boldsymbol \varomega}{\bf c}{\bf c}^t{\boldsymbol \varomega}}^{2 } \bigr ] , \tag{8 } \] where is an operator that stacks the columns of its argument matrix on top of each other , , and , with denoting an operator that carries out stacking of its argument column vectors on top of each other , and and are the variances of the noise and input signal respectively . the matrices , , and are defined as follows : , , , and . [ here defines the right kronecker product ; also note that for a vector and a matrix , indicates . ]

it is noticed that the first term on the r.h.s . of ( 6 ) is actually the steady - state network msd of the simple atc diffusion lms and is independent of . let us denote the second term as , i.e. , . it is easy to see that one can express as , where \[ \alpha'(\infty ) = e\bigl [ \,\mathrm{sgn}[{\bf w}(\infty)]^t \ , { \bf c}{\bf c}^t ( { \bf i } - \mu{\bf d } ) \,\tilde{\bf w}(\infty ) \bigr ] \] and \[ \beta'(\infty ) = e\bigl [ \,\|\mathrm{sgn}[{\bf w}(\infty)]\|_{{\bf c}{\bf c}^t}^{2 } \bigr ] \;\ ; ( > 0 ) . \] the function has two zero - crossing points , one at and the other at , and between them takes only negative values , with the minimum occurring at , which , from ( 6 ) , also minimizes . for systems that are highly sparse , it follows from that , and conversely , for non - sparse systems , .
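to make the recursions concrete , the following is a minimal simulation sketch of the heterogeneous atc scheme discussed in this paper . it is our own illustration and not the authors ' code : the network size , the ring topology , the combiner matrix and all parameter values are hypothetical , and the za - lms zero attractor is applied only at the nodes flagged as sparsity aware .

```python
import numpy as np

rng = np.random.default_rng(0)

def atc_za_lms(w_o, C, aware, mu=0.01, rho=0.0, sigma_v=0.1, n_iter=5000):
    """Heterogeneous ATC diffusion: every node runs one LMS adaptation step
    on its local data; nodes flagged in `aware` also apply the ZA-LMS zero
    attractor -rho*sign(w); each node then combines the intermediate
    estimates of its neighbors with weights C[l, k].  Returns the network
    MSD, averaged over nodes, after n_iter iterations."""
    N, M = C.shape[0], w_o.size
    W = np.zeros((N, M))                       # one estimate per node
    for _ in range(n_iter):
        psi = np.empty_like(W)
        for k in range(N):
            u = rng.standard_normal(M)         # i.i.d. Gaussian regressor
            d = u @ w_o + sigma_v * rng.standard_normal()
            e = d - u @ W[k]
            psi[k] = W[k] + mu * e * u         # LMS adapt step
            if aware[k]:
                psi[k] -= rho * np.sign(W[k])  # zero attractor (ZA-LMS)
        W = C.T @ psi                          # combine: w_k = sum_l C[l,k] psi_l
    return float(np.mean(np.sum((W - w_o) ** 2, axis=1)))

# a small ring network with uniform combination weights (columns sum to 1)
N, M = 10, 16
A = np.eye(N) + np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
C = A / A.sum(axis=0)
w_o = np.zeros(M); w_o[3] = 1.0                # highly sparse target
aware = np.arange(N) % 2 == 0                  # every other node sparsity aware
print(atc_za_lms(w_o, C, aware, rho=5e-4))
```

sweeping rho in this sketch for several numbers of sparsity aware nodes can be used to generate msd - vs - rho curves analogous to those in fig . 1 below .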
since , for proper zero attraction , must be positive , the optimum value of is then given by ( 9 ) . the corresponding minimum value of ( when ) is then given by ( 10 ) .

* the proposed heterogeneous diffusion network : *

in this section , we show that the same level of as given by ( 10 ) , and therefore the same minimum steady - state nmsd , can also be achieved by a heterogeneous network in which only a subset of the nodes is sparsity aware , where . using this and the fact that , and modify to and , given as follows : \[ \alpha_s(\infty ) = \rho\ , e\bigl [ \,\mathrm{sgn}[{\bf w}(\infty)]^t \ , { \boldsymbol \varomega}_s \ , { \bf c}{\bf c}^{t } \,\tilde{\bf w}(\infty ) \bigr ] \tag{11 } \] and \[ \beta_s(\infty ) = \rho^{2}\ , e\bigl [ \,\mathrm{sgn}[{\bf w}(\infty)]^t \ , { \boldsymbol \varomega}_s \ , { \bf c}{\bf c}^{t } \ , { \boldsymbol \varomega}_s \,\mathrm{sgn}[{\bf w}(\infty ) ] \bigr ] . \tag{12 } \] note that unlike and , it is a lot more difficult to express and as a function of , since unlike , can not be written simply as . instead , one needs to analyze the r.h.s . of ( 11 ) and ( 12 ) to express and in terms of . towards this , we make the following assumptions :

it is then possible to prove the following : for a network satisfying the and as given above , we have \[ \alpha_s(\infty ) = [ \,\cdot\ , ] \ , n_s . \tag{13 } \] proof : skipped due to page limitation . for a network satisfying the , and as given above , we have \[ \beta_s(\infty ) = \frac{[\,\cdot\,]_{s}^{2}}{n } . \tag{14 } \] proof : skipped due to page limitation . substituting and in , then differentiating w.r.t . and equating the derivative to zero , we obtain \[ \rho_{\mathrm{opt } } = \frac{[\,\cdot\,]}{\mu \,\mathrm{tr}[{\boldsymbol \psi } ] \ , n_s } . \tag{15 } \] the corresponding minimum value of [ when , i.e. , the system is sparse ] , say , is given as \[ \zeta_{\min } = \frac{[\,\cdot\,]^{2}}{\mathrm{tr}[{\boldsymbol \psi } ] } . \tag{16 } \] note that as given in ( 16 ) is independent of . therefore , _ its value remains the same when , i.e. , when the network becomes homogeneous with all nodes being sparsity aware _ . this also implies that if as given by ( 10 ) is analyzed using the assumptions i and ii , it would give rise to the same expression as that of ( i.e. , ( 16 ) ) . from this and ( 15 ) , we then make the following two conclusions : ( i ) the minimum achievable steady - state nmsd given by ( 16 ) does not depend on the number of sparsity aware nodes , and ( ii ) since the optimum zero attracting coefficient in ( 15 ) varies inversely with the number of sparsity aware nodes , one can reduce the number of sparsity aware nodes by introducing a proportional increase in the value of .

to test the performance of the heterogeneous networks , we use a strongly connected network of nodes placed randomly in a geographic region . the weights of the edges are determined by the uniform combination rule . the goal of the network is to estimate a vector which is highly sparse ( only one coefficient being non - zero ) . we choose the same step - size for all the nodes . among these nodes , nodes use the za - lms update and the rest of the nodes use the simple lms update , with the former spaced uniformly ( i.e. , satisfying assumptions i.a and i.b ) over the network . the input signals and noise variables are drawn from gaussian distributions , and they are temporally and spatially independent . also , the input and noise statistics are the same for all the nodes , with , and . to start with , the value of is kept fixed at for all the sparsity aware nodes . the simulation is then carried out for iterations and the network steady state msd is evaluated by taking the ensemble average over independent runs . this is done for different values of ( ranging from to ) and , based on this , the network steady state msd is plotted as a function of . the value of is then increased progressively to take the following five values : , one at a time for all the za - lms based nodes . fig . 1 displays the network steady state msd vs.
plots with as a parameter . it is easily seen from fig . 1 that ( i ) the minimum reached by each msd - vs - plot is the same for all the plots , and ( ii ) as increases , the value of at which the minimum occurs reduces , and vice versa . in other words , fig . 1 validates the theoretical conjectures made in the previous section . ( a compact restatement of the scaling behind these observations is given after the references below . )

a. h. sayed , `` diffusion adaptation over networks , '' in _ e - reference signal processing _ , r. chellapa and s. theodoridis , eds . , amsterdam , the netherlands : elsevier , available online at http://arxiv.org/abs/1205.4220 , to be published .

y. gu , y. chen and a. o. hero , `` sparse lms for system identification , '' _ proc . ieee intl . conf . on acoustics , speech and signal processing ( icassp ) _ , taipei , taiwan , apr . 2009 .

k. shi and p. shi , `` convergence analysis of sparse lms algorithms with -norm penalty based on white input signal , '' _ signal process . _ , vol . 90 , no . 12 , pp . 3289 - 3293 , dec . 2010 .

bijit kumar das and m. chakraborty , `` sparse adaptive filtering by an adaptive convex combination of the lms and the za - lms algorithms , '' _ ieee trans . circuits syst . i , reg . papers _ , vol . 61 , no . 5 , pp . 1499 - 1507 , may 2014 .
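the two conclusions above can be condensed into a single relation . the display below is our own compact restatement rather than an equation from the paper : \( \rho_{\mathrm{opt}}(n_s) \) denotes the optimum zero attracting coefficient of a heterogeneous network with \( n_s \) sparsity aware nodes out of \( n \) , and the constant \( c \) stands for the elided numerator of ( 15 ) , which the analysis shows does not depend on \( n_s \) :

\[
\rho_{\mathrm{opt}}(n_s)\,n_s \;=\; \frac{c}{\mu\,\mathrm{tr}[{\boldsymbol \psi}]} \;=\; \text{const}
\qquad\Longrightarrow\qquad
\rho_{\mathrm{opt}}^{\mathrm{het}} \;=\; \frac{n}{n_s}\;\rho_{\mathrm{opt}}^{\mathrm{hom}} ,
\qquad
\zeta_{\min}\ \text{independent of } n_s .
\]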
in - network distributed estimation of sparse parameter vectors via diffusion lms strategies has been studied and investigated in recent years . in all the existing works , some convex regularization approach has been used at each node of the network in order to achieve an overall network performance superior to that of the simple diffusion lms , albeit at the cost of increased computational overhead . in this paper , we provide analytical as well as experimental results which show that the convex regularization can be selectively applied only to some chosen nodes , keeping the rest of the nodes sparsity agnostic , while still enjoying the same optimum behavior as can be realized by deploying the convex regularization at all the nodes . due to the incorporation of unregularized learning at a subset of nodes , less computational cost is needed in the proposed approach . we also provide a guideline for the selection of the sparsity aware nodes and a closed form expression for the optimum regularization parameter . * index terms * adaptive network , diffusion lms , sparse systems , excess mean square error , adaptive filter , norm .
the problem of pricing american and bermudan style options is of fundamental importance in option pricing theory . in the continuous time setting , mckean ( 1965 ) proposed an algorithm for pricing an american put option with an infinite maturity via ordinary differential equations and partial differential equations . further developments of this technique and applications to other american style securities are considered in peskir and shiryaev ( 2006 ) . another approach is pricing via monte - carlo simulations , as described by glasserman ( 2004 ) . one of the most difficult tasks in the theory of pricing american and bermudan options is the determination of an optimal stopping rule and the valuation of the option under such a rule . longstaff and schwartz ( 2001 ) proposed an algorithm for pricing american and bermudan style options via monte carlo simulations , least squares monte carlo or lsm . this technique is especially useful when we deal with multi - factor processes . in this case the methods based on binomial trees , trinomial trees , or partial differential equations become slow and thus inefficient due to the high dimensionality of the problem . as in the majority of numerical algorithms , the starting point of lsm for american options is the substitution of the continuous time interval with a discrete set of exercise dates . practically , by doing this we substitute the american option with a bermudan one . then for each exercise time ( except the first and the last one ) we project the value of continuation onto a set of basis functions via linear regression . clément , lamberton and protter ( 2002 ) investigated the convergence of the algorithm with the growth of the number of basis functions and of monte carlo simulations . under fairly general conditions they proved the almost sure convergence of the complete algorithm . also , they obtained the rate of convergence when the number of monte carlo simulations increases and showed that the normalized error of the algorithm is asymptotically gaussian . however , they considered a fixed partition of the time interval and thus , essentially , they discussed the properties of the bermudan , not the american , option . glasserman and yu ( 2004 ) investigated the behavior of lsm under the simultaneous growth of the number of basis functions and the number of monte - carlo simulations and estimated the rate of convergence in some more specific settings . moreno and navas ( 2001 ) considered lsm for different basis functions , namely power series and laguerre , legendre , and chebyshev polynomials , and deduced that the algorithm converges at least for american put options when the underlying problem has a small number of factors . stentoft ( 2004 ) obtained the rate of convergence of the algorithm in the two period multidimensional case .

in the present work we consider the stability of the lsm algorithm when the number of exercise dates increases in such a way that there are exercise dates close to the initial time , which we assume to be equal to zero without loss of generality . we prove that the algorithm is unstable when the time parameter is close to zero , because the underlying regression problem is ill - conditioned . the remainder of this work is organized as follows . in section [ description ] , we describe the algorithm .
in section [ ill - conditioning ] , we prove the main result , which is formulated in proposition [ mainprop ] : instability of the algorithm for small values of the time parameter due to the ill - conditioning of the corresponding matrix in the regression problem . in addition , we present the results of the numerical simulations that illustrate the assertions of proposition [ mainprop ] . in section [ conclusion ] , we give the concluding remarks .

assume that the stock price process satisfies a stochastic differential equation of the form ( [ sde ] ) , where is a brownian motion ( possibly multidimensional ) on a filtered probability space \( \left ( \omega , \mathcal{f } , \{\mathcal{f}_t\ } , \mathbb{p } \right ) \) ; see karatzas and shreve , chapter 5 , for a discussion of this topic . the time horizon is , which we assume to be a finite constant . usually equations of such a form are used to describe the evolution of stock prices in practice . let the payoff of an american option at the time of exercise be given by , where is the corresponding payoff function . then the value of the option is determined by the formula , where the supremum is taken over all stopping times with values in . to approximate numerically , let us conduct monte - carlo simulations of the process . first , we need to divide the time interval into subintervals of length , where , . thus at every moment we obtain realizations of the process , . second , for each simulation we compute the value of the option at time ( under the assumption that the option was not exercised before ) : discounting these values we get a cash flow vector , where is the discount factor . to obtain the value of the option at ( under the assumption that the option was not exercised before ) , we choose a hypothesis of linear regression and project the cash flow vector , for example , on a constant , and . according to , this is one of the simplest yet successful regression models . according to , a good alternative choice of basis functions can be hermite , laguerre , legendre , or chebyshev polynomials . if we use , and as the basis , the estimate of the conditional expectation becomes \[ e\bigl [ \,\cdot \mid x_{t_{m-1 } } \bigr ] = \alpha + \beta x_{t_{m-1 } } + \gamma x^2_{t_{m-1 } } , \] where , , are some constants . then along every path we compare the values of immediate exercise , , with the values of continuation that are obtained by substitution of into equation ( [ condexpreg ] ) . the bigger of the two gives , . if the value of immediate exercise is bigger we set . similarly we obtain , .

in order to compute the estimates of the value of the option at each ( under the assumption that it was not exercised before ) we solve the linear regression problem , where is an unknown vector of coefficients , the vector is determined by equation ( [ cashflowvector2 ] ) , and the matrix depends on the regression hypothesis and the outcome of the monte carlo simulations . assume that we have chosen continuous functions as the hypothesis . examples of such functions are power series and laguerre , legendre , or hermite polynomials . in this case has the following form , where is the number of simulations . we show that , if the underlying process is almost surely continuous , then for small the problem ( [ regression ] ) is ill - conditioned . for a matrix , the * condition number * in the norm is defined as \( \kappa(a ) = \sigma_{\max}(a ) / \sigma_{\min}(a ) \) , where and are the maximal and minimal singular values of the matrix respectively . a problem with a low condition number is called * well - conditioned * , while a problem with a high condition number is called * ill - conditioned * .
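before turning to the stability question , here is a minimal sketch of the algorithm just described , for a bermudan put under a lognormal ( black - scholes ) model . it is our own illustration rather than the paper 's code : all parameter values are hypothetical , and the regression basis is the constant , linear and quadratic hypothesis mentioned above .

```python
import numpy as np

def lsm_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=50, N=100_000, seed=0):
    """Least squares Monte Carlo price of a Bermudan put with M exercise dates.
    Continuation value regressed on the basis (1, S, S^2), using in-the-money
    paths only, as in Longstaff-Schwartz.  All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / M
    # simulate geometric Brownian motion paths (exact scheme)
    Z = rng.standard_normal((N, M))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    disc = np.exp(-r * dt)

    cash = np.maximum(K - S[:, -1], 0.0)       # exercise value at maturity
    for m in range(M - 2, -1, -1):             # backward induction over dates
        cash *= disc                           # discount one step back
        itm = K - S[:, m] > 0.0                # regress in-the-money paths only
        if not np.any(itm):
            continue
        x = S[itm, m]
        A = np.column_stack([np.ones_like(x), x, x**2])
        coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        continuation = A @ coef                # estimated value of continuation
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]           # exercise: replace path cash flow
    return disc * np.mean(cash)                # discount first date back to 0

print(f"LSM Bermudan put price: {lsm_put():.4f}")
```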
usually , problem ( [ regression ] ) is solved via one of the following methods : householder triangularization , gram - schmidt orthogonalization , singular value decomposition , or normal equations . let be the condition number of the matrix . if exists , then the exact solution to the least - squares problem is given by the vector , i.e. , it is the product of the left - inverse of the matrix and the vector . one can see that the solution to problem ( [ regression ] ) obtained via the normal equations is governed by , whereas the solution obtained via svd , householder or gram - schmidt is governed by . consequently , the normal equations are the least stable with respect to growth of the condition number . ( a small numerical illustration of this point is given after the concluding remarks below . ) nevertheless , the analytical solution to the least - squares problem is defined in terms of the normal equations . let denote the condition number of the matrix for .

( figures : the condition number as a function of ; lognormal process , paths , time steps , milstein discretization scheme . )

the matrix is defined by equation ( [ matrixa ] ) and is the condition number of . then \[ \mathbb{p}\bigl [ \ , \lim_{t \to 0 + } \kappa(t ) = \infty \ , \bigr ] = 1 . \] * proof . * consider equation ( [ matrixa ] ) . it follows from equation ( [ sde ] ) that all rows of are identical . consequently the rank of equals one . let us look at the following matrix : the matrix has two eigenvalues : is the first one ( with multiplicity ) , is the second one ( with multiplicity ) . thus has singular values and consequently . note that the matrix is real and symmetric , thus the eigenvalues of are real for all . since the eigenvalues are continuous functions of the components of the matrix , and the components in turn are a.s . continuous processes , we deduce that for small the first eigenvalue is in the neighborhood of , which is greater than zero by the assumption of the proposition , whereas all the other eigenvalues are in the neighborhood of . the conclusion follows from the continuity of the underlying process and the basis functions .

intuitively , proposition [ mainprop ] shows that for small values of the condition number is large .

we proved that for a continuous underlying stock price process , the lsm algorithm for pricing american options is unstable when the time parameter is small . an interesting question is to obtain an exact bound on the applicability of this algorithm . a possible criterion of applicability is the condition number of the matrix , . for example , if exceeds a certain value , one can treat ( [ regression ] ) as a rank deficient least squares problem ( see for details ) , or switch to another method : backward induction or the method introduced by mckean of option pricing via ordinary differential equations or partial differential equations considered on a smaller domain . for certain problems it is possible to obtain the desired accuracy using a relatively small number of time intervals ; then one does not have to solve the regression problem for small , and consequently the algorithm can be stable . also , if the underlying process is discontinuous with high probability , the algorithm can be stable even for small values of the time parameter .
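both the blow - up of the condition number for small times and the remark that the normal equations are governed by the square of the condition number can be checked numerically . the sketch below is our own ( all parameter values are hypothetical ) : it builds the matrix of basis values 1 , S , S^2 from simulated lognormal samples at a single early time , prints its condition number as that time shrinks , and then solves one least - squares problem both by an svd - based routine and by the normal equations .

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sigma, n = 100.0, 0.05, 0.2, 500

def basis_matrix(t: float) -> np.ndarray:
    """A = [1, S_t, S_t^2] from n lognormal samples at time t (all start at S0)."""
    Z = rng.standard_normal(n)
    S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)
    return np.column_stack([np.ones(n), S, S**2])

# the rows of A become nearly identical as t -> 0, so cond(A) blows up
for t in (1.0, 1e-2, 1e-4, 1e-6):
    print(f"t = {t:8.0e}: cond(A) = {np.linalg.cond(basis_matrix(t)):.3e}")

# solving the same problem two ways: error roughly ~ kappa vs ~ kappa^2
A = basis_matrix(1e-3)
coef_true = np.array([1.0, -2.0, 0.5])
b = A @ coef_true
x_svd = np.linalg.lstsq(A, b, rcond=None)[0]   # SVD-based least squares
x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
print(f"lstsq error     = {np.linalg.norm(x_svd - coef_true):.3e}")
print(f"normal-eq error = {np.linalg.norm(x_ne - coef_true):.3e}")
```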
consider least squares monte carlo ( lsm ) algorithm , which is proposed by longstaff and schwartz ( 2001 ) for pricing american style securities . this algorithm is based on the projection of the value of continuation onto a certain set of basis functions via the least squares problem . we analyze the stability of the algorithm when the number of exercise dates increases and prove that , if the underlying process for the stock price is continuous , then the regression problem is ill - conditioned for small values of the time parameter . _ * keywords : * _ option pricing , optimal stopping , american option , least squares monte carlo , monte carlo methods , stability , ill - conditioning .